CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based upon and claims the benefit of priority from prior Japanese Patent Application No. 2006-150540, filed May 30, 2006, the entire contents of which are incorporated herein by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a wireless communication apparatus.

2. Description of the Related Art

A wireless system uses a technique of transmitting a plurality of frames upon aggregating them to improve communication efficiency. For example, Task Group n (TGn), whose standardization has been under way in IEEE802.11, has proposed an aggregation technique of aggregating and transmitting a plurality of frames. This technique can reduce the overheads required at the time of transmission and reception, such as the physical (PHY) and MAC layer headers accompanying each frame and the interval between frames. On the other hand, as the frame length increases excessively, the wireless channel state estimated at the head of a frame differs from the state at the rear half portion of the frame, resulting in errors. That is, a proper frame length depends on the state of the wireless channel.

Link adaptation algorithms, which control a transmission rate in accordance with the wireless channel, have also been studied for existing IEEE802.11a/b/g. A conventional algorithm sets the initial value of a transmission rate to the minimum or maximum rate, and then starts control (see M. Lacage, M. H. Manshaei and T. Turletti, "IEEE 802.11 rate adaptation: a practical approach", Proc. of ACM MSWiM, 2004, and J. C. Bicket, "Bit-rate Selection in Wireless Networks", M.S. Thesis, MIT, 2005).

Consider admission control. According to conventional admission control, whether a new terminal can be accommodated is determined from the throughput (channel occupation ratio) at the current transmission rate of an existing terminal. In addition, JP-A 2003-251791 (KOKAI) discloses a method in which, in consideration of the application of a wireless LAN to home AV devices, a terminal sends all possible transmission rates to an AP in advance and then performs stable transmission/reception upon setting the transmission rate to a relatively low rate if the channel capacity is large enough. In this method, when assigning a channel to a new terminal, the AP determines whether to accept a request from the new terminal, assuming that transmission/reception is performed at the maximum transmission rate applied by an existing terminal at the time of association. In this case, the maximum value of a usable transmission rate changes depending on the wireless channel state. However, JP-A 2003-251791 (KOKAI) contains no description of how to obtain the maximum value of a transmission rate.

Assume that an initial transmission rate is to be determined at the start of communication, or that the maximum transmission rate of each terminal is to be obtained in admission control.
In this case, in consideration of the aggregation technique in IEEE802.11n, which aggregates a plurality of frames and transmits the aggregated frame, the throughput at a transmission rate x may become higher than that at a transmission rate y even if x is lower than y, depending on the number of frames to be aggregated. That is, in consideration of the aggregation technique, the assumption that the higher the transmission rate, the better does not always hold. It is therefore necessary to check, across all the transmission rates, the frame lengths (frame counts) to which frames can be aggregated at the respective transmission rates, and to compare them with each other.

As described above, according to IEEE802.11n, aggregation is performed to transmit an aggregated frame including a plurality of frames. In aggregation, a channel state is estimated from the preamble at the head of a frame, and the aggregated data frame following the preamble is decoded by using the estimated value. For this reason, the channel state corresponding to frames in the second half portion of the aggregated frame differs from the channel state at the time of estimation from the preamble, and hence errors generally tend to occur in the frames of the second half portion of the aggregated frame. The number of frames included in the aggregated frame depends on various factors, e.g., the transmission rate, the wireless channel state, and the type of decoding or channel tracking to be performed on the receiving side. Since, in general, the number of frames included in an aggregated frame at a high transmission rate is smaller than that at a low transmission rate, it is not necessarily appropriate to assume that the higher the transmission rate, the higher the throughput.

Under these circumstances, it is necessary to select, in consideration of both a transmission rate and the number of frames in an aggregated frame, a combination of transmission rate and frame count which can obtain a high throughput. However, no studies have been made on a technique for selecting both a transmission rate and the number of frames in an aggregated frame from the viewpoint of throughput.
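To make this trade-off concrete, the following back-of-the-envelope sketch (in Python, not part of the embodiments) computes throughput under a simplified airtime model. The preamble time, fixed overhead, frame size, and per-rate aggregation counts are all illustrative assumptions.

```python
# A minimal sketch, under assumed constants, of why a lower transmission rate
# with deeper aggregation can beat a higher rate with shallow aggregation.
PREAMBLE_US = 40.0     # PHY preamble/header airtime in microseconds (assumed)
OVERHEAD_US = 60.0     # inter-frame spacing + ACK exchange (assumed)
FRAME_BITS = 1024 * 8  # one fixed-length 1,024-byte frame

def throughput_mbps(rate_mbps, n_frames):
    """Payload bits delivered per microsecond of channel time = Mb/s."""
    airtime_us = PREAMBLE_US + OVERHEAD_US + n_frames * FRAME_BITS / rate_mbps
    return n_frames * FRAME_BITS / airtime_us

# At a high rate the channel estimate from the preamble goes stale sooner,
# so fewer frames survive aggregation (the counts 1 and 10 are assumed):
print(throughput_mbps(104.0, 1))   # ~45.8 Mb/s: high rate, one frame
print(throughput_mbps(52.0, 10))   # ~48.9 Mb/s: half the rate, ten frames
```

Here the per-transmission overhead is amortized over ten frames at 52 Mb/s but paid in full for the single frame at 104 Mb/s, so the lower rate wins; this is exactly why rates and frame counts must be compared jointly.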
BRIEF SUMMARY OF THE INVENTION

According to embodiments of the present invention, a wireless communication apparatus includes: a memory to store reference data including a plurality of transmission rates usable for transmission of an aggregated frame including a plurality of frames and a plurality of reference frame lengths of the aggregated frame corresponding to the transmission rates, respectively, wherein a throughput obtained with one of the transmission rates and the reference frame length which corresponds to the one of the transmission rates is higher than any throughput obtained with any frame length at another transmission rate lower than the one of the transmission rates; a selecting unit configured to select one or more trial transmission rates among the transmission rates and one or more trial frame lengths among the reference frame lengths and frame lengths other than the reference frame lengths; a transmission unit configured to transmit a trial aggregated frame with one of the trial transmission rates selected by the selecting unit, a frame length of the trial aggregated frame corresponding to one of the trial frame lengths selected by the selecting unit; a reception unit configured to receive a response corresponding to the transmitted trial aggregated frame; a determining unit configured to determine whether communication with the one of the trial transmission rates and the one of the trial frame lengths is possible, based on the received response; and a control unit configured to set the one of the trial transmission rates and the one of the trial frame lengths, with which the communication is determined to be possible, to initial/maximum values of a transmission rate for transmitting the aggregated frame and a frame length of the aggregated frame, respectively.

DETAILED DESCRIPTION OF THE INVENTION

The embodiments of the present invention will be described below with reference to the views of the accompanying drawing.

First Embodiment

A wireless communication apparatus in FIG. 1 includes at least an antenna 1, a wireless transmission unit 2, a wireless reception unit 3, a MAC protocol processing unit 4, a transmission control unit 5, a channel variation determining unit 6, a scheduling unit 7, and an admission control unit 8. The MAC protocol processing unit 4 includes a frame dividing unit 201, a frame processing unit 202, a frame generating unit 203, and a frame aggregation unit 204. The transmission control unit 5 includes a reference data storage unit 101, a selecting unit 102, and a link adaptation control unit 103.

An outline of the operation of the wireless communication apparatus in FIG. 1 at the time of signal transmission will be described first.

The admission control unit 8 performs a series of procedures for inquiring whether it is possible to communicate with a desired communication apparatus such as an access point (AP), e.g., notifying the communication apparatus of a usable transmission rate. If there is an available channel in the communication apparatus and communication can be performed with the communication apparatus, the scheduling unit 7 performs a series of procedures, e.g., assignment of the available channel, with the communication apparatus. When frames are to be transmitted to the communication apparatus after the above procedures, first of all, the frame generating unit 203 generates a frame including data output from the upper layer.
The frame aggregation unit 204 collects frames up to the number of frames which can be aggregated, on the basis of the frame length (or the frame count) notified from the selecting unit 102, and generates an aggregated frame including one or a plurality of frames. The generated aggregated frame is output to the wireless transmission unit 2 upon a series of access control operations. The wireless transmission unit 2 transmits the input aggregated frame through the antenna 1 upon performing processing such as coding, modulation, D/A conversion, and frequency conversion to a predetermined frequency.

An outline of the operation of the wireless communication apparatus in FIG. 1 at the time of signal reception will be described next. The wireless reception unit 3 performs processing such as frequency conversion to baseband, A/D conversion, demodulation, and decoding, and outputs the resultant reception data including the aggregated frame to the MAC protocol processing unit 4. The frame dividing unit 201 of the MAC protocol processing unit 4 extracts a data portion by removing a header portion from the input reception data (aggregated frame). The frame processing unit 202 performs a CRC check and retransmission processing by using the extracted data portion.

The received frame may be either a data frame transmitted from the communication partner of the wireless communication apparatus or a reception ACK/NAK response transmitted from the communication partner in response to a data frame transmitted from the wireless communication apparatus to the communication partner. For example, in a wireless LAN system, the receiving side which has received a data frame notifies the transmitting side of the success/failure of reception of the data frame by transmitting a response such as an ACK or a Block ACK. If the received data is a data frame, the frame processing unit 202 outputs the payload in the data frame to the upper layer. If the received data is a response (an ACK/NAK to each frame in the aggregated frame or a Block ACK to the overall aggregated frame) to a previously transmitted data frame, the frame processing unit 202 performs retransmission processing on the basis of the response, and notifies the channel variation determining unit 6 of the response.

The channel variation determining unit 6 calculates an error rate with respect to the overall aggregated frame or each frame in the aggregated frame by using responses such as an ACK/NAK to each frame in the aggregated frame or a Block ACK to the overall aggregated frame, and compares the calculated result with an error rate (threshold error rate) serving as a predetermined threshold, thereby determining whether it is possible to perform communication at the current transmission rate and with the current aggregated frame length. If, for example, the calculated error rate is less than the threshold error rate, the channel variation determining unit 6 determines that it is possible to perform communication at the current transmission rate and with the current aggregated frame length. If the calculated error rate is equal to or more than the threshold error rate, the channel variation determining unit 6 determines that it is impossible to perform communication at the current transmission rate and with the current aggregated frame length.

The error rate calculated by the channel variation determining unit 6 will be briefly described below. Assume that an aggregated frame includes 10 frames.
In this case, when ACKs are obtained for all the 10 transmitted frames, the error rate of the overall aggregated frame is 0%. If ACKs are obtained for only six frames out of the 10 frames, the error rate of the overall aggregated frame is 40%. As described above, the error rate of an overall aggregated frame can be obtained as the ratio of the number of frames for which no ACK response could be obtained to the number of frames in the transmitted aggregated frame. In addition, when an aggregated frame including 10 frames is transmitted a plurality of times, the error rate of the overall aggregated frame may be the average value of the error rates of the respective aggregated frames. Furthermore, when an aggregated frame including 10 frames is transmitted a plurality of times, the error rate of each frame in the aggregated frame can be obtained from the number of times no ACK response could be obtained for the frame, out of the number of times the aggregated frame was transmitted. The error rate of the aggregated frame may then be the average value of the error rates of the respective frames in the aggregated frame.

Note that the channel variation determining unit 6 may use the reception power value (RSSI) obtained by the wireless reception unit 3 at the time of signal reception, a channel estimation result, or the like instead of the above error rate to determine whether it is possible to perform communication at the current transmission rate and with the current aggregated frame length. For example, the channel variation determining unit 6 compares the RSSI or the channel estimation result with a predetermined threshold to determine whether it is possible to perform communication at the current transmission rate and with the current aggregated frame length. Alternatively, the channel variation determining unit 6 may determine whether it is possible to perform communication at the current transmission rate and with the current aggregated frame length on the basis of the time required to receive a response to a data frame after it is transmitted, or of whether a response is received within a predetermined time.

Assume in this case that the channel variation determining unit 6 compares the error rate calculated when a response to a transmitted aggregated frame is received with a predetermined threshold error rate to determine whether it is possible to perform communication at the current transmission rate and with the current aggregated frame length. If the calculated error rate is lower than the threshold error rate (or equal to or less than the threshold error rate), it is possible to perform communication at the current transmission rate and with the current aggregated frame length. If the calculated error rate is equal to or more than the threshold error rate (or higher than the threshold error rate), it is not appropriate to perform communication at the current transmission rate and with the current aggregated frame length. It is therefore necessary to adjust the transmission rate or the aggregated frame length.
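As an illustration only (the text prescribes the quantities, not any code), the error-rate computation described above might look as follows; the threshold value is an assumption.

```python
# Error rate of an aggregated frame from per-frame ACK results: the ratio of
# frames with no ACK to the frames transmitted, as described in the text.
def aggregate_error_rate(ack_results):
    """ack_results: list of booleans, one per frame in the aggregated frame."""
    missing = sum(1 for acked in ack_results if not acked)
    return missing / len(ack_results)

def mean_error_rate(runs):
    """Average over repeated transmissions of the same aggregated frame."""
    return sum(aggregate_error_rate(r) for r in runs) / len(runs)

# 10 frames transmitted, ACKs for only six of them -> 40%, as in the text.
acks = [True] * 6 + [False] * 4
assert aggregate_error_rate(acks) == 0.4

THRESHOLD_ERROR_RATE = 0.3  # illustrative threshold, not from the text
ok = aggregate_error_rate(acks) < THRESHOLD_ERROR_RATE  # False: adjust rate/length
```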
The selecting unit 102 selects initial values for a transmission rate and an aggregated frame length (count) which are suitable for communication. In particular, the selecting unit 102 sets a transmission rate and an aggregated frame length (count) at the start of communication and determines the maximum transmission rate required at the time of admission control.

It is conceivable to use the transmission rate and the aggregated frame count, which are determined as initial values/maximum values by the selecting unit 102, as initial values in link adaptation control by the link adaptation control unit 103, or as the maximum value of throughput of the wireless communication apparatus when admission control is performed between itself and an access point in a wireless LAN.

In general, the aggregated frame length decreases more at a high transmission rate than at a low transmission rate. For this reason, the selecting unit 102 selects a transmission rate and an aggregated frame length in consideration of throughput.

The processing operation of the selecting unit 102 will be described next.

FIG. 2 shows 12 transmission rates (Mb/s) on the abscissa, and 15 aggregated frame lengths on the ordinate. Each aggregated frame length is represented by the number of fixed-length frames each having 1,024 bytes. Note that a frame length is not limited to this, and may be represented, for example, every 1,000 bytes. In addition, referring to FIG. 2, throughputs are grouped every 10 Mb/s. Obviously, however, the manner of grouping is not limited to this, and throughputs may be grouped in arbitrary sizes.

Referring to FIG. 2, each bullet represents a reference aggregated frame length at the corresponding transmission rate. The reference aggregated frame length at each transmission rate represents the aggregated frame length with which that transmission rate yields a throughput higher than the highest throughput obtainable with any aggregated frame length at any lower transmission rate. Note that referring to FIG. 2, each reference aggregated frame length is represented by the number of frames included in an aggregated frame. Depending on the throughput characteristics with respect to the transmission rates and the aggregated frame lengths, reference aggregated frame lengths like those described above do not always exist for all the transmission rates in FIG. 2. For example, referring to FIG. 2, there is no such reference aggregated frame length for a transmission rate of 65 Mb/s.

If it is possible to perform communication with a reference aggregated frame length or more at a given transmission rate, a throughput higher than the throughput obtained with that transmission rate and the reference aggregated frame length cannot be obtained at a lower transmission rate, no matter how the aggregated frame length is increased. For example, referring to FIG. 2, when communication is performed at a transmission rate of 78 Mb/s and with a reference aggregated frame length of 5 frames, the throughput becomes 60 to 70 Mb/s. When communication is performed at a transmission rate of 65 Mb/s, lower than the above transmission rate by one level, the throughput becomes 50 to 60 Mb/s even if the aggregated frame length is 15 frames. That is, this throughput is lower than that obtained with the above transmission rate, 78 Mb/s, and the reference aggregated frame length. If the frame length of an aggregated frame which can be communicated at a given transmission rate is equal to or more than the reference aggregated frame length, the apparatus cannot obtain any throughput higher than the current throughput by performing communication at any transmission rate lower than the given transmission rate, and hence there is no need to perform any trial communication at the lower transmission rates.
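This definition can be restated computationally. The sketch below, reusing the assumed airtime model from the earlier example, derives a reference aggregated frame length per rate as the smallest frame count whose throughput exceeds the best throughput achievable with any frame count at any lower rate. The rate list and 15-frame cap mirror FIGS. 2 and 3, while the throughput function remains an assumption; in practice a measured characteristic would be used, and with one, some rates may have no reference length, as at 65 Mb/s in FIG. 2.

```python
RATES = [6.5, 13.0, 19.5, 26.0, 39.0, 52.0, 58.5, 65.0,
         78.0, 104.0, 117.0, 130.0]          # Mb/s, as in FIGS. 2 and 3
MAX_FRAMES = 15                              # maximum aggregated frame length

def throughput(rate, n):
    """Assumed airtime model; a real table would come from measurement."""
    preamble_us, overhead_us, frame_bits = 40.0, 60.0, 1024 * 8
    return n * frame_bits / (preamble_us + overhead_us + n * frame_bits / rate)

def reference_lengths():
    """Smallest frame count at each rate beating every lower rate's best."""
    refs, best_lower = {}, 0.0
    for rate in RATES:                       # ascending order
        refs[rate] = next((n for n in range(1, MAX_FRAMES + 1)
                           if throughput(rate, n) > best_lower), None)
        # In this monotone model, a rate's best throughput is at MAX_FRAMES:
        best_lower = max(best_lower, throughput(rate, MAX_FRAMES))
    return refs   # None would mean: no reference length exists at that rate
```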
Assume that the wireless communication apparatus in FIG. 1 performs trial communication at a plurality of different transmission rates which the apparatus has, starting from the highest transmission rate. In this case, if the apparatus can perform communication with a reference aggregated frame length at a given transmission rate, the apparatus sets the transmission rate and the reference aggregated frame length as initial values or maximum values for subsequent communication.

FIG. 3 shows a case wherein one frame is a fixed-length frame of 512 bytes, and the aggregated frame length is represented by the number of fixed-length frames each having 512 bytes. The maximum aggregated frame length is 15 frames. Like FIG. 2, FIG. 3 shows 12 transmission rates (Mb/s) on the abscissa, and 15 aggregated frame lengths on the ordinate. The throughputs are grouped every 10 Mb/s. In addition, each bullet represents a reference aggregated frame length at the corresponding transmission rate.

The frame length in the case shown in FIG. 3, in which one frame consists of 512 bytes, is half that in the case shown in FIG. 2, in which one frame consists of 1,024 bytes. Basically, therefore, the number of frames required to achieve a throughput at the same level as that in the case shown in FIG. 2 is double the number of frames therein, and hence the reference aggregated frame count is double that in the case shown in FIG. 2. However, in a region where the transmission rate is high, in particular, this relationship does not hold. This is because, according to the proposal from IEEE802.11 TGn, it is conceivable that when an aggregated frame including a plurality of frames is generated, some control is performed, e.g., inserting four-octet control data called a delimiter between frames.

In practice, therefore, when the frame length of one frame and the maximum value of an allowable aggregated frame length (count) are determined for a given application, it is necessary to obtain a reference aggregated frame length (count) for each transmission rate by calculating the throughput characteristic obtained by the combination of determined values, i.e., a characteristic like that shown in FIG. 2 or 3.

For example, as shown in FIGS. 2 and 3, the reference data storage unit 101 stores reference data indicating the throughput characteristics with respect to transmission rates and aggregated frame lengths and the reference aggregated frame lengths determined for the respective transmission rates.

Note that it suffices if the reference data stored in the reference data storage unit 101 include at least the reference aggregated frame lengths determined for the respective transmission rates in FIGS. 2 and 3. In addition, it suffices to set different reference aggregated frame lengths for the respective transmission rates, or to set one reference aggregated frame length for each group of a plurality of adjacent transmission rates.

It suffices to express the relationship between the respective transmission rates, reference aggregated frame lengths, and throughputs, like that shown in FIGS. 2 and 3, in the form of a table. For example, this relationship may be expressed by a table like that shown in FIG. 4. The control unit 10 stores such a table.

The relationship between transmission rates, aggregated frame lengths, and throughputs varies depending on the frame length of one frame.
When a plurality of applications in which the length of one frame varies operate on the wireless communication apparatus in FIG. 1, the reference data storage unit 101 may store a plurality of reference data for the respective frame lengths (or the respective applications), each indicating the relationship between transmission rates, aggregated frame lengths, and throughputs like those shown in FIGS. 2 to 4. The wireless communication apparatus in FIG. 1 then selectively uses the reference data in accordance with the application operating on the apparatus (or the frame length corresponding to the application).

The selecting unit 102 selects one of the plurality of transmission rates which the wireless communication apparatus in FIG. 1 has, and selects the frame length of an aggregated frame by referring to the reference data (e.g., the graphs of FIGS. 2 and 3 and the table of FIG. 4) stored in the reference data storage unit 101. The selecting unit 102 notifies the link adaptation control unit 103 and the frame aggregation unit 204 of the selected transmission rate and the selected aggregated frame length. The apparatus then starts trial communication at the selected transmission rate and with the selected aggregated frame length.

In this case, the wireless communication apparatus in FIG. 1 performs trial communication to determine whether it is possible to perform communication at the transmission rate and with the aggregated frame length which are selected by the selecting unit 102, in order to determine the initial values of a transmission rate and an aggregated frame length at the start of communication, the maximum values of a transmission rate and a frame length used for subsequent control of the transmission rate/frame length, or the maximum transmission rate notified to an AP in admission control (i.e., in order to set/re-set the initial values or maximum values of a transmission rate and an aggregated frame length).

If this trial communication result indicates that it is determined (by the channel variation determining unit 6) that it is possible to perform communication at the selected transmission rate and with the selected aggregated frame length, the apparatus determines the selected transmission rate and the selected aggregated frame length as initial values/maximum values, notifies the link adaptation control unit 103 of the transmission rate as an initial value/maximum value, and notifies the frame aggregation unit 204 of the aggregated frame length as an initial value.

The frame aggregation unit 204 generates an aggregated frame including a plurality of MAC frames generated by the frame generating unit 203 in accordance with the notified frame length, and outputs the resultant frame to the wireless transmission unit 2.

The link adaptation control unit 103 stores, in advance, a table indicating a plurality of transmission rates and the modulation schemes and error correction coding schemes which are set for the respective transmission rates. The link adaptation control unit 103 then performs conventional link adaptation control by using the transmission rate notified as an initial value/maximum value from the selecting unit 102 as an initial value/maximum value.
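For concreteness, such a table might be organized as below. The modulation/coding pairings are placeholders, not values given in the text; the conventional link adaptation control then proceeds as described next.

```python
# Hypothetical rate -> (modulation scheme, coding rate) table for unit 103.
MCS_TABLE = {
    13.0:  ("QPSK",   "1/2"),
    26.0:  ("16-QAM", "1/2"),
    52.0:  ("16-QAM", "3/4"),
    78.0:  ("64-QAM", "2/3"),
    104.0: ("64-QAM", "5/6"),
}

def schemes_for(rate_mbps):
    """Modulation and coding scheme notified to the wireless transmission
    unit 2 for the selected (trial) transmission rate."""
    return MCS_TABLE[rate_mbps]

modulation, coding = schemes_for(52.0)   # ("16-QAM", "3/4")
```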
That is, the link adaptation control unit 103 selects an optimal transmission rate from the plurality of transmission rates in the table in accordance with the wireless channel state, reads out the modulation scheme and error correction coding scheme corresponding to the transmission rate from the table, and notifies the wireless transmission unit 2 of the new transmission rate, the modulation scheme, and the error correction coding scheme.

In addition, at the time of trial communication, the link adaptation control unit 103 notifies the wireless transmission unit 2 of the modulation scheme and error correction coding scheme corresponding to the transmission rate notified from the selecting unit 102. Furthermore, the link adaptation control unit 103 sets the transmission rate notified as an initial value/maximum value from the selecting unit 102 as an initial value at the start of communication or as the maximum transmission rate notified to an AP in admission control.

The wireless transmission unit 2 codes and modulates the data of a frame (aggregated frame) input to the wireless transmission unit 2 in accordance with the modulation scheme and error correction coding scheme notified from the link adaptation control unit 103.

Processing operation for selecting a frame length and a transmission rate by the wireless communication apparatus in FIG. 1 (mainly the processing operation of the selecting unit 102) will be described next with reference to the flowchart shown in FIG. 5.

FIG. 5 illustrates a case wherein the selecting unit 102 sequentially selects a plurality of transmission rate candidates which the wireless communication apparatus in FIG. 1 has, starting from the highest transmission rate.

First of all, the selecting unit 102 obtains, as a plurality of transmission rate candidates, all or some of the usable transmission rates among the plurality of transmission rates stored in the table, held by the link adaptation control unit 103, indicating the predetermined modulation schemes and error correction coding schemes for the respective transmission rates (step S11). The selecting unit 102 then sequentially selects the obtained transmission rate candidates one by one in descending order (step S12).

If a reference aggregated frame length corresponding to the transmission rate selected in step S12 is set in the reference data stored in the reference data storage unit 101 (step S13), the selecting unit 102 notifies the link adaptation control unit 103 and the frame aggregation unit 204 of the selected transmission rate (trial transmission rate) and the corresponding reference aggregated frame length (trial frame length) to perform trial communication at the selected transmission rate and with the reference aggregated frame length (step S14).

At the time of trial communication, the frame generating unit 203 may generate a trial MAC frame containing arbitrary data or trial data. The frame aggregation unit 204 generates a trial aggregated frame (a trial frame) including a plurality of trial MAC frames generated by the frame generating unit 203 in accordance with the notified trial frame length, and outputs the resultant frame to the wireless transmission unit 2.

The link adaptation control unit 103 notifies the wireless transmission unit 2 of the modulation scheme and error correction coding scheme corresponding to the trial transmission rate notified from the selecting unit 102.
The wireless transmission unit 2 codes and modulates the data of the aggregated frame input to the wireless transmission unit 2 in accordance with the modulation scheme and error correction coding scheme notified from the link adaptation control unit 103. In this manner, at the time of trial communication, the apparatus transmits an aggregated frame once or a plurality of times, and receives a response such as an ACK corresponding to each frame in the aggregated frame, transmitted from the receiving side of the aggregated frame, or a Block ACK corresponding to the overall aggregated frame.

The channel variation determining unit 6 calculates an error rate with respect to the overall aggregated frame or each frame in the aggregated frame by using the response such as the ACK corresponding to each frame in the aggregated frame or the Block ACK corresponding to the overall aggregated frame, which is obtained by the frame processing unit 202, and compares the calculated result with the threshold error rate to determine whether it is possible to perform communication at the trial transmission rate and with the trial frame length. If the calculated error rate is less than the threshold error rate, the channel variation determining unit 6 determines that it is possible to perform communication at the trial transmission rate and with the trial frame length. If the calculated error rate is equal to or more than the threshold error rate, the channel variation determining unit 6 determines that it is impossible to perform communication at the trial transmission rate and with the trial frame length (step S15).

If it is determined in step S15 that it is possible to perform communication at the trial transmission rate and with the trial frame length, the selecting unit 102 determines the transmission rate and the corresponding reference frame length as initial values/maximum values, notifies the link adaptation control unit 103 of the transmission rate as an initial value/maximum value, and notifies the frame aggregation unit 204 of the reference aggregated frame length as an initial value (step S16).

The frame aggregation unit 204 generates an aggregated frame including the plurality of MAC frames generated by the frame generating unit 203 by using the notified frame length, and outputs the resultant frame to the wireless transmission unit 2. The link adaptation control unit 103 performs the above link adaptation control by using the notified transmission rate as an initial value or a maximum transmission rate.

If it is determined in step S13 that a reference aggregated frame length is not set for the transmission rate selected in step S12, or if it is determined in step S15 that it is impossible to perform communication at the trial transmission rate and with the trial frame length, the apparatus performs processing operation for selecting another transmission rate and another aggregated frame length. This processing operation will be described later.

Note that the plurality of transmission rate candidates obtained in step S11 may be all the transmission rates specified by the IEEE802.11 specification or the like, e.g., four transmission rates in IEEE802.11b or eight transmission rates in IEEE802.11a. In addition, the plurality of transmission rate candidates may be only those transmission rates, of the plurality of transmission rates specified by a specification, whose use is compulsory.
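Condensed into Python, one pass of the FIG. 5 flow (steps S11 to S16) reads as follows. `trial_communication` stands in for steps S14 and S15 (transmitting the trial aggregated frame and judging the responses against the threshold error rate); its name and the data shapes are assumptions, not interfaces from the text.

```python
def select_initial_values(candidates, reference_len, trial_communication):
    """One pass of FIG. 5.  candidates: usable rates (step S11);
    reference_len: rate -> reference aggregated frame length, or None,
    from the reference data of unit 101;
    trial_communication(rate, n): True if communication is judged possible."""
    rate = max(candidates)                 # step S12: highest candidate first
    ref = reference_len.get(rate)
    if ref is None:                        # step S13: no reference length set
        return None   # -> select another rate/length (fourth/fifth embodiment)
    if trial_communication(rate, ref):     # steps S14-S15
        return rate, ref                   # step S16: initial/maximum values
    return None       # -> select another rate/length (fourth/fifth embodiment)
```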
Alternatively, it suffices to select different transmission rates for the respective applications and use only the selected transmission rates as candidates. In either case, it suffices to perform trial transmission while changing the transmission rate on a frame basis without aggregating frames, and to determine the upper limit value of a transmission rate as a candidate on the basis of each transmission result (the presence/absence of an ACK response to each transmitted frame).

For example, in step S11, the selecting unit 102 sequentially selects transmission rates one by one, in descending order, from a plurality of transmission rates provided in advance or from some of those transmission rates. When transmitting one frame at the selected transmission rate and receiving an ACK response to the frame, the apparatus determines that it can perform communication at the transmission rate, i.e., can use the transmission rate. In step S12, the selecting unit 102 selects the highest transmission rate among the transmission rates which are determined to be usable.

Note that when performing trial communication in step S14, the apparatus may transmit data up to a reference aggregated frame length, or may transmit an aggregated frame having a frame length longer (larger) than the reference aggregated frame length (e.g., the number of frames obtained by adding a predetermined number of frames to the number of frames corresponding to the reference aggregated frame length).

As is obvious from FIGS. 2 and 3, if an aggregated frame length is larger than the reference aggregated frame length, the apparatus may obtain a throughput higher than that obtained when the frame length is equal to the reference aggregated frame length. Therefore, transmitting an aggregated frame having a frame length longer than the reference aggregated frame length makes it possible to determine whether it is possible to perform communication with a throughput higher than that obtained when the frame length is equal to the reference aggregated frame length.

As described above, if the apparatus performs trial communication by using an aggregated frame having a frame length longer than the reference aggregated frame length in step S14, and determines as a result that it can perform communication with the frame length, it suffices to set the frame length as an initial value/maximum value in step S16 in FIG. 5.

Assume that it is determined in step S15 that it is possible to perform communication at the trial transmission rate and with the trial frame length. In this case, if there is a frame length which can obtain a throughput higher than that obtained with the trial transmission rate and the reference frame length, the selecting unit 102 may perform trial communication again by using that frame length as a new trial frame length. If it is determined as a result of the trial communication that it is possible to perform communication with the new trial frame length, the new trial frame length is set as an initial value/maximum value.

Second Embodiment

Another processing operation for selecting a frame length and a transmission rate will be described next with reference to the flowchart shown in FIG. 6. Note that the same reference numerals as in FIG. 5 denote the same parts in FIG. 6, and only the different portions will be described below. That is, step S12′ in FIG. 6 replaces step S12 in FIG. 5, and steps S26 to S28 in FIG. 6 replace step S16 in FIG. 5.
In step S12 in FIG. 5, the selecting unit 102 selects the highest transmission rate among the plurality of transmission rate candidates. In contrast, in step S12′ in FIG. 6, the selecting unit 102 selects a transmission rate at which communication can actually be performed from the plurality of transmission rate candidates.

If, for example, the maximum transmission rate among the plurality of transmission rate candidates is 104 Mb/s, the selecting unit 102 may select a transmission rate of 52 Mb/s in step S12′. If the apparatus performs trial communication by using the transmission rate of 52 Mb/s and the corresponding reference aggregated frame length as a trial transmission rate and a trial frame length in step S14, and determines as a result that it can perform communication (steps S14 and S15), the process advances to step S26.

It is checked in step S26 whether there is another transmission rate which is higher than the current trial transmission rate, 52 Mb/s, and for which a reference aggregated frame length is set. If there is no such transmission rate, the process advances to step S28 to determine the current trial transmission rate and the corresponding reference aggregated frame length as initial values/maximum values.

If it is determined in step S26 that there is another transmission rate which is higher than the current trial transmission rate and for which a reference aggregated frame length is set, the process returns to step S14 to select that transmission rate and the corresponding reference aggregated frame length as a new trial transmission rate and a new trial frame length and perform trial communication. If it is determined as a result of the trial communication that it is possible to perform communication (steps S14 and S15), the process advances to step S26 again. The subsequent operation is the same as that described above.

If it is determined in step S13 that no reference aggregated frame length is set for the transmission rate selected in step S12′, or if it is determined in step S15 that it is impossible to perform communication at the trial transmission rate and with the trial frame length, the apparatus performs processing operation for selecting another transmission rate and another aggregated frame length. This processing operation will be described later.

A merit of the first embodiment is that since it searches the plurality of transmission rate candidates starting from the highest transmission rate, the processing is terminated as soon as it is possible to perform communication at a transmission rate and with the corresponding reference aggregated frame length. Unlike in the first embodiment, in the second embodiment, even if it is possible to perform communication at a given transmission rate and with the corresponding reference aggregated frame length, it is necessary to check the presence/absence of a transmission rate candidate higher than that transmission rate. However, if, for example, a log of past operation shows that the rate actually usable as a maximum or initial value lies near the transmission rate of 52 Mb/s and differs from the maximum candidate rate, using the second embodiment makes it unnecessary to uselessly start the search, on every operation, from the maximum transmission rate, which has the lowest possibility of success.
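The FIG. 6 variant can be sketched the same way: start from a rate already known to work (step S12′), and while trials keep succeeding, move to the next higher candidate that has a reference length (steps S26, S14, S15, S28). Helper names remain assumptions.

```python
def select_from_working_rate(start_rate, candidates,
                             reference_len, trial_communication):
    """FIG. 6 sketch: start_rate is a rate at which communication is known
    to be possible (step S12'), e.g. 52 Mb/s rather than the 104 Mb/s max."""
    best, rate = None, start_rate
    while rate is not None:
        ref = reference_len.get(rate)
        if ref is None or not trial_communication(rate, ref):
            break                 # S13/S15 failure: fall back to `best` so far
        best = (rate, ref)        # communicable; candidate for step S28
        higher = [r for r in candidates
                  if r > rate and reference_len.get(r) is not None]  # step S26
        rate = min(higher) if higher else None
    return best                   # step S28: initial/maximum values, or None
```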
As in the first embodiment, when the apparatus performs trial communication in step S14, it suffices to use a reference aggregated frame length as a trial frame length without any change, or to use as a trial frame length a frame length longer (larger) than the reference aggregated frame length (e.g., the number of frames obtained by adding a predetermined number of frames to the number of frames corresponding to the reference aggregated frame length).

Assume that the apparatus performs trial communication by using an aggregated frame length longer than the reference aggregated frame length as a trial frame length in step S14, and determines as a result that it can perform communication with the trial frame length. In this case, when the process advances from step S26 to step S28 in FIG. 6, it suffices to set this trial frame length as an initial value/maximum value in step S28.

In addition, as in the first embodiment, if it is determined in step S15 that it is possible to perform communication at a trial transmission rate and with a trial frame length, and there is a frame length with which the apparatus can obtain a throughput higher than that obtained with the trial transmission rate and the trial frame length, the selecting unit 102 may perform trial communication again by using that frame length as a new trial frame length. If it is determined as a result of the trial communication that it is possible to perform communication with the new trial frame length, the new trial frame length is set as an initial value/maximum value.

Third Embodiment

In the first and second embodiments, the reference data stored in the reference data storage unit 101 may include, together with the reference aggregated frame lengths, frame lengths other than the reference aggregated frame lengths and information indicating the corresponding throughputs, for the respective transmission rates, as shown in, for example, FIG. 7.

Referring to FIG. 7, each column corresponds to a transmission rate (Mb/s), and each row represents a corresponding throughput (Mb/s). The numerical value in each cell represents the number of frames in an aggregated frame. In the columns corresponding to the respective transmission rates, some cells have encircled numerical values. These encircled numerical values represent the reference aggregated frame lengths at the respective transmission rates.

As is obvious from FIG. 7, if the number of frames at a transmission rate of 52 Mb/s is "4", the obtained throughput is 40 to 50 Mb/s.

In this manner, it suffices to store, for the respective transmission rates, not only the relationship between the reference aggregated frame lengths and the corresponding throughputs but also the relationship between frame lengths other than the reference aggregated frame lengths and the corresponding throughputs, and, when using a transmission rate for which no reference aggregated frame length exists, to select in step S13 in FIGS. 5 and 6 an aggregated frame length with which the apparatus can obtain the maximum throughput at the transmission rate from the table in FIG. 7, instead of a reference aggregated frame length. In this case, it suffices to perform trial communication with the selected frame length in step S14.
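A FIG. 7-style table can be held as a nested mapping. The few entries below are those actually named in the text (52 Mb/s with four frames in the 40-50 Mb/s band; 104 Mb/s with counts 2, 3, 5, and 8; 78 Mb/s with its reference count of 6); everything else, including which count is the reference at 52 Mb/s, is a placeholder.

```python
# rate (Mb/s) -> {aggregated frame count: lower edge of throughput band (Mb/s)}
TABLE = {
    52.0:  {4: 40},
    78.0:  {6: 60},
    104.0: {2: 50, 3: 60, 5: 70, 8: 80},
}
REFERENCE = {78.0: 6, 104.0: 5}   # encircled counts; the 52 Mb/s entry is unknown

def best_length(rate):
    """For a rate with no reference aggregated frame length, pick the frame
    count giving the maximum tabulated throughput (used in step S13's place)."""
    lengths = TABLE[rate]
    return max(lengths, key=lengths.get)
```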
Depending on the transmission rate, increasing the aggregated frame length may make it possible to obtain a throughput higher than that obtained with the reference aggregated frame length. For example, referring to FIG. 7, at a transmission rate of 104 Mb/s, the throughput with the reference aggregated frame count of "5" is 70 to 80 Mb/s, whereas a higher throughput can be obtained if it is possible to perform communication with an aggregated frame count of "8".

In step S14 in FIG. 5 described in, for example, the first embodiment, performing trial communication at the selected transmission rate and with a frame length longer than the reference aggregated frame length (a transmission rate of 104 Mb/s and an aggregated frame count of "8" in FIG. 7) makes it unnecessary to perform trial communication again afterward, and hence makes it possible to directly obtain the initial values/maximum values of a transmission rate and an aggregated frame length with which a higher throughput can be obtained.

Note that the throughputs obtained with frame lengths other than the reference aggregated frame lengths at the respective transmission rates may be stored in advance in the form shown in FIG. 7. Alternatively, every time the apparatus performs trial communication at a given transmission rate and with a corresponding frame length, the corresponding throughput may be stored in a table like that shown in FIG. 7.

Fourth Embodiment

Assume the following cases: (a) it is determined in step S13 in FIG. 5 that no reference aggregated frame length is set for the transmission rate selected in step S12; (b) it is determined in step S15 in FIG. 5 that it is impossible to perform communication at a trial transmission rate and with a trial frame length; (c) it is determined in step S13 in FIG. 6 that no reference aggregated frame length is set for the transmission rate selected in step S12′; and (d) it is determined in step S15 in FIG. 6 that it is impossible to perform communication at a trial transmission rate and with a trial frame length. In each of these cases, the apparatus performs processing operation for selecting another transmission rate and another aggregated frame length.

The fourth embodiment will exemplify the processing operation for selecting another transmission rate and another aggregated frame length in the above four cases, i.e., when it is impossible to perform communication with the currently selected combination of transmission rate and aggregated frame length. This operation will be described with reference to the flowchart of FIG. 8.

If it is impossible to perform communication with the currently selected combination of transmission rate and aggregated frame length (e.g., a reference aggregated frame length), it is checked first whether there is any aggregated frame length with which communication can be performed at the same transmission rate (step S101).

Assume that when the apparatus performed trial communication with the currently selected combination of transmission rate and aggregated frame length (e.g., 10 frames) (step S14 in FIGS. 5 and 6), the apparatus could receive ACK responses to the first to seventh frames in the aggregated frame but could not receive any responses to the eighth and subsequent frames. In this case, if the aggregated frame length were seven frames, the channel variation determining unit 6 should determine that it is possible to perform communication. The selecting unit 102 therefore receives, from the channel variation determining unit 6, information indicating up to which frame in the aggregated frame the apparatus has received an ACK response, or information indicating for which frames in the aggregated frame the apparatus has received ACK responses.
In step S102, the selecting unit 102 then selects "7" frames, or a value smaller than "7", as a new aggregated frame length on the basis of the position of the last frame in the aggregated frame for which the apparatus has received an ACK response.

Assume that the channel variation determining unit 6 has output information about the error rate of the overall aggregated frame as a result of performing trial communication with the currently selected combination of transmission rate and aggregated frame length (e.g., 10 frames). In this case, the apparatus gradually decreases the currently selected aggregated frame length by a predetermined frame length (count) at a time. Alternatively, the apparatus decreases the currently selected aggregated frame length on the basis of a predetermined frame length to be reduced (a frame count to be reduced) in accordance with the error rate. If, for example, the error rate is 50%, the apparatus decreases the frame length by two frames at a time. If the error rate is 30%, the apparatus decreases the frame length by one frame at a time. In addition, if the reference data storage unit 101 stores in advance the throughputs with respect to the numbers of frames of aggregated frames for the respective transmission rates in the form of a table like that shown in FIG. 7, it suffices to refer to the table to select the frame length which can obtain the highest throughput next to that obtained with the currently selected frame length.

In this manner, if it is determined in step S101 that a new aggregated frame length can be selected with respect to the currently selected transmission rate, the selecting unit 102 selects the new frame length (step S102). The process then advances to step S103.

In step S103, the apparatus performs trial communication using the combination of the transmission rate and the new aggregated frame length. If it is determined as a result of the trial communication that it is possible to perform communication using the combination (step S104), the apparatus temporarily stores the corresponding throughput (step S105). For example, it suffices to record the throughput corresponding to the combination of transmission rate and aggregated frame length in a table like that shown in FIG. 7.

If it is determined in step S101 that the currently selected aggregated frame length cannot be decreased at the currently selected transmission rate, the process advances to step S106.

In step S106, the selecting unit 102 selects, for example, the one of the combinations of transmission rates lower than the currently selected transmission rate and aggregated frame lengths in the table shown in FIG. 7 which can obtain the maximum throughput (step S106). The process then advances to step S103. Subsequently, in the above manner, the apparatus performs trial communication with the selected combination and checks whether it can perform communication using the combination. Upon determining that it is possible to perform communication, the apparatus temporarily stores the corresponding throughput (step S105).

After the processing in steps S101 to S105 (or the processing from step S101 to step S105 through step S106), the process advances to step S107.
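The frame-length reduction of steps S101 and S102 can be sketched for the two feedback forms just described; the step sizes are the text's own examples (two frames at a 50% error rate, one frame at 30%), while the function names are assumptions.

```python
def shrink_by_ack_position(last_acked_index):
    """ACKs were received for frames 1..last_acked_index: retry with that
    many frames (or fewer), as in the 10-frame / 7-ACK example above."""
    return last_acked_index if last_acked_index > 0 else None

def shrink_by_error_rate(current_len, error_rate):
    """Reduce the frame count by an amount keyed to the overall error rate."""
    step = 2 if error_rate >= 0.5 else 1       # 50% -> two frames, 30% -> one
    new_len = current_len - step
    return new_len if new_len >= 1 else None   # None -> go to step S106

assert shrink_by_ack_position(7) == 7      # ACKs up to the 7th of 10 frames
assert shrink_by_error_rate(10, 0.5) == 8  # 50% error rate: drop two frames
```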
In step S107, the selecting unit 102 checks, by referring to, for example, the table in FIG. 7, whether there is any combination of a transmission rate lower than the currently selected transmission rate and the corresponding reference aggregated frame length which can obtain a throughput higher than the maximum throughput obtained in the trial communication performed so far. If there is such a combination, the process advances to step S108 to check whether it is possible to perform communication using the combination.

Assume that the currently selected transmission rate and aggregated frame length are 104 Mb/s and two frames, respectively, that it is determined in step S104 that it is possible to perform communication using this combination, with the corresponding throughput being 50 to 60 Mb/s, and that in step S105 the apparatus stores the combination of transmission rate and aggregated frame length and the throughput in the form shown in FIG. 7. In this case, as shown in FIG. 7, since a transmission rate of 78 Mb/s and a reference aggregated frame length of "6" make it possible to obtain a higher throughput, the selecting unit 102 selects the combination of a transmission rate of 78 Mb/s and a reference aggregated frame length of "6". In step S108, the apparatus performs trial communication using this combination.

If it is determined as a result of the trial communication that it is possible to perform communication using this combination (step S109), the selecting unit 102 sets the transmission rate and the corresponding reference aggregated frame length of the combination as initial values/maximum values (step S110).

If it is determined in step S109 as a result of the trial communication that it is impossible to perform communication using the combination, or if it is determined in step S107 that there is no combination of a transmission rate and corresponding reference aggregated frame length which can obtain a throughput higher than the maximum throughput obtained in the trial communication performed so far, the process advances to step S111.

In step S111, the selecting unit 102 sets the combination of transmission rate and aggregated frame length which has made it possible to obtain the maximum throughput in the trial communication performed so far as initial values/maximum values. That is, the selecting unit 102 sets the combination of transmission rate and aggregated frame length temporarily stored in step S105 as initial values/maximum values.

Fifth Embodiment

The fifth embodiment will exemplify another processing operation for selecting, when it is impossible to perform communication with the currently selected combination of transmission rate and aggregated frame length in FIGS. 5 and 6, another combination of transmission rate and aggregated frame length, by referring to the flowchart shown in FIG. 9.

Referring to FIG. 9, if it is impossible to perform communication using the currently selected combination of transmission rate and aggregated frame length (e.g., a reference aggregated frame length), the apparatus sequentially performs trial communication using the combinations of transmission rates and aggregated frame lengths stored in the table shown in FIG. 7 in descending order of throughput.

The reference data storage unit 101 stores a table indicating the throughputs obtained with the reference aggregated frame lengths and with frame lengths other than the reference aggregated frame lengths for the respective transmission rates.
Alternatively, every time the apparatus performs trial communication, the table in FIG. 7 may be updated with the used combination of transmission rate and aggregated frame length and the corresponding throughput.

First of all, if it is determined that it is impossible to perform communication using the currently selected combination of transmission rate and aggregated frame length in FIGS. 5 and 6, the selecting unit 102 reads, from the combinations of transmission rates and aggregated frame lengths stored in the table shown in FIG. 7, the combinations of transmission rate and frame length which can obtain the highest throughput next to the throughput obtained with the currently selected combination of transmission rate and aggregated frame length (step S201).

Assume that in the case shown in FIG. 7, it is impossible to perform communication at a transmission rate of 104 Mb/s and with a reference aggregated frame length of "5". In this case, two candidate combinations, i.e., the combination of a transmission rate of 78 Mb/s and an aggregated frame length of "6" and the combination of a transmission rate of 104 Mb/s and an aggregated frame length of "3", are obtained in step S201 as candidate combinations which cannot obtain the throughput of 70 to 80 Mb/s obtainable with the above combination, but can obtain a throughput of 60 to 70 Mb/s, lower than it by one rank.

The selecting unit 102 then sequentially selects one or a plurality of combinations each including a transmission rate and a frame length, in ascending or descending order of transmission rate (step S203).

If, for example, the above two candidate combinations are obtained and the apparatus sequentially performs trial communication in descending order of transmission rate, the selecting unit 102 first selects the combination of a transmission rate of 104 Mb/s and a frame length of "3" in step S203. If, instead, the apparatus sequentially performs trial communication in ascending order of transmission rate, the selecting unit 102 first selects the combination of a transmission rate of 78 Mb/s and a frame length of "6" in step S203.

In step S204, the apparatus performs trial communication using the combination selected in step S203. If it is determined as a result of this trial communication that it is possible to perform communication using the combination, the process advances to step S206 to set the transmission rate and frame length of the combination as initial values/maximum values.

If it is determined as a result of this trial communication using the combination selected in step S203 that it is impossible to perform communication using the combination, the process returns to step S202 to select another, unselected combination (step S203). Subsequently, the same operation as that described above is performed.

If it is determined in step S202, as a result of trial communication using each combination read in step S201, that the selecting unit 102 could obtain no combination which allows communication, the process returns to step S201. The selecting unit 102 then reads the combinations of transmission rate and frame length which can obtain the next highest throughput from the plurality of combinations stored in the table in FIG. 7. Subsequently, the same operation as that described above is performed.
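As a sketch, the FIG. 9 procedure amounts to walking a FIG. 7-style table in descending order of throughput rank, exhausting every combination of one rank (in ascending or descending order of rate, step S203) before dropping to the next; `trial_communication` and the table layout are the same assumptions as before.

```python
def select_by_throughput_rank(table, failed, trial_communication,
                              descending_rate=True):
    """table: rate -> {frame count: throughput}; failed: (rate, count) that
    just failed.  Returns the first communicable combination, or None."""
    failed_tp = table[failed[0]][failed[1]]
    ranks = sorted({tp for lens in table.values() for tp in lens.values()
                    if tp < failed_tp}, reverse=True)        # step S201
    for tp in ranks:
        combos = [(r, n) for r, lens in table.items()
                  for n, t in lens.items() if t == tp]
        combos.sort(reverse=descending_rate)                 # step S203
        for rate, n in combos:
            if trial_communication(rate, n):                 # step S204
                return rate, n                               # step S206
    return None          # step S202: no combination worked at any rank
```

For the FIG. 7 example, failing at (104 Mb/s, 5 frames) in the 70-80 Mb/s band would yield the 60-70 Mb/s rank with candidates (104, 3) and (78, 6), tried in the order set by `descending_rate`.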
Referring to FIG. 9, if it is impossible to perform communication with the initial combination of transmission rate and frame length, the selecting unit 102 first checks how many combinations of transmission rates and frame lengths exist which can obtain a throughput lower than the throughput obtained with the initial combination by one rank. If there is only one such combination, it suffices to perform trial communication using that combination. If there are a plurality of such combinations, the selecting unit 102 sequentially selects the combinations one by one in descending or ascending order of transmission rate, and the apparatus performs trial communication to check whether it can perform communication.

The fifth embodiment undergoes switching of transmission rates more frequently than the fourth embodiment. However, the fifth embodiment always performs trial communication using combinations of transmission rates and frame lengths sequentially in descending order of throughput.

As has been described above, the first to fifth embodiments can select a transmission rate and a number of frames in an aggregated frame which can obtain the maximum throughput in the current channel state.

Note that the techniques of the present invention which have been described in the embodiments of the present invention can be distributed as computer-executable programs by being stored in recording media such as magnetic disks (flexible disks, hard disks, and the like), optical disks (CD-ROMs, DVDs, and the like), and semiconductor memories.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

FIG. 1 is a block diagram showing an example of the arrangement of a wireless communication apparatus according to the first embodiment;

FIG. 2 is a graph showing an example of throughput characteristics with respect to transmission rates and aggregated frame lengths, and reference aggregated frame lengths with respect to the respective transmission rates;

FIG. 3 is a graph showing another example of throughput characteristics with respect to transmission rates and aggregated frame lengths, and reference aggregated frame lengths with respect to the respective transmission rates;

FIG. 4 is a view of an example of a table showing the relationship between transmission rates, aggregated frame lengths, and throughputs;

FIG. 5 is a flowchart for explaining processing operation for selecting a frame length and a transmission rate;

FIG. 6 is a flowchart for explaining another processing operation for selecting a frame length and a transmission rate;

FIG. 7 is a view of another example of a table showing the relationship between transmission rates, aggregated frame lengths, and throughputs;

FIG. 8 is a flowchart for explaining processing operation for selecting a frame length and a transmission rate, which follows the processing shown in FIGS. 5 and 6; and

FIG. 9 is a flowchart for explaining another processing operation for selecting a frame length and a transmission rate, which follows the processing shown in FIGS. 5 and 6.
Crypto-currencies and OFAC: UK residents.

A correspondent asks: "As a UK individual how do I report / alert the US authorities to a crypto-currency used by employees and the Chairman of a group of companies with offices in St Louis, Missouri?" Here's the answer; it explains the differences between OFAC, FinCEN and other reports.

To answer your question, the first thing is to establish whether you are subject to OFAC. The answer is that, as an individual with no physical or legal connection to the USA, you are probably not. If you are doing business in the USA and have any kind of "footprint" there, then you are subject to some, but not all, aspects of it, and even among those there are degrees. OFAC publishes the lists for a number of agencies. The safe option is to assume that, if you do business in the USA or have a footprint there and do business in US Dollars, you should consider it at least likely that you have some obligations. Complying is easy but may have commercial implications.

From your question, though, it seems that what you are concerned about is not whether the persons you mentioned in your question (but which we have, for many reasons, removed) are listed on an OFAC list, but rather that you think there is some questionable conduct in progress or in the past. The important differences between OFAC reports and money laundering or terrorist financing reports are as follows:

1. OFAC reports are reports of fact, whereas money laundering and terrorist financing reports (which are made, by US businesses that fall within certain classes, to FinCEN, a different Treasury department / regulator) are reports of suspicion. Individuals and businesses that are not classified as regulated under FinCEN make reports to the police where the predicate crime happened or, if the crime is inter-state or international, to the FBI. Experience tells us they tend to file reports from foreign entities under "forget about it: no political benefit in taking a case from a foreigner to chase an American."

2. A report to OFAC is not confidential; a report to FinCEN must not be publicised to anyone. So-called "tipping off" is a criminal offence.

3. If you think that a crime has been committed in the USA, then your reporting line would be to the local police or the FBI. If the offence is committed in the UK, then, if you work in a regulated business and the crime was committed using your employer as a vehicle, your reporting line is to your money laundering reporting officer (MLRO); there are various names used in the UK these days. If you are not employed in such a business, but the information of the offence comes to you in the course of your trade, profession, business or employment, then you have a duty to make a report. That report is made to your local police. There is a technical obligation to make a report even if you find out in your private life, but the duty is not well phrased and has not been tested in court, so far as I am aware. Note: if you knowingly take part in a money laundering scheme, then you are subject to criminal proceedings. You may make a defensive report at any time.

So, as you can see, there is a certain flow to deciding when and to whom to make a report. From your question, I suspect that OFAC is the wrong route unless the persons you mention are already on an OFAC list (you can check on-line - it's free).

DISCLAIMER: this comment is made with regard to the limited facts revealed in the question. It is not intended to be and must not be considered legal advice.
All persons are cautioned to take proper legal advice from a suitably qualified professional in their own jurisdiction.
https://pleasebeinformed.com/publications/BankingInsuranceSecurities_com/crypto_currencies_and_ofac_uk_residents
February 6, 1972 Birthday Facts

Here are some snazzy birthday facts about the 6th of February 1972 that no one tells you about.

What day was my birthday, Feb 6, 1972?

February 6, 1972 was a Sunday and it was the 37th day of the year 1972. The next time you can reuse your old 1972 calendar will be in 2028. Both calendars will be exactly the same! This is assuming you are not interested in the dates for Easter and other irregular holidays that are based on a lunisolar calendar.

What day was February 6 this year?

The day of the week of your birthday this year was Tuesday. Next year it will be Wednesday and two years from now it will be Thursday. You can check the calendars below if you're planning what to do on your birthday.

February 2018

| Sun | Mon | Tue | Wed | Thu | Fri | Sat |
|-----|-----|-----|-----|-----|-----|-----|
|     |     |     |     | 1   | 2   | 3   |
| 4   | 5   | 6   | 7   | 8   | 9   | 10  |
| 11  | 12  | 13  | 14  | 15  | 16  | 17  |
| 18  | 19  | 20  | 21  | 22  | 23  | 24  |
| 25  | 26  | 27  | 28  |     |     |     |

February 2019

| Sun | Mon | Tue | Wed | Thu | Fri | Sat |
|-----|-----|-----|-----|-----|-----|-----|
|     |     |     |     |     | 1   | 2   |
| 3   | 4   | 5   | 6   | 7   | 8   | 9   |
| 10  | 11  | 12  | 13  | 14  | 15  | 16  |
| 17  | 18  | 19  | 20  | 21  | 22  | 23  |
| 24  | 25  | 26  | 27  | 28  |     |     |

February 2020

| Sun | Mon | Tue | Wed | Thu | Fri | Sat |
|-----|-----|-----|-----|-----|-----|-----|
|     |     |     |     |     |     | 1   |
| 2   | 3   | 4   | 5   | 6   | 7   | 8   |
| 9   | 10  | 11  | 12  | 13  | 14  | 15  |
| 16  | 17  | 18  | 19  | 20  | 21  | 22  |
| 23  | 24  | 25  | 26  | 27  | 28  | 29  |

How many days until my next birthday?

There are 291 days left before your next birthday. You will be 47 years old when that day comes. There have been 16,876 days from the day you were born up to today. Since night and day always follow each other, there were exactly 571 full moons after you were born up to this day. How many of them did you see? The next full moon that you can see will be on April 30 at 01:00:00 GMT - Monday.

How old are you in dog years?

If a dog named Sydney Marie - a Miniature Pinscher breed - was born on the same date as you, then it will be 204 dog years old today. A dog's first human year is equal to 15 dog years. Dogs age differently depending on breed and size. When you reach the age of 6, Sydney Marie will be 40 dog years old. From that point forward, a small-sized dog like Sydney Marie will age 4 dog years for every human year.

Which celebrity shares my birthday?

You might be happy to know that the following celebrities share your birthday. The list was randomly chosen and arranged in chronological order.

- 1943 Georgeanna Tillman
- 1951 Jacques Villeret
- 1951 Kevin Whately
- 1952 Viktor Giacobbo
- 1957 Robert Townsend
- 1965 Jan Svěrák
- 1967 Anita Cochran
- 1978 Yael Naim
- 1987 Luisa Värk
- 1993 Teresa Scanlan

Who else was born on February 6?

Here's a short list of famous people in history who were born on Feb 6.

- 1465 Scipione del Ferro, Italian mathematician and theorist (d. 1526)
- 1796 John Stevens Henslow, English botanist and geologist (d. 1861)
- 1802 Charles Wheatstone, English-French physicist and cryptographer (d. 1875)
- 1818 Henry Litolff, English pianist and composer (d. 1891)
- 1887 Josef Frings, German cardinal (d. 1978)
- 1911 Ronald Reagan, American actor and politician, 40th President of the United States (d. 2004)
- 1947 Richard Bowring, English orientalist and academic
- 1963 David Capel, English cricketer and coach
- 1971 Brian Stepanek, American actor
- 1980 Mamiko Noto, Japanese voice actress and singer

What happened on my birthday - Feb 6?

These were the events that made history that coincide with your birthday.

- 1819 Sir Thomas Stamford Raffles founds Singapore.
- 1833 Otto becomes the first modern King of Greece.
- 1862 American Civil War: Forces under the command of Ulysses S. Grant and Andrew H. Foote give the Union its first victory of the war, capturing Fort Henry, Tennessee in the Battle of Fort Henry.
- 1958 Eight Manchester United F.C. players and 15 other passengers are killed in the Munich air disaster.
- 1959 Jack Kilby of Texas Instruments files the first patent for an integrated circuit.
- 1976 In testimony before a United States Senate subcommittee, Lockheed Corporation president Carl Kotchian admits that the company had paid out approximately $3 million in bribes to the office of Japanese Prime Minister Kakuei Tanaka.
- 1978 The Blizzard of 1978, one of the worst Nor'easters in New England history, hits the region, with sustained winds of 65 mph and snowfall of four inches an hour.
- 1988 Michael Jordan makes his signature slam dunk from the free throw line, inspiring Air Jordan and the Jumpman logo.
- 2012 A 6.9 magnitude earthquake hits near the central Philippines off the coast of Negros Island, causing at least 51 deaths and injuring 112 others.
- 2013 An 8.0 magnitude earthquake hits the Solomon Islands, killing 10 people and injuring 17 others.

What does my birthday February 6, 1972 mean?

Your birthday numbers 2, 6 and 1972 reveal that your Life Path number is 9. It represents selflessness, forgiveness and creativity. You are the philanthropist, humanitarian, socially conscious, and are deeply concerned about the state of the world. The following celebrities also have the same life path number: Ben Lee, Dawn Robinson, Wanda Nara, LisaRaye McCoy-Misick, Tsunku, Yū Kobayashi, Deirdre Lovejoy, Allison Janney, Jack Lemmon, Rebekah Elmaloglou.

What is the birthday horoscope for Feb 6, 1972?

The zodiac sign of a person born on February 6 is Aquarius ♒. According to the ancient art of Chinese astrology (or Chinese zodiac), Pig is the mythical animal and Metal is the element of a person born on February 6, 1972.

What is the birthstone for February 6?

Amethyst is the modern birthstone for the month of February, while Bloodstone is the mystical birthstone (based on Tibetan origin). The zodiac gemstone for Aquarius is garnet. Lastly, the birthday stone for the day of the week 'Sunday' is topaz.

What was the number one song on my birthday?
The number-one hit song in the U.S. on the day of your birth was "Let's Stay Together" by Al Green, as compiled by the Billboard Hot 100 (February 12, 1972). Ask your parents if they know this popular song.

What were you in your past life?

I do not know how you feel about this, but you were a male ♂ in your last earthly incarnation. You were born somewhere around the territory of Germany, approximately in 1875. Your profession was teacher, mathematician, geologist.

Your brief psychological profile in that past life: inquisitive, inventive, liked to get to the very bottom of things and to rummage in books; a talent for drama, a natural born actor.

Lessons that your last past life brought to the present: the world is full of ill and lonely people; you should help those who are less fortunate than you are.

Can you remember who you were? Try another birth date of someone you know or try the birthday of these celebrities: October 2, 1981 - Sidney Samson, Dutch DJ and producer; September 24, 1936 - Jim Henson, American puppeteer, director, producer and screenwriter, created The Muppets (d. 1990); September 28, 1923 - Tuli Kupferberg, American singer, poet, and author (The Fugs) (d. 2010).
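For the curious, the "Life Path number" mentioned above is just a repeated digit sum of the birth date. A minimal sketch (this simple variant ignores the "master numbers" 11 and 22 that some numerology systems leave unreduced):

```python
def life_path(month: int, day: int, year: int) -> int:
    """Sum all digits of the birth date, then keep summing until one digit remains."""
    total = sum(int(d) for d in f"{month}{day}{year}")
    while total > 9:  # reduce, e.g., 27 -> 2 + 7 -> 9
        total = sum(int(d) for d in str(total))
    return total

print(life_path(2, 6, 1972))  # -> 9, matching the number above
```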
https://mybirthday.ninja/?m=February&d=6&y=1972&go=Go
Making sure a new home is inspected is essential. A recent court ruling found a home inspector liable for the cost of removing mould from a house. The inspector did not find any mould during his inspection, but the owner had a mould allergy and after she took possession there were problems. She sued, and a judge ruled that the inspector should have suspected mould based on his review of the premises. The ruling highlights the importance of checking for and identifying mould, especially in older homes.

Here is what happened: Jane and Fred Smith (not their real names) bought an 80-year-old home in Toronto in 2006. They told their real estate agent that Jane was allergic to mould. The agent referred her to a home inspector, who found no evidence of a leaky roof or basement, and so nobody suspected any problem. The inspector said in his report that the exterior brickwork concrete at the base of the home near the driveway needed repair, as did sections of the driveway itself, but you could probably find this in most old homes.

The report had a standard limited liability clause, which meant that if the inspector made a mistake, the most the buyer could expect to recover would be the cost of the report. This clause is common in most home inspection reports, mostly because the inspector can't look behind walls or under floors.

The Smiths bought the house, and within three months after closing, moisture, mould and mildew presented problems for the allergic Mrs. Smith. She sued the seller, the home inspector and the real estate agent for the cost to fix the problem. In court, it was revealed that the seller had lived in the house for six years and the house was leak-free. There was no evidence that he tried to cover anything up by building a wall or repainting the basement walls.

In a decision in January 2011, the judge decided that the inspector should have known that the damaged concrete and driveway at the front of the home could result in leaks to the foundation, which could eventually cause mould, which would be especially problematic for someone who was allergic to it. The judge decided that the home inspector should pay 50 per cent of the buyer's loss. Even though there was a limitation of liability clause, the judge accepted the evidence of Mrs. Smith that it was not explained to her, so it had no effect.

The judge also decided that the buyer's real estate agent was 25 per cent responsible for the loss, saying that the agent should have also read the inspection report and come to the same conclusion about the possibility of mould occurring. The buyers were found to be 25 per cent responsible for not reading the report themselves. The sellers were not responsible because they did not know about any leaking.

Everyone appealed. In a decision released last month, Ontario appeal court judges decided that the home inspector should pay all of the loss. It was too much, they held, to ask a real estate agent or a buyer to make the connection that defects in the concrete and driveway at the front of the house could somehow later lead to mould.

I think the buyers were fortunate that the court found that the inspector did not properly explain the limitation of liability clause. Real estate agents are not general contractors and should not be expected to provide this type of advice to buyers. Still, agents should be suspicious if there is any visible slope in the floor, cracks in the walls or water stains.
In addition, any time the seller has done recent renovations or paint jobs, it could be that the sellers are trying to hide an old problem. In all cases, buyers should be warned to conduct detailed home inspections to satisfy these concerns.

Mould is becoming a serious issue for buyers. It can cause illness if one is exposed to it over an extended period of time, and it costs a lot to remove. The problem was that testing for mould once cost over $1,000. Now companies such as Tristar Disaster Recovery, with offices in Hamilton, Toronto and Waterloo, can conduct tests for mould for as low as $250, and can assist homeowners with removing mould as well. Since most homes for sale in the GTA are over 50 years old, a mould test should be mandatory for every buyer.

Note: There are several companies offering mould inspection and removal in Ottawa.

Robert.
https://roberthof.ca/home-inspector-liable-for-cost-of-removing-mould/
BACKGROUND SUMMARY DETAILED DESCRIPTION Wireless communication devices are incredibly widespread in today's society. For example, people use cellular phones, smart phones, personal digital assistants, laptop computers, pagers, tablet computers, etc. to send and receive data wirelessly from countless locations. Moreover, advancements in wireless communication technology have greatly increased the versatility of today's wireless communication devices, enabling users to perform a wide range of tasks from a single, portable device that conventionally required either multiple devices or larger, non-portable equipment. Various mobile device applications, such as navigation aids, business directories, local news and weather services, or the like, leverage knowledge of the position of the device. In various cases, the position of a mobile device is identified via motion tracking with respect to the device. For example, in the case of sensor-aided pedestrian navigation applications, motion direction is determined using the orientation of the device sensors in relation to the direction of forward motion. The angle between the orientation of the mobile device and the forward motion direction is referred to as the alignment angle or misalignment angle (MA). When calibration data (such as satellite navigation data) is available, the MA corresponding to a device can be determined using the calibration data. However, when connectivity to a satellite navigation system and/or other sources of calibration data is lost and the sensor orientation of the device changes (e.g., corresponding to movement of the device from a user's hand to the user's pocket, etc.), other techniques are required for computing or estimating the MA. The present disclosure is directed to systems and methods for measuring sensor orientation with respect to pedestrian motion direction. An example of a mobile device according to the disclosure includes an accelerometer configured to generate acceleration information relating to motion of the device and to identify information relating to an orientation of the device, a step detector configured to identify pedestrian steps of a user of the device and corresponding pedestrian step duration information, a motion direction tracking module communicatively coupled to the accelerometer and the step detector and configured to separate a forward motion direction of the device with respect to the user of the device indicated by the acceleration information from vertical and transverse motion directions of the device based on the pedestrian steps identified by the step detector, and a misalignment angle computation module communicatively coupled to the accelerometer and the motion direction tracking module and configured to determine a misalignment angle between the forward motion direction of the device and the orientation of the device with respect to the user of the device. Implementations of such a mobile device may include one or more of the following features. A step shifter module communicatively coupled to the motion direction tracking module and configured to shift the acceleration information in time by about one pedestrian step in accordance with the pedestrian step duration information, thereby obtaining shifted acceleration information; and a step summation module communicatively coupled to the step shifter module and the motion direction tracking module and configured to combine the acceleration information with the shifted acceleration information. 
The acceleration information includes horizontal acceleration information and vertical acceleration information, and the device further includes a step shifter module communicatively coupled to the motion direction tracking module and configured to shift the acceleration information forward and backward in time by about a quarter pedestrian step in accordance with the pedestrian step duration, thereby obtaining forward-shifted acceleration information and backward-shifted acceleration information, respectively; and a step correlation module communicatively coupled to the step shifter module and the motion direction tracking module and configured to compute a first correlation of vertical acceleration information with forward-shifted horizontal acceleration and to compute a second correlation of vertical acceleration information with backward-shifted horizontal acceleration information. Implementations of such a device may additionally or alternatively include one or more of the following features. The motion direction tracking module is further configured to subtract the first correlation from the second correlation, thereby obtaining a resulting correlation. The misalignment angle computation module includes an eigen analysis module configured to determine the misalignment angle by performing eigen analysis of the resulting correlation with respect to the forward motion direction of the device. The acceleration information includes horizontal acceleration information and vertical acceleration information, and the device further includes an angle direction inference module communicatively coupled to the misalignment angle computation module and configured to resolve forward/backward ambiguity associated with the misalignment angle by analyzing horizontal acceleration information corresponding to the forward motion direction of the device in relation to the vertical acceleration information based on positivity or negativity of the resulting correlation. The step detector includes a pedometer. The step detector is communicatively coupled to the accelerometer and configured to identify the pedestrian steps of the user of the device based on the acceleration information generated by the accelerometer. The accelerometer is configured to identify a direction of gravity relative to the device and the misalignment angle computation module is further configured to determine the orientation of the device based on the direction of gravity relative to the device. An example of a method of identifying a misalignment angle associated with motion of a mobile device according to the disclosure includes obtaining acceleration information associated with the mobile device, partitioning the acceleration information according to respective detected pedestrian steps of the user, identifying a forward motion direction of the user of the mobile device based on the acceleration information and the detected pedestrian steps, and computing a misalignment angle between the forward motion direction of the user of the mobile device and an orientation of the mobile device. Implementations of such a method may include one or more of the following features. The obtaining includes obtaining first acceleration information corresponding to a first pedestrian step of the user and obtaining second acceleration information corresponding to a second pedestrian step of the user that follows the first pedestrian step, and the identifying includes summing the first acceleration information with the second acceleration information. 
The obtaining includes obtaining vertical acceleration information and horizontal acceleration information associated with the user of the mobile device and the identifying includes correlating the vertical acceleration information of a selected pedestrian step with the horizontal acceleration information of the selected pedestrian step shifted forward and backward in time by about one quarter of a pedestrian step based on a vertical/forward correlation function. The computing includes computing the misalignment angle between the forward motion direction of the mobile device and the orientation of the mobile device by performing eigen analysis of results of the vertical/forward correlation function. The computing further includes resolving forward/backward ambiguity of the misalignment angle based on positivity or negativity of the results of the vertical/forward correlation function. Identifying the respective detected steps of the user based on the acceleration information. Another example of a mobile device according to the disclosure includes an accelerometer configured to generate acceleration information relating to motion of the device and to identify information relating to an orientation of the device; a step detector configured to identify pedestrian steps of a user of the device and corresponding pedestrian step duration information; direction means, communicatively coupled to the accelerometer and the step detector, for inferring a forward motion direction of the device from the acceleration information based on the pedestrian steps identified by the step detector; and misalignment means, communicatively coupled to the accelerometer and the direction means, for determining a misalignment angle between the forward motion direction of the user of the device and the orientation of the device with respect to the user of the device. Implementations of such a mobile device may include one or more of the following features. Shift means, communicatively coupled to the direction means, for shifting the acceleration information in time by an interval having a length of approximately one pedestrian step in accordance with the pedestrian step duration information, and summation means, communicatively coupled to the shift means and the direction means, for combining the acceleration information with a result of the shift means. The acceleration information includes horizontal acceleration information and vertical acceleration information and the device further includes shift means, communicatively coupled to the direction means, for shifting the horizontal acceleration information forward and backward in time by an interval having a length of approximately one quarter pedestrian step in accordance with the pedestrian step duration information, first correlation means, communicatively coupled to the shift means and the direction means, for computing a first correlation between the vertical acceleration and forward-shifted horizontal acceleration information obtained from the shift means, and second correlation means, communicatively coupled to the shift means and the direction means, for computing a second correlation between the vertical acceleration and backward-shifted horizontal acceleration information obtained from the shift means. Implementations of such a mobile device may additionally or alternatively include one or more of the following features. 
The direction means includes a combiner means, communicatively coupled to the first correlation means and the second correlation means, for subtracting the second correlation from the first correlation. The misalignment means is configured to compute the misalignment angle between the forward motion direction of the user of the device and the orientation of the device by performing eigen analysis of a result of the combiner means. The misalignment means is configured to resolve forward/backward ambiguity associated with the misalignment angle according to positivity or negativity of the result of the combiner means. The step detector is communicatively coupled to the accelerometer and configured to identify the pedestrian steps of the user of the device based on the acceleration information generated by the accelerometer. The accelerometer is configured to identify a direction of gravity relative to the device and the misalignment means is further configured to determine the orientation of the device based on the direction of gravity relative to the device. An example of a computer program product according to the disclosure resides on a non-transitory processor-readable medium and includes processor-readable instructions configured to cause a processor to obtain acceleration information associated with a mobile device, divide the acceleration information according to respective detected pedestrian steps of a user of the mobile device, identify a forward motion direction of the user of the mobile device based on the acceleration information and the detected pedestrian steps, and compute a misalignment angle between the forward motion direction of the mobile device and an orientation of the mobile device with respect to the user of the mobile device. Implementations of such a computer program product may include one or more of the following features. The instructions configured to cause a processor to identify the forward motion direction are further configured to cause the processor to obtain first acceleration information corresponding to a first pedestrian step of the user, obtain second acceleration information corresponding to a second pedestrian step of the user that follows the first pedestrian step, and sum the first acceleration information and the second acceleration information. The acceleration information includes vertical acceleration information and horizontal acceleration information and the instructions configured to cause a processor to identify the forward motion direction are further configured to cause the processor to compute a first correlation result between vertical acceleration information of a selected pedestrian step and horizontal acceleration information of the selected pedestrian step shifted forward in time by about one quarter of a pedestrian step, compute a second correlation result between the vertical acceleration information of the selected pedestrian step and horizontal acceleration information of the selected pedestrian step shifted backward in time by about one quarter of a pedestrian step, and subtract the second correlation result from the first correlation result to obtain a combined correlation result. The instructions configured to cause a processor to compute the misalignment angle are further configured to cause the processor to compute the misalignment angle using eigen analysis of the combined correlation result. 
The instructions configured to cause a processor to compute the misalignment angle are further configured to cause the processor to resolve forward/backward ambiguity of the misalignment angle based on positivity or negativity of the combined correlation result.

Items and/or techniques described herein may provide one or more of the following capabilities, as well as other capabilities not mentioned. Cost and power requirements associated with sensors for tracking motion of a mobile device can be reduced. The accuracy of pedestrian motion direction computation can be increased by leveraging the biomechanics of pedestrian motion. Monitoring of device motion direction can be performed with increased robustness to changes in sensor orientation and/or loss of calibration data. While at least one item/technique-effect pair has been described, it may be possible for a noted effect to be achieved by means other than that noted, and a noted item/technique may not necessarily yield the noted effect.

Techniques are described herein for measuring the sensor orientation of a mobile device in relation to the motion direction of a pedestrian user of the mobile device. For example, a mobile device, such as a mobile telephone handset, a laptop or tablet computer, a PDA, etc., can collect data from a sensor ensemble composed of one or more orientation sensors. A step tracker, such as a pedometer or step counter, collects further data relating to pedestrian steps (e.g., walking, jogging, or running steps, etc.) of a user of the mobile device, based on which the data collected by the orientation sensors are partitioned according to their corresponding pedestrian steps. The sensor data corresponding to respective steps are processed to identify a direction of forward motion (e.g., in relation to earth, as determined by obtaining the direction of gravity from data collected by the orientation sensors). Cancellation of the transverse motion component of the motion data is performed to improve identification of the direction of forward motion. Based on the computed direction of forward motion, a MA between the direction of forward motion and the orientation of the mobile device is determined. These techniques are examples only and are not limiting of the disclosure or the claims.

When data from a satellite navigation system, such as GPS data, are available, the MA can be calibrated as a filtered delta between a course over ground reading given by the satellite navigation system and the compass heading. However, in the event that connection to the satellite navigation system is lost, the MA may require autonomous measurement. The MA can be measured independently of satellite navigation data based on sensor data relating to the orientation of a mobile device 12, as shown by FIGS. 1-3. However, the complexity of these computations is significantly increased in the case of a mobile pedestrian user 2 of the mobile device 12. For instance, a user 2 of a mobile device 12 may position the mobile device 12 in a variety of orientations, corresponding to positioning of the mobile device 12 in a handbag or backpack, as illustrated by FIG. 1; on a belt or other similar item of clothing, as illustrated by FIG. 2; in the user's hand, as illustrated by FIG. 3; in a coat or pants pocket; or the like. Each of these orientations can affect the MA associated with the mobile device 12. Further, the orientation of the mobile device 12 may change during movement due to various factors.
For example, the user 2 can move the mobile device 12 between different positions (e.g., from the user's pocket to the user's hand, etc.), the mobile device 12 can shift between varying orientations (e.g., such as in a case where the mobile device 12 is placed in the backpack of a user 2, as shown by FIG. 1), or normal body movements associated with walking or running can cause changes to the orientation of the mobile device 12. Therefore, techniques are described herein by which the MA is made adaptable to the current orientation of the mobile device 12.

Referring to FIG. 4, a wireless communication system 10 includes mobile access terminals (ATs) 12, base transceiver stations (BTSs) 14 disposed in cells 16, and a base station controller (BSC) 18. The system 10 may support operation on multiple carriers (waveform signals of different frequencies). Multi-carrier transmitters can transmit modulated signals simultaneously on the multiple carriers. Each modulated signal may be a Code Division Multiple Access (CDMA) signal, a Time Division Multiple Access (TDMA) signal, an Orthogonal Frequency Division Multiple Access (OFDMA) signal, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) signal, etc. Each modulated signal may be sent on a different carrier and may carry pilot, overhead information, data, etc.

The BTSs 14 can wirelessly communicate with the ATs 12 via antennas. Each of the BTSs 14 may also be referred to as a base station, an access point, an access node (AN), a Node B, an evolved Node B (eNB), etc. The BTSs 14 are configured to communicate with the ATs 12 under the control of the BSC 18 via multiple carriers. Each of the BTSs 14 can provide communication coverage for a respective geographic area, here the respective cells 16. Each of the cells 16 of the BTSs 14 is partitioned into multiple sectors as a function of the base station antennas.

The system 10 may include only macro base stations 14 or it can have base stations 14 of different types, e.g., macro, pico, and/or femto base stations, etc. A macro base station may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by terminals with service subscription. A pico base station may cover a relatively small geographic area (e.g., a pico cell) and may allow unrestricted access by terminals with service subscription. A femto or home base station may cover a relatively small geographic area (e.g., a femto cell) and may allow restricted access by terminals having association with the femto cell (e.g., terminals for users in a home).

The ATs 12 can be dispersed throughout the cells 16. The ATs 12 may be referred to as terminals, mobile stations, mobile devices, user equipment (UE), subscriber units, etc. The ATs 12 shown in FIG. 4 include cellular phones and a wireless router, but can also include personal digital assistants (PDAs), other handheld devices, netbooks, notebook computers, etc.

Referring also to FIG. 5, an example one of the ATs 12 comprises a computer system including a processor 20, memory 22 including software 24, input/output (I/O) device(s) 26 (e.g., a display, speaker, keypad, touch screen or touchpad, etc.), accelerometer(s) 28, antenna(s) 30, and a satellite positioning system (SPS) receiver 32. The antennas 30 include a transceiver configured to communicate bi-directionally with the BTSs 14 via the antennas 30.
Here, the processor 20 is an intelligent hardware device, e.g., a central processing unit (CPU) such as those made by Intel® Corporation or AMD®, a microcontroller, an application specific integrated circuit (ASIC), etc. The memory 22 includes non-transitory storage media such as random access memory (RAM) and read-only memory (ROM). The memory 22 stores the software 24, which is computer-readable, computer-executable software code containing instructions that are configured to, when executed, cause the processor 20 to perform various functions described herein. Alternatively, the software 24 may not be directly executable by the processor 20 but is configured to cause the computer, e.g., when compiled and executed, to perform the functions.

The accelerometer(s) 28 are configured to collect data relating to motion and/or orientation of the mobile device 12 as well as changes in the motion and/or orientation of the mobile device 12 over time. The accelerometer(s) 28 can provide information over time, e.g., periodically, such that present and past orientations and/or motion directions can be compared to determine changes in the motion direction and/or orientation of the mobile device 12. Further, the accelerometer(s) 28 are configured to provide information as to gravitational acceleration such that the direction of gravity relative to the mobile device 12 can be determined.

Within the mobile device 12, the accelerometer(s) 28 comprise a sensor ensemble that collects information relating to the orientation of the mobile device 12. In addition to an accelerometer 28, the sensor ensemble may also include a gyroscope that measures rotational acceleration of the mobile device 12 with respect to one or more of roll, pitch or yaw; a magnetometer or compass configured to provide an indication of the direction of magnetic north relative to the mobile device 12; and/or other sensor mechanisms. The sensor ensemble is associated with a set of three axes, which respectively correspond to the three spatial dimensions of the mobile device 12. These axes, in turn, define a coordinate plane for the sensor ensemble and its associated mobile device 12. By way of example, a coordinate plane for the mobile device 12 may be defined by three orthogonal axes that respectively run along the length, width and depth of the mobile device 12.

The SPS receiver 32 includes appropriate equipment for monitoring navigation signals from satellites and determining position of the mobile device 12. The SPS receiver 32 can monitor navigation signals from satellites corresponding to any suitable satellite navigation system, such as GPS, GLONASS, the Beidou navigation system, the Galileo positioning system, etc. Here, the SPS receiver 32 includes one or more SPS antennas, and can either communicate with the processor 20 to determine location information or can use its own processor for processing the received satellite navigation signals to determine the location of the mobile device 12. Further, the SPS receiver 32 can communicate with other entities such as a position determination entity and/or the BTS 14 in order to send and/or receive assistance information for use in determining the location of the mobile device 12.

Information obtained by an accelerometer 28 associated with the mobile device 12 is provided to a step detector 40, a motion direction tracking module 42, and/or a MA computation module 44 for further processing, as further shown by FIG. 6.
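As a rough illustration of how a gravity direction and a vertical/horizontal decomposition might be derived from raw accelerometer samples before the step-level processing described next, here is a minimal NumPy sketch. The low-pass gravity estimate and all names are assumptions for illustration, not taken from the specification:

```python
import numpy as np

def split_vertical_horizontal(acc, alpha=0.02):
    """acc: (N, 3) raw accelerometer samples in device axes, in m/s^2.
    Returns the signed vertical component and the horizontal residual."""
    acc = np.asarray(acc, dtype=float)
    gravity = np.empty_like(acc)
    g = acc[0].copy()                       # assume the first sample is gravity-dominated
    for i, a in enumerate(acc):
        g = (1.0 - alpha) * g + alpha * a   # slow exponential average ~ gravity
        gravity[i] = g
    g_hat = gravity / np.linalg.norm(gravity, axis=1, keepdims=True)
    linear = acc - gravity                  # motion-only acceleration
    vertical = np.sum(linear * g_hat, axis=1)         # component along gravity
    horizontal = linear - vertical[:, None] * g_hat   # remainder in the horizontal plane
    return vertical, horizontal
```

A gyroscope-aided implementation, as the text notes later, could rotate the reference frame directly instead of relying on a filtered gravity estimate.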
The step detector 40 analyzes the motion of the mobile device 12 to identify movement patterns or signatures corresponding to pedestrian steps (e.g., running, walking, jogging, etc.). Upon identifying device movement that matches that of a pedestrian step, the step detector 40 can further collect or otherwise determine information corresponding to the step, such as the step length, the duration of the step in time, a count of consecutive identified steps, or the like.

Here, the step detector 40 analyzes information from the accelerometer(s) 28 corresponding to movement of the mobile device 12 in order to detect respective pedestrian steps. Alternatively, the step detector 40 can obtain motion information corresponding to the mobile device 12 using other motion or orientation sensors not shown in FIG. 6. As another alternative, the step detector 40 can track movement of the mobile device 12 independently of other sensors associated with the mobile device 12. Further, the step detector 40 can be implemented as one or more software modules (e.g., by the processor 20 in conjunction with the software 24 stored in the memory 22), one or more hardware components (e.g., a pedometer, step counter, etc.), or a combination of hardware and software. The step detector 40 can be physically coupled to the mobile device 12, worn by a user 2 of the mobile device 12, and/or placed in any other location suitable for monitoring the motion of the mobile device 12. In the event that the step detector 40 is not physically coupled to the mobile device 12, the step detector 40 can be communicatively connected to the mobile device 12 via any known wired and/or wireless communication technology.

The motion direction tracking module 42 and the MA computation module 44 are implemented by the processor 20 in conjunction with the software 24 stored in the memory 22. These modules, as implemented by the processor 20 (e.g., by executing software algorithms), are configured to process the information from the accelerometer(s) 28 in order to aid one or more applications associated with the mobile device 12 in determining the direction of motion of the mobile device 12 (e.g., expressed in relation to north).

The motion direction tracking module 42 can express the direction of motion of the mobile device 12 as an angle relative to north, e.g., with respect to a horizontal plane in an earth-based coordinate system such as the north-east-down (n-e-d) coordinate system. As used herein, the term "north" refers to any known definition, including true north, magnetic north, etc. In some cases, the motion direction tracking module 42 can be configured to translate a motion direction determined in relation to true north into a motion direction given in relation to magnetic north, or vice versa, using one or more compensation algorithms (e.g., based on magnetic declination or other parameters).

For a sensor-aided pedestrian navigation application running on the mobile device 12, the MA computation module 44 is utilized to determine the angular offset (the MA) between the orientation of the mobile device 12 and the direction of forward motion of the mobile device 12, as given by the motion direction tracking module 42. For example, as shown by FIG. 7, the MA is defined by the angular difference between the direction of motion M of a mobile device 12 and the direction of orientation O of the mobile device.
By calculating and utilizing the MA, the direction of motion M of the mobile device 12 can be obtained in cases in which conventional motion direction techniques fail. More particularly, as the MA can have any value (e.g., from 0 to 360 degrees) depending on the direction of orientation O of the mobile device 12, without the MA even approximate conversion of device heading to motion direction is not possible.

The MA is utilized to facilitate positioning of the mobile device 12. For example, a mobile device 12 can be equipped with a compass or other mechanisms to provide information indicating the heading of the mobile device 12, which is defined as the direction at which the mobile device 12 is oriented (e.g., in relation to magnetic north) within a given precision or tolerance amount. However, unless the mobile device 12 is immovably positioned such that it is always oriented in the direction of motion, the compass heading of the mobile device 12 alone does not represent the direction in which the mobile device 12 is moved. Thus, the MA can be utilized to convert the direction of orientation of the mobile device 12 to the direction of motion in the event that the mobile device 12 is not oriented in the direction of motion. As an example, the direction of motion in a compass-aided dead reckoning application can be computed as the compass heading plus the MA.

The motion direction tracking module 42 and the MA computation module 44 can operate based on sensor data, information obtained from a step detector 40, etc., to determine the MA associated with movement of a mobile device 12 being carried by a pedestrian user 2, as shown by FIG. 8. Initially, based on data collected from accelerometer(s) 28 and/or the step detector 40, pedestrian steps are identified and the direction of gravity relative to the sensor axes of the mobile device 12 is determined. These initial computations form a basis for the operation of the motion direction tracking module 42 and the MA computation module 44, as described below.

With regard to pedestrian motion, such as walking, running, etc., the direction of motion changes within a given pedestrian step and between consecutive steps based on the biomechanics of pedestrian motion. For example, rather than proceeding in a constant forward direction, a moving pedestrian shifts left to right (e.g., left during a step with the left foot and right during a step with the right foot) with successive steps and vertically (e.g., up and down) within each step. Accordingly, transverse (lateral) acceleration associated with a series of pedestrian steps cycles between left and right with a two-step period while forward and vertical acceleration cycle with a one-step period.

The motion direction tracking module 42 can leverage the above properties of pedestrian motion to isolate the forward component of motion from the vertical and transverse components. For example, the motion direction tracking module 42 records acceleration information obtained from accelerometer(s) 28 (e.g., in a buffer) over consecutive steps. To rectify forward acceleration and suppress or cancel the transverse component of the acceleration, the motion direction tracking module 42 utilizes a step shifter 50 and a step summation module 52 to sum odd and even steps. In other words, the step shifter 50 shifts acceleration data corresponding to a series of pedestrian steps in time by one step.
Subsequently, the step summation module 52 sums the original acceleration information with the shifted acceleration information. As noted above, transverse acceleration changes sign on consecutive steps, with a two-step period due to body rotation and rolling, while forward and vertical acceleration exhibit a one-step period. As a result, summing pedestrian steps after a one-step shift reduces transverse acceleration while having minimal impact on vertical or forward acceleration.

If the mobile device 12 is not centrally positioned on a pedestrian user's body or shifts orientation during the pedestrian motion, transverse acceleration will not be symmetrical from step to step. Accordingly, while the step shifter 50 and step summation module 52 operate to reduce the transverse component of acceleration, these modules may not substantially eliminate the transverse acceleration. To enhance the removal of transverse acceleration, a step correlation module 54 can further operate on the acceleration data obtained from the accelerometer(s) 28.

As a pedestrian steps forward (e.g., when walking), the center of gravity of the pedestrian moves up at the beginning of the step and down at the end of the step. Similarly, the forward speed of the pedestrian decreases when the foot of the pedestrian reaches the ground at the end of a step and increases during the step. This relationship between forward and vertical motion during the progression of a pedestrian step is leveraged by the step correlation module 54 in further canceling transverse acceleration. In particular, if the acceleration associated with a pedestrian step is viewed as a periodic function, it can be observed that the vertical acceleration and forward acceleration associated with the step are offset by approximately a quarter of a step (e.g., 90 degrees). Accordingly, the step correlation module 54 correlates vertical acceleration with horizontal acceleration shifted (by the step shifter 50) by one quarter step both forwards and backwards (e.g., +/- 90 degrees). After shifting and correlation as described above, the vertical/forward correlation is comparatively strong due to the biomechanics of pedestrian motion, while the vertical/transverse correlation is approximately zero. Thus, the correlations between vertical and horizontal acceleration shifted forward and backward by one quarter step are computed, and the forward-shifted result is subtracted from the backward-shifted result (since the results of the two correlations are opposite in sign) to further reduce the transverse component of acceleration.

Once the motion direction tracking module 42 substantially cancels transverse acceleration as discussed above, the MA computation module 44 determines the angle between the forward component of acceleration and the orientation of the mobile device 12. Here, the MA computation module 44 identifies the MA via eigen analysis, as performed by an eigen analysis module 56, and further processing performed by an angle direction inference module 58. Based on information provided by the motion direction tracking module 42, the eigen analysis module 56 determines the orientation of the sensor axes of the mobile device 12 with respect to the earth, from which a line corresponding to the direction of motion of the mobile device 12 is obtained.
The angle direction inference module 58 analyzes the obtained line, as well as forward and vertical acceleration data corresponding to the relevant pedestrian step(s), to determine the direction of the MA based on the direction of motion of the mobile device 12 (e.g., forward or backward along the obtained line). By doing so, the angle direction inference module 58 operates to resolve forward/backward ambiguity associated with the MA.

The angle direction inference module 58 leverages the motion signature of a pedestrian step to determine the direction of the MA. As discussed above, forward and vertical acceleration corresponding to a pedestrian step are related due to the mechanics of leg rotation, body movement, and other factors associated with pedestrian motion. Thus, the angle direction inference module utilizes knowledge of these relationships to identify whether a motion direction is forward or backward along a given line.

While the above discussion relates to obtaining a two-dimensional motion direction, e.g., with respect to a horizontal plane, similar techniques could be utilized to obtain a direction of motion in three dimensions. Thus, the techniques described herein can be extended to account for changes in altitude, pedestrian motion along an uneven surface, and/or other factors impacting the direction of motion in three dimensions.

Additionally, the techniques described above can be extended to leverage a gyroscope in addition to accelerometer(s) 28. With further reference to the biomechanics of pedestrian motion, leg rotation and other associated movements during a pedestrian step can be classified as angular movements, e.g., measured in terms of pitch or roll. Accordingly, a gyroscope can be used to separate gravity from acceleration due to movement such that the reference frame for computation can be rotated to account for the orientation of the mobile device 12 prior to the calculations described above.

Referring to FIG. 9, with further reference to FIGS. 1-8, a process 60 of computing the direction of motion of a mobile device 12 includes the stages shown. The process 60 is, however, an example only and not limiting. The process 60 can be altered, e.g., by having stages added, removed, rearranged, combined, and/or performed concurrently. Still other alterations to the process 60 as shown and described are possible.

At stage 62, acceleration information associated with a mobile device 12 is obtained. This information can be obtained by one or more accelerometers 28 and/or other sensor devices associated with the mobile device 12. At stage 64, the acceleration information obtained at stage 62 is partitioned according to respective detected pedestrian steps (e.g., running steps, walking steps, etc.). The pedestrian steps are detected by a step detector 40, with assistance from or independently of an accelerometer 28.

At stage 66, a forward motion direction of the mobile device 12 is identified based on the acceleration information corresponding to the respective detected pedestrian steps, as partitioned at stage 64. The forward motion direction is identified at stage 66 by a motion direction tracking module 42, e.g., with the aid of a step summation module 52 and/or a step correlation module 54 implemented by a processor 20 executing software 24 stored on a memory 22 as described above.

At stage 68, a MA between the forward motion direction of the mobile device 12 and an orientation of the mobile device 12 is computed.
The MA is computed by, e.g., a MA computation module 44 implemented by a processor 20 executing software 24 stored on a memory 22, based on eigen analysis and direction inference procedures as described above.

Referring next to FIG. 10, with further reference to FIGS. 1-8, an alternative process 70 of computing the direction of motion of a mobile device 12 includes the stages shown. The process 70 is, however, an example only and not limiting. The process 70 can be altered, e.g., by having stages added, removed, rearranged, combined, and/or performed concurrently. Still other alterations to the process 70 as shown and described are possible.

At stage 72, acceleration information is obtained that corresponds to a first pedestrian step (e.g., running step, walking step, etc.) of a user 2 of a mobile device 12 and a second pedestrian step immediately following the first pedestrian step. The acceleration information can be obtained by an accelerometer 28 associated with the mobile device 12 and/or by any other suitable means. Further, the first pedestrian step and the second pedestrian step can be delineated by a step detector 40, which can operate based on data obtained from the accelerometer 28 or independent movement data.

At stage 74, the acceleration information corresponding to the first pedestrian step is summed (e.g., by a step summation module 52 with shifting by a step shifter 50, as implemented by a processor 20 executing software 24 stored on a memory 22) with the acceleration information corresponding to the second pedestrian step. At stage 76, the acceleration information is further processed by correlating the vertical acceleration of the first pedestrian step with the horizontal acceleration of the first pedestrian step shifted forward and backward by one quarter pedestrian step using a vertical/forward correlation function. Here, the correlation function is implemented by a step correlation module 54 or other suitable mechanisms. Further, a step shifter 50 is used to provide the forward and backward shifting utilized in the correlations. Vertical acceleration and horizontal acceleration corresponding to the first pedestrian step are separated based on acceleration data provided by an accelerometer 28 (e.g., based on measurements corresponding to different sensor axes with respect to gravity, etc.), a step detector 40, or the like.

At stage 78, a misalignment angle between a motion direction of the mobile device 12 and an orientation of the mobile device 12 is identified by performing eigen analysis (e.g., via an eigen analysis module 56 associated with a MA computation module 44, each of which is implemented by a processor 20 executing software 24 stored on a memory 22) with respect to the results of the vertical/forward correlation function at stage 76. At stage 80, forward/backward ambiguity of the misalignment angle is resolved by an angle direction inference module 58 or other suitable mechanisms based on the sign (i.e., positivity or negativity) of the results of the vertical/forward correlation function utilized at stage 78.

Upon computation of the MA as shown at stage 80, various further functions can be performed. For example, the eigen analysis performed at stage 78 can be utilized to obtain an error estimate for the computed MA. As another example, the computed MA can be applied to a motion direction estimate to enhance the accuracy of pedestrian navigation applications or other appropriate applications. Other uses of the computed MA are also possible.
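Pieced together, the stages of FIG. 10 can be sketched in a few lines of NumPy. This is a simplified interpretation for illustration only, not the patented implementation: it assumes a constant step length in samples (at least four), vertical and horizontal components already separated (e.g., by a gravity split such as the one sketched earlier), and treats the eigen analysis as an eigen-decomposition of the 2x2 covariance of the processed horizontal acceleration:

```python
import numpy as np

def misalignment_angle(vert, horiz, step_len):
    """vert: (N,) vertical acceleration; horiz: (N, 2) horizontal acceleration
    in device axes; step_len: samples per pedestrian step (assumed >= 4)."""
    s = int(step_len)
    h = horiz[s:] + horiz[:-s]        # one-step shift-and-sum (stage 74):
    v = vert[s:] + vert[:-s]          # suppresses the transverse component
    q = s // 4                        # quarter-step shift in samples
    c_fwd = v[q:-q] @ h[2 * q:]       # vertical vs forward-shifted horizontal
    c_bwd = v[q:-q] @ h[:-2 * q]      # vertical vs backward-shifted horizontal
    d = c_bwd - c_fwd                 # stage 76: 2-vector strong along forward axis
    w, vec = np.linalg.eigh(h.T @ h)  # stage 78: eigen analysis of the motion line
    axis = vec[:, np.argmax(w)]       # principal axis = direction-of-motion line
    if axis @ d < 0:                  # stage 80: sign of the correlation result
        axis = -axis                  # resolves forward/backward ambiguity
    return np.degrees(np.arctan2(axis[1], axis[0]))
```

In practice the step length would come from the step detector 40 and would vary from step to step; the fixed step_len here is only to keep the sketch short.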
Still other techniques are possible.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1-3 are graphical illustrations of a technique for computing and applying a misalignment angle in a position location system for a moving pedestrian user.

FIG. 4 is a schematic diagram of a wireless telecommunication system.

FIG. 5 is a block diagram of components of a mobile station shown in FIG. 1.

FIG. 6 is a partial functional block diagram of the mobile station shown in FIG. 2.

FIG. 7 is a graphical illustration of a technique for computing and applying a misalignment angle in a position location system.

FIG. 8 is a partial functional block diagram of a system for computing a forward motion direction of a mobile station.

FIG. 9 is a block flow diagram of a process of computing the direction of motion of a mobile device.

FIG. 10 is a block flow diagram of an alternative process of computing the direction of motion of a mobile device.
OK. The human-caused global warming hypothesis is completely model-dependent. We can't directly observe cars and cows turning up the earth thermostat. Whatever human contribution there may be to climate constitutes just a few signals among many hundreds or thousands. All our models of the earth's climate are incomplete. That's why they keep changing, and that's why climate scientists keep finding surprises. As Rummy used to say, there are a ton of "unknown unknowns" out there. The real world is full of x's, y's and z's, far more than we can write little models about. How do you extract the human contribution from a vast number of unknowns? That's why constant testing is needed, and why it is so frustrating to do frontier science properly. Science is difficult because nature always has another surprise in store for us, dammit! Einstein rejected quantum mechanics, and was wrong about that. Newton went wrong on the foundations of calculus, a problem that didn't get solved until around 1900. Scientists are always wrong --- they are just less wrong now than they were before (if everything is going well). Check out the current issue of Science magazine. It's full of surprises. That's what it's for. Now there's a basic fact about complexity that helps to understand this. It's a point in probability theory (eek!) about many variables, each one less than 100 percent likely to be true. If I know that my six-sided die isn't loaded, I'll get a specific number on average one out of six rolls. Two rolls of the die produce a specific sequence with probability 1/6 x 1/6 = 1/36. For n rolls of the die, I get (1/6) multiplied by itself n times, or (1/6) to the nth power. That number becomes small very quickly. The more rolls of the die, the less likely it is that some particular sequence will come up. It's the first thing to know in any game of chance. Don't ever bet serious money if that isn't obvious. Now imagine that all the variables about global climate are known with less than 100 percent certainty. Let's be wildly and unrealistically optimistic and say that climate scientists know each variable to 99 percent certainty! (No such thing, of course). And let's optimistically suppose there are only one hundred x's, y's, and z's --- all the variables that can change the climate: like the amount of cloud cover over Antarctica, the changing ocean currents in the South Pacific, Mount St. Helens venting, sun spots, Chinese factories burning more coal every year, evaporation of ocean water (the biggest "greenhouse" gas), the wobbles of earth's orbit around the sun, and yes, the multifarious fartings of billions of living creatures on the face of the earth, minus, of course, all the trillions of plants and algae that gobble up all the CO2, nitrogen-containing molecules, and sulfur-smelling exhalations spewed out by all of us animals. Got that? It all goes into our best math model. So in the best case, the smartest climatologist in the world will know 100 variables, each one to an accuracy of 99 percent. Want to know what the probability of our spiffiest math model would be, if that perfect world existed? Have you ever multiplied (99/100) by itself 100 times? According to the Google calculator, it equals a little more than 36.6 percent. The bottom line: our best imaginable model has a total probability of one out of three. How many billions of dollars in Kyoto money are we going to spend on that chance? Or should we just blow it at the dog races? So all ye of global warming faith, rejoice in the ambiguity that real life presents to all of us.
Neither planetary catastrophe nor paradise on earth is a sure bet. Sorry about that. (Consider growing up, instead.) That's why human-caused global warming is a hypothesis, not a fact. Anybody who says otherwise isn't doing science, but trying to sell you a bill of goods. Probably. James Lewis

Uh oh. Another global warming denier! He's an international criminal for daring to dispute the theology. Arrest that man.

The ultimate cop-out is when they state that waiting for definitive proof would be too late to change it. How convenient for their agenda.

Fake but accurate science.

Wonder how Colorado is enjoying their "global warming" today? -15 in Greeley.

Excellent article to read on a 32-degree day in Houston. Thanks...

I get nervous when someone asserts that a problem is too complicated to tackle. He questions the reliability of the current global climate (computer) models, but neglects to mention that these have been successfully tested on real-world conditions. E.g., if you plug in the initial conditions resulting from the eruption of Mt. Pinatubo (1991), the models correctly predict the amount and duration of the resulting global cooling.

I go deaf to the global warming crowd incessantly wailing about man's impact on climate. There doesn't seem to be a credible argument for it. The message from the global warmist theology is often anti-American, fails to judge other nations equally, and usually calls for solutions that are impractical at best. What I do buy into can be expressed as follows: Consumption of resources and generation of waste are inevitable - and acceptable. Our ability to manage resources and pollution can positively affect quality of life.

Nice read. It's an argument quite a few of us could have made. Thanks to James Lewis for packaging it!

The biggest problem is that many people can't think except in cliches, and we don't have any cliche to say "I don't know." That's why the headline says "probably a crock" instead of "unknown certainty."

Ping for later read.

Wrong. My brother has a Ph.D. in atmospheric science and he BUILDS the computer models that you're talking about. His advice: "Don't ever trust a weather forecast more than two days out." If they can't predict local weather patterns more than a couple of days into the future, what confidence do you have that they can predict global weather patterns years in advance?

"E.g., if you plug in the initial conditions resulting from the eruption of Mt. Pinatubo (1991), the models correctly predict the amount and duration of the resulting global cooling." Those models only worked after the fact. Occasionally a computer model will actually "predict" a weather event to some degree of accuracy, but even a broken clock is right twice a day.

I feel like leaving that image as the desktop for my fiancée, who's European and very, "Americans are killing us with global warming."

Why Global Warming is Probably a Crock... Al Gore supports it.
https://freerepublic.com/focus/f-news/1768268/posts
Naples has excellent transport accessibility. The city has its own airport, where flights from Russia regularly arrive. In addition, the city can be reached by train, bus, or ferry from different cities in Italy.

Transport

Almost all public transport in Naples - buses, trolleybuses, trams, metro, and funiculars - is operated by the ANM transport company. Ground transportation is convenient and comfortable (except during peak hours), but tends to get stuck in traffic jams. Service starts at 5:00 and ends at midnight; the rest of the time, late-night passengers are carried home by infrequent night buses and taxis. During peak hours, the public transport interval is 10-15 minutes; in the evening it stretches to 30 minutes.

The metro, which appeared in Naples only in 1993, consists of two lines. Line 1 runs from the city center to the north. Its station "Toledo," resembling a snow cave, is recognized as the most beautiful in Western Europe. Line 6 serves the western part of the city. An important component of the transport network is the city's funiculars. Three lines - Chiaia, Montesanto, and Centrale - lead up Vomero Hill. A fourth line, Mergellina, connects the Mergellina waterfront with Via Manzoni. For trips around the city, you can also use the city trains; on the metro map (website in Italian) they are marked as lines 2, 3, 4, 5, and 7.

Travel tickets

Tickets are valid for travel on all types of public transport. A single Biglietto Orario (1.60 EUR) is valid for 90 minutes, the Biglietto Giornaliero (4.50 EUR) is valid for the whole day, and the Biglietto Settimanale (15.80 EUR) is valid for the whole week. Tickets can be bought at bars, tobacco shops, and newsstands.

Bicycles for rent

For those who want to pedal, there is the Bike Sharing Napoli program (website with an English version). You can borrow a bike for free at one of the 10 bicycle stations in the center of Naples and leave it at any other. Travel time is no more than 30 minutes; if the limit is exceeded, a fine of 60 to 600 EUR applies, depending on the length of the delay. The number of trips per day is unlimited.

Taxi

Official Neapolitan taxis are painted white. The flag-fall fee is 3.50 EUR from 07:00 to 22:00 on weekdays and 6 EUR from 22:00 to 07:00, as well as on weekends and public holidays. The fare for 1 km of the journey is 0.83 EUR, and the minimum cost of a trip on weekdays is 4.50 EUR. For every 10 seconds of idling, another 0.05 EUR is charged. Naples has a system of fixed fares for trips between the airport, train stations, the port, and other major areas of the city.

Naples Hotels

There are many budget hotels around the Central Station and Piazza Garibaldi. It is not difficult to rent a room for 30-40 EUR; the flip side of such low prices is a dirty neighborhood, noise, and a low level of service. A hotel in the historic center is a good option for those who want to do some sightseeing. The best of them occupy old mansions with antique furniture and gilded stucco, and the prices are quite reasonable: 140-190 EUR per day. If you can do without gilded stucco, you can rent a hotel room or apartment for 70-100 EUR. Few city hotels have their own parking lots; where available, the cost is 20-30 EUR per day. Hotels in the coastal part are the most expensive. There are many four- and five-star hotels with views of the Gulf of Naples, swimming pools, and designer interiors. Prices match the view from the window: 150-300 EUR per room.
Rent a Car

Renting a car in Naples is easy, but driving around the city is very difficult, especially in the rain and on Fridays during rush hour. If, however, you plan to tour the city's outskirts, a car is the best helper. At Capodichino airport there are counters of the leading international companies Alamo, Avis, Budget, Europcar, Hertz, and others, as well as the local firms Firefly, Locauto, and Maggiore, offering car rental at lower prices. For those wishing to pick up a car in the city at the Central Railway Station, there are offices of Avis, Hertz, Sixt, and Maggiore. Keep in mind that the cost of renting at the station is 30-50% higher than at the airport. At the airport, a one-day rental of a small Fiat 500 costs from 37 EUR, and a compact Ford Focus from 49 EUR; a week costs from 127 and 176 EUR, respectively. Searching for free parking in the center of Naples is useless. For security reasons, preference should be given to covered or underground parking. The cost of parking is 1-2.50 EUR per hour or 5-25 EUR per day, depending on the area.

Communication and Wi-Fi

The center of Naples is covered by a free Wi-Fi network, and the mayor's office is working to extend it to the entire city. You can freely connect to the Internet in bars, cafes and pizzerias, museums, railway stations, and other public places. Almost all hotels provide their guests with free Wi-Fi. In the offices of mobile operators, you can buy tourist SIM cards: ALL-IN Explorer Week (operator Tre, validity 7 days, 20 EUR) includes 10 GB of mobile data; Italy Tourist Pass (operator Wind, validity 28 days, 20 EUR) includes 100 minutes of calls to 40 countries and 2 GB of mobile internet.

City cards

In Naples, there are two types of tourist cards that can significantly save time and money when visiting attractions. With the ArteCard Napoli, you can visit the first two attractions for free and all of the other 40 with a 50% discount. It costs 32 EUR, is valid for 3 days, and also entitles you to free public transport. The ArteCard Campania for 3/7 days (32/34 EUR) covers more than 80 sites in Naples and the Campania region, including Pompeii and Herculaneum. It likewise allows you to visit the first two attractions for free and the rest with a 50% discount, plus free rides on public transport throughout the region. Cards can be ordered online or purchased at the box office of any of the covered sites, at the airport, at railway stations, or from travel agencies. A complete list of covered sites is given on the official website (an English version is available). Cardholders enter without queuing.
https://www.diseaseslearning.com/how-to-get-to-naples-italy/
To cite this article: Habib Ahmed Elsayir. Significance Test in Meta-analysis Approach: A Theoretical Review. American Journal of Theoretical and Applied Statistics. Vol. 4, No. 6, 2015, pp. 630-639. doi: 10.11648/j.ajtas.20150406.35

Abstract: Meta-analysis, a statistical procedure that integrates the results of several independent studies, plays a central role in statistical research and in significance testing. This paper discusses its principles, along with the practical steps in performing a meta-analysis. It describes what meta-analysis is, how it is done, and how it can be interpreted. Related problems such as statistical significance, effect size, and power analysis are described. Examples of implementation on theoretical data are carried out, and results, conclusions, and recommendations on the use of meta-analysis are summarized.

Keywords: Effect Size, Meta-analysis, Sample Size, Sensitivity Analysis, Significance Test, Systematic Review

1. Introduction

Meta-analysis is used in many fields of application. Pharmaceutical companies use meta-analysis to gain approval for new drugs. Clinicians and applied researchers in medicine, education, psychology, criminal justice, and several other fields use meta-analysis to determine which interventions work, and which ones work best. Meta-analysis is also widely used in basic research to evaluate the evidence in areas as diverse as sociology, social psychology, sex differences, finance and economics, political science, marketing, ecology, and genetics, among others. Decisions about the utility of an intervention or the validity of a hypothesis cannot be based on the results of a single study, since results vary from one study to another; rather, a mechanism is needed to synthesize data across studies. Narrative reviews have been used for this purpose, but they are considered largely subjective (different reviewers reach different conclusions) and become impossibly difficult when more than a few studies are involved. Meta-analysis, by contrast, applies objective formulas and can be used with any number of studies. Meta-analysis is a statistical procedure that integrates the results of several independent studies considered to be "combinable". Well-conducted meta-analyses allow a more objective appraisal of the evidence than traditional narrative reviews, provide a more precise estimate of a treatment effect, and may explain heterogeneity between the results of individual studies. Badly conducted meta-analyses, on the other hand, may be biased owing to the exclusion of relevant studies or the inclusion of inadequate studies (Egger et al., 1997). Meta-analysis is a statistical technique in which the results of two or more studies are mathematically combined to see whether the overall effect is significant, in order to improve the reliability of the results. When there are multiple studies with conflicting results, meta-analysis is useful because it combines and tests the results of all the studies. The result is the same as doing one study with a really big sample size, one large enough to conclusively demonstrate an effect if there is one, or conclusively reject an effect of an appreciable size if there isn't one (John H. McDonald, 2014). Studies chosen for inclusion in a meta-analysis must be sufficiently similar in a number of characteristics in order to accurately combine their results.
When the treatment effect (or effect size) is consistent from one study to the next, meta-analysis can be used to identify this common effect. When the effect varies from one study to the next, meta-analysis may be used to identify the reason for the variation.

In this article, the general steps involved in doing a meta-analysis are described, and some of its basic building blocks are explained; sufficient detail can be found in Berman and Parker (2002), Gurevitch and Hedges (2001), Hedges and Olkin (1985), and other books. This paper also gives a brief demonstration of the basic methodology of effect sizes, reviews issues of the topic, and is accompanied by numerical illustrations. The tables presented here are computed from different sources and verified using online effect size software (see the list of website references). The use of effect sizes, however, has generally been limited to meta-analysis for combining and comparing estimates from different studies, despite the fact that measures of effect size have been available for decades (Huberty, 2002). The concept of effect size is tied to the school of methodology known as meta-analysis (see Baker & Dwyer, 2000; Biostat, 2006; Poston & Hanson, 2010). Leaning heavily on Rosenthal (1994), Rosenthal & Rosnow (2000) introduced a useful summary of effect size computation and transformations for inferential statistics. Michael Fur (2008) has also discussed effect sizes and their links to inferential statistics. Meta-analysis always deals with two issues: publication bias (also known as the file-drawer problem) and the varying quality of the studies. Publication bias is "the systematic error introduced in a statistical inference by conditioning on publication status." Publication bias can lead to misleading results when a statistical analysis is performed after assembling all of the published literature on some subject (Gerard E. Dallal, 2015).

Meta-analysis is used for the following purposes:

1) To establish statistical significance with studies that have conflicting results.

2) To develop a more correct estimate of effect magnitude.

3) To provide a more complex analysis of harms, safety data, and benefits.

4) To examine subgroups with individual numbers that are not statistically significant.

There is, as yet, no unanimously accepted strategy for performing a meta-analysis, but researchers agree that each meta-analysis should be conducted like a scientific experiment and begin with a protocol which clearly states its aim and methodology (J Hypertens, 1996). Meta-analysis should be as carefully planned as any other research project, with a detailed written protocol being prepared in advance (Egger et al., 1997). Potential advantages of meta-analysis (e.g., over classical literature reviews or simple overall means of effect sizes) include (see Deeks, Higgins, and Altman, and the statistical-solutions-software web page):

1) Derivation and statistical testing of overall factors / effect size parameters in related studies.

2) The ability to answer questions not posed by individual studies, and generalization to the population of studies.

3) The ability to control for between-study variation.

4) The inclusion of moderators to explain variation.

5) Higher statistical power to detect an effect than any single study alone.

6) An improvement in precision (illustrated in the sketch below).
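Advantages 5 and 6 can be shown numerically: pooling with inverse-variance weights yields a standard error smaller than that of any contributing study. The sketch below is illustrative only; the effect estimates and standard errors are hypothetical:

```python
import math

# Hypothetical effect estimates and standard errors from five studies.
effects = [0.42, 0.31, 0.55, 0.18, 0.47]
ses     = [0.20, 0.25, 0.30, 0.22, 0.18]

weights   = [1 / se**2 for se in ses]              # inverse-variance weights
pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))            # SE of the pooled estimate

print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
print(f"smallest single-study SE = {min(ses):.2f}")  # pooled SE is smaller
print(f"z = {pooled / pooled_se:.2f}")               # overall significance test
```

No single study here reaches conventional significance on its own, yet the pooled z statistic does, which is exactly the power gain the list above refers to.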
Considered an evidence-based resource, meta-analysis offers the opportunity to critically evaluate and statistically combine the results of comparable studies or trials. However, a disadvantage of meta-analysis (see The Himmelfarb Health Sciences Library, 2011) is that it is difficult and time-consuming to identify appropriate studies, and not all studies provide adequate data for inclusion and analysis. In addition, it requires advanced statistical techniques and must confront the heterogeneity of study populations. In general, the weaknesses of meta-analysis are as follows (see the statistical-solutions-software web page):

1) Meta-analysis can never follow the rules of hard science.

2) Sources of bias are not controlled by the method.

3) A good meta-analysis of badly designed studies will still result in bad statistics.

4) Heavy reliance on published studies, which may create exaggerated outcomes, as it is very hard to publish studies that show no significant results (the file-drawer problem).

5) Dangers of agenda-driven bias: from an integrity perspective, researchers with a bias should avoid meta-analysis and use a less abuse-prone (or independent) form of research.

A meta-analysis answers three general questions (see the Overview of Meta-Analysis web page):

1) Central tendency - The central purpose of a meta-analysis is to test the relationship between two variables such that X affects Y. Central tendency identifies whether X affects Y by statistically summarizing significance levels, effect sizes, and/or confidence intervals, and tries to answer whether X affects Y, whether the effect is significant, and how strong that effect is.

2) Variability - There is always some degree of variation between the outcomes of the individual studies that compose the meta-analysis. The question is whether the degree of variability is significantly different from what we would expect by chance alone. If so, it is called heterogeneity.

3) Prediction - If there is heterogeneity (variability), then we look for moderating variables that explain the variability (does the effect of X on Y differ with moderator variables?).

1.1. Meta-Analysis Basic Steps

There are generally five separate steps in conducting a meta-analysis (see the Meta-analysis page on PsychWiki):

1. Hypothesis defining - A well-defined statement of the relationship between the variables under investigation must be determined in order to define carefully the inclusion and exclusion criteria when locating potential studies.

2. Locating the studies - A meta-analysis is only informative if it adequately summarizes the existing literature (database searches, unpublished studies, conference proceedings, etc.).

3. Data collection - Gather empirical findings from the primary studies (e.g., p-value, effect size, etc.) and input them into a statistical database.

4. Effect size calculation - Calculate the overall effect by converting all statistics to a common metric, making adjustments as necessary to correct for issues like sample size or bias, and then calculating central tendency (e.g., mean effect size and confidence intervals around that effect size) and variability (e.g., heterogeneity analysis).

5. Variable analysis - If heterogeneity exists, you may want to analyze moderating variables by coding each variable in the database and analyzing either mean differences (for categorical variables) or weighted regression (for continuous variables) to see if the variable accounts for the variability in the effect size (a minimal heterogeneity check is sketched after this list).
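The variability check in steps 4-5 is commonly carried out with Cochran's Q and the I² statistic. A minimal sketch, assuming study effects already on a common metric with known standard errors (the numbers are hypothetical):

```python
from scipy.stats import chi2

effects = [0.42, 0.31, 0.55, 0.18, 0.47]   # hypothetical study effects
ses     = [0.20, 0.25, 0.30, 0.22, 0.18]   # their standard errors

w    = [1 / s**2 for s in ses]
mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled mean,
# compared to a chi-square distribution with k-1 degrees of freedom.
Q  = sum(wi * (e - mean)**2 for wi, e in zip(w, effects))
df = len(effects) - 1
p  = chi2.sf(Q, df)

# I^2: the share of total variation attributable to heterogeneity.
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(f"Q = {Q:.2f} (df = {df}, p = {p:.3f}), I^2 = {I2:.1f}%")
```

A non-significant Q (and low I²) suggests the studies share a common effect; otherwise, step 5's moderator analysis is warranted.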
1.2. Steps of Conducting a Meta-analysis

First, select a suitable statistical approach. Generally, there are three different statistical approaches to conducting a meta-analysis, so you first need to choose which approach best fits your needs. Detailed comparisons of these three approaches are found in Johnson, Mullen, & Salas (1995) and Schmidt and Hunter (1999):

1. The Hedges & Olkin approach - see Hedges (1981); Hedges (1982); Hedges & Olkin (1985).

2. The Rosenthal & Rubin approach - see Rosenthal (1991); Rosenthal & Rubin (1978); Rosenthal & Rubin (1988).

3. The Hunter, Schmidt, & Jackson approach - see Hunter, Schmidt, & Jackson (1982); Hunter & Schmidt (1990).

Second, choose which effect size index to calculate. The commonly used effect size indexes are the "r" family and the "d" family of effect sizes. Since r and d can be transformed into each other statistically, you may wonder why it matters which metric you choose. Empirical research can take many forms (e.g., dichotomous and/or continuous variables, two-variable relationships, etc.), and the form of the research you are analyzing helps determine which metric may be best to use. For complete information and statistical formulas for all effect size indexes for each form of research, see Lipsey & Wilson (2001), Practical Meta-Analysis.

1. The r family - correlation coefficient. The r family includes all types of correlation coefficients (e.g., r, phi, rho, etc.), and Johnson & Eagly (2000) suggest using r when the studies composing the meta-analysis primarily report the correlation between variables; but see also Rosenthal & DiMatteo (2001) for a discussion of the advantages of using r over d.

2. The d family - standardized difference. The d family includes Cohen's d (unweighted) and Hedges' g (weighted), and Johnson & Eagly (2000) suggest using d when the studies composing the meta-analysis primarily report ANOVA and t-test comparisons between groups.

Third, choose your statistical software. There are two basic options: use specialized software designed to conduct meta-analyses, or use standard statistical software such as SPSS and SAS. For websites providing effect size calculations and software, see Becker (2000), Biostat (2006), Buchner, and Karl L. Wuensch (2010).

1. SPSS and SAS - the David B. Wilson website provides an Excel spreadsheet for calculating effect sizes, along with SPSS and SAS resources.

2. MIX 2.0 - professional software for meta-analysis in Excel.

3. Meta-Analysis - developed by Schwarzer (1996), it can be found on the Ralf Schwarzer website, and each of the three meta-analytic approaches can be selected (i.e., the Hedges/Olkin approach, the Rosenthal approach, or the Hunter/Schmidt/Jackson approach).

4. META (Meta-Analysis Easy to Answer) - developed by David A. Kenny; a description of the software can be found on the David A. Kenny website.

5. Meta-Analysis Calculator - developed by Larry C. Lyons as a web-based meta-analysis application and companion to the meta-analysis pages.

6. CMA (Comprehensive Meta-Analysis) - developed by many of the experts in meta-analysis; it includes a comparison between CMA and other meta-analytic software.

2. The Meta-analysis Procedure

The basic idea of a meta-analysis is that you take a weighted average of the difference in means, the slope of a regression, or some other statistic across the different studies. Experiments with larger sample sizes get more weight, as do experiments with smaller standard deviations or higher r² values (John H. McDonald, 2014).
You can then test whether this common estimate is significantly different from zero. Before starting to collect studies, it's essential to decide which ones are to be included or excluded through objective criteria. For instance, if you're looking at the effects of a drug on a disease, you might decide that only double-blind, placebo-controlled studies are worth looking at; or you might decide that single-blind studies are acceptable; or you might decide that any study at all on the drug and the disease should be included. Sample size shouldn't be used as a criterion for including or excluding studies, because the statistical techniques used for the meta-analysis will give studies with smaller sample sizes the lower weight they deserve (John H. McDonald, 2014).

It is important to obtain all relevant studies, because missing studies can lead to bias in the analysis. Typically, published papers and abstracts are identified by a literature search. Cross-checking of references, citations in review papers, and communication with scientists who have been working in the relevant field are important methods used to provide a comprehensive search (Haidich, 2010). It is not feasible to find absolutely every relevant study on a subject: some or even many studies may not be published, and those that are might not be indexed in computer-searchable databases. The decision whether to include unpublished studies is difficult. Although the language of publication can present a difficulty, it is important to overcome it, provided that the populations studied are relevant to the hypothesis being tested (Haidich, 2010).

A critical issue in meta-analysis is what's known as the "file-drawer effect": people who do a study and fail to find a significant result are less likely to publish it than if they find a significant result. To limit the file-drawer effect, it's important to do a thorough literature search, including really obscure journals, and then try to see if there are unpublished experiments. To find out about unpublished experiments, you could look through summaries of funded grant proposals (which are publicly available for government agencies), look through meeting abstracts in the appropriate field, write to the authors of published studies, and send out appeals on e-mail mailing lists. There are ways to estimate how many unpublished, non-significant studies there would have to be to make the overall effect in a meta-analysis non-significant (one such estimate is sketched below). If that number is absurdly large, you can be more confident that your significant meta-analysis is not due to the file-drawer effect.

2.1. Systematic Review and Meta-analysis

A meta-analysis is a subset of systematic reviews: a method for systematically combining pertinent qualitative and quantitative study data from several selected studies to develop a single conclusion that has greater statistical power. This conclusion is statistically stronger than the analysis of any single study, due to increased numbers of subjects, greater diversity among subjects, or accumulated effects and results. Just like other research articles, systematic reviews can be of varying quality. They answer a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria, and a meta-analysis is the use of statistical methods to summarize the results of these studies.
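Returning to the file-drawer problem discussed above: the estimate mentioned there is often computed as Rosenthal's fail-safe N, the number of unpublished null-result studies that would pull a Stouffer-combined Z below the significance criterion. A minimal sketch with hypothetical per-study z-values:

```python
import math

zs = [2.1, 1.8, 2.5, 1.9, 2.3]     # hypothetical per-study z-values
k  = len(zs)
z_alpha = 1.645                     # one-tailed 5% criterion

combined = sum(zs) / math.sqrt(k)   # Stouffer's combined Z across k studies

# Fail-safe N: adding N studies averaging z = 0 changes the combined Z to
# sum(zs) / sqrt(k + N); solve for the N that drags it down to z_alpha.
n_fs = (sum(zs) / z_alpha) ** 2 - k

print(f"combined Z = {combined:.2f}")
print(f"fail-safe N = {math.floor(n_fs)} hidden null studies")
```

If the fail-safe N is far larger than the number of studies that could plausibly sit in file drawers, the pooled result is robust to publication bias in this specific sense.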
There are some questions that must be asked when assessing the quality of a systematic review, such as (see the web page of the National Center for Biotechnology Information):

• Was the review conducted according to a pre-specified protocol?

• Were the "right" types of studies eligible for the review?

• Was the method of identifying all relevant information comprehensive?

• Was the data abstraction from each study appropriate?

• How was the information synthesized and summarized?

The strength of a systematic review lies in the transparency of the process, allowing the reader to focus on the decisions made in compiling the information, rather than on a simple contrast of one study to another as sometimes occurs in other types of reviews. A well-conducted systematic review attempts to reduce the possibility of bias in the method of identifying and selecting studies for review. Mathematically combining data from a series of well-conducted primary studies may provide a more precise estimate of the underlying "true effect" than any individual study. In other words, by combining the samples of the individual studies, the size of the "overall sample" is increased, enhancing the statistical power of the analysis and reducing the size of the confidence interval for the point estimate of the effect. It is also more efficient to communicate a pooled summary than to describe the results for each of the individual studies. For these reasons, a meta-analysis of similar, well-conducted, randomized, controlled trials has been considered one of the highest levels of evidence. When the existing studies have important scientific and methodological limitations, including small sample sizes (which is more often the case), the systematic review may identify where gaps exist in the available literature; in this case, an exploratory meta-analysis can provide a plausible estimate of effect that can be tested in subsequent studies.

Conducting a meta-analysis does not overcome problems that were inherent in the design and execution of the primary studies. It also does not correct biases resulting from selective publication, whereby studies that report dramatic effects are more likely to be identified, summarized, and subsequently pooled in meta-analysis than studies that report smaller effect sizes (publication bias). Combining studies of poor quality with those that were more rigorously conducted may not be useful and can lead to worse estimates of the underlying truth, or a false sense of precision around the truth. A false sense of precision may also arise when various subgroups of subjects, defined by characteristics such as age or gender, differ in their observed response; in such cases, reporting an aggregate pooled effect might be misleading. A sensitivity analysis is essential to assess the robustness of combined estimates to different assumptions and inclusion criteria (Egger et al., 1997). Opinions will often diverge on the correct method for performing a particular meta-analysis; the robustness of the findings to different assumptions should therefore always be examined in a thorough sensitivity analysis.

2.2. A Study Example

Seto et al. (2011) reviewed the English-language literature for studies that monitor urban land-use change using satellite or airborne remotely sensed data, published between 1988 and December 2008. To be included in the analysis, a study had to meet the following four criteria:

1. The study must quantify the urban area extent at at least one point in time.
2. The study must quantify either the rate or amount of urban land expansion over a specific period of time.

3. The study area extent must be at city, metro, or regional scale (<100,000 km²).

4. The study must not repeat the results presented in another paper.

The literature review generated more than 1,000 papers. Among these, the authors filtered those that met criteria 1 and 2, which resulted in 264 papers, and further narrowed this set to those that met criteria 3 and 4, which yielded 180 papers. In addition to this set of peer-reviewed papers, the authors reviewed and included a World Bank study that was similar in method and scientific rigor, and used a multivariate regression on the pooled dataset to model the global rate of urban land expansion. They selected a range of independent variables based on urban theory and models, representing the major forces that drive the physical expansion of urban land cover; the dependent variable was a single annual rate for each decadal period in each study. Results showed considerable variation in the rates of urban expansion over the study period. Variations in urban expansion rates point to differences in national and regional socio-economic environments and political conditions.

2.3. The Evolution of Meta-analysis

The classical meta-analysis compares two treatments, while network meta-analysis (or multiple-treatment meta-analysis) can provide estimates for multiple treatment regimens. Meta-analysis can also be used to summarize the performance of diagnostic and prognostic tests. However, studies that evaluate the accuracy of tests have a unique design, requiring different criteria to appropriately assess the quality of studies and the potential for bias. Furthermore, there are many methodologies for advanced meta-analysis that have been developed to address specific concerns, such as multivariate meta-analysis. Meta-analysis is no longer a novelty in medicine: numerous meta-analyses have been conducted on the same medical topic by different researchers, and recently there is a trend to combine the results of different meta-analyses, known as a meta-epidemiological study, to assess the risk of bias.

3. Computing Effect Size in Meta-analysis

Methods used for meta-analysis take a weighted average of the results and can be broadly classified into two models (Egger et al., 1997), the difference consisting in the way the variability of the results between the studies is treated. The "fixed effects" model considers that the variability is exclusively due to random variation; therefore, if all the studies were infinitely large, they would give identical results. The "random effects" model assumes a different underlying effect for each study and takes this into consideration as an additional source of variation, which leads to somewhat wider confidence intervals than the fixed effects model. Some statisticians feel that other statistical approaches are more appropriate than either of the above. One approach uses Bayes's theorem: Bayesian statisticians express their belief about the size of an effect by specifying some prior probability distribution before seeing the data, and then they update that belief by deriving a posterior probability distribution, taking the data into account. Bayesian models are available under both the fixed and random effects assumptions, but this approach is controversial because the definition of the prior probability will often be based on subjective assessments and opinion (Egger et al., 1997).
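The difference between the two weighting schemes can be sketched numerically. The fragment below uses the DerSimonian-Laird moment estimator for the between-study variance τ²; the study effects and standard errors are hypothetical:

```python
import math

effects = [0.42, 0.31, 0.55, 0.18, 0.47]   # hypothetical study effects
ses     = [0.20, 0.25, 0.30, 0.22, 0.18]
k = len(effects)

# Fixed-effects pooling: inverse-variance weights only.
w = [1 / s**2 for s in ses]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# DerSimonian-Laird moment estimate of the between-study variance tau^2.
Q = sum(wi * (e - fixed)**2 for wi, e in zip(w, effects))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooling: tau^2 is added to every study's variance,
# which widens the confidence interval relative to the fixed model.
w_re   = [1 / (s**2 + tau2) for s in ses]
random = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se_re  = math.sqrt(1 / sum(w_re))

print(f"tau^2 = {tau2:.4f}")
print(f"fixed = {fixed:.3f}, random = {random:.3f} +/- {1.96 * se_re:.3f}")
```

When τ² is estimated as zero the two models coincide, which is one way to see the fixed-effects model as the special case of purely within-study variation described above.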
Effect size is an important tool in reporting and interpreting effectiveness, and has many advantages over the use of tests of statistical significance. Effect size is valuable for quantifying the effectiveness of a particular intervention relative to some comparison, and is one of the tools that help researchers move beyond null hypothesis testing. Effect size is a name given to a set of indices that measure the magnitude of a treatment effect; unlike significance tests, these indices are independent of sample size. Effect size measures are the common currency of meta-analysis studies that summarize the findings from a specific area of research. Effect size quantifies the size of the difference between two groups, and may therefore be said to be a true measure of the significance of the difference. Another use of effect size is in performing power analysis (see Buchner, Erdfelder, and Faul, 2009). Research designers use power analysis to minimize the likelihood of both false positives and false negatives (Type I and Type II errors, respectively) (Richard A. Zeller and Yan Yan, 2007).

3.1. Effect Sizes & Confidence Intervals

Meta-analysis reports findings in terms of effect sizes. The effect size provides information about how much change is evident across all studies and for subsets of studies. There are many different types of effect size, but they fall into two main families: standardized mean differences (e.g., Cohen's d or Hedges' g) and correlations (e.g., Pearson's r). It is possible to convert one effect size into another, so each really just offers a differently scaled measure of the strength of an effect or a relationship. The standardized mean effect size is basically computed as the difference score divided by the standard deviation of the scores. In meta-analysis, effect sizes should also be reported with the number of studies and the number of effects used to create the estimate, and with confidence intervals to help readers determine the consistency and reliability of the mean estimated effect size. Tests of statistical significance can also be conducted on the effect sizes. Different effect sizes are calculated for different constructs of interest, as predetermined by the researchers based on what issues are of interest in the research literature. A number of statistics are sometimes proposed as alternative measures of effect size other than the standardized mean difference. One of these is the proportion of variance accounted for, R², which represents the proportion of the variance in one variable that is "accounted for" by the other. There are also effect size measures for multivariate outcomes; a detailed explanation can be found in Olejnik and Algina (2000). Calculating effect size is also important when performing a goodness-of-fit or contingency test; for these tests, the effect size symbol is w. Once the effect size is known, this information can be used to calculate the number of participants needed and the critical chi-square value (for sample size rules, see Aguinis & Harden, 2009; for the effect of sample size on effect size, see Slavin & Smith, 2008). The formulas developed for effect size calculation vary depending on whether the researcher plans to use analysis of variance (ANOVA), a t-test, regression, or correlation (see Morris and DeShon, 2002).
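The interconvertibility of the d and r families can be made concrete. For equal group sizes the standard conversions are r = d/√(d² + 4) and d = 2r/√(1 − r²); the quick sketch below (illustrative only) reproduces the conversion table given later in Section 3.2:

```python
import math

def r_from_d(d):
    # Equal group sizes assumed; with unequal n1, n2 the constant 4 is
    # replaced by the correction factor (n1 + n2)**2 / (n1 * n2).
    return d / math.sqrt(d**2 + 4)

def d_from_r(r):
    return 2 * r / math.sqrt(1 - r**2)

for d in (0.2, 0.5, 0.8, 1.0, 2.0):
    r = r_from_d(d)
    print(f"d = {d:<4} -> r = {r:.3f} -> back to d = {d_from_r(r):.3f}")
```

For example, d = 0.8 maps to r ≈ 0.371, matching the entry in the conversion table below.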
Formulas used to measure effect size can be computed either as a standardized difference between two means, or as the correlation between the independent variable classification and the individual scores on the dependent variable, which is called the "effect size correlation" (Rosnow & Rosenthal, 1996). The effect size for a difference in means is given by Cohen's d (Cohen, 1988), defined in terms of population means (μ) and standard deviation (σ):

d = \frac{\mu_1 - \mu_2}{\sigma}    (1)

There are several different ways one could estimate σ from sample data, which leads to multiple variants within the Cohen's d family (see Karl L. Wuensch, 2010). When using the root-mean-square standard deviation, d is given as:

d = \frac{M_1 - M_2}{\sqrt{(s_1^2 + s_2^2)/2}}    (2)

A version of Cohen's d uses the pooled standard deviation and is also known as Hedges' g:

g = \frac{M_1 - M_2}{s_{pooled}}    (3)

The value of s_{pooled} can be obtained from an ANOVA program by taking the square root of the mean square error, also known as the root mean square error. Another variant of Cohen's d, using the standard deviation of the control group, is known as Glass' Δ (see Karl L. Wuensch, 2010):

\Delta = \frac{M_1 - M_2}{s_{control}}    (4)

The control group's standard deviation is used because it is not affected by the treatment. Nevertheless, it is often suggested to use a pooled within-group standard deviation, because it has less sampling error than the control-group standard deviation, provided the groups are of roughly equal size. When there are more than two groups, the difference between the largest and smallest means divided by the square root of the mean square error is used, i.e.:

d = \frac{M_{max} - M_{min}}{\sqrt{MS_{error}}}    (5)

For OLS regression, the measure of effect size is f², defined by Cohen as follows:

f^2 = \frac{R^2}{1 - R^2}    (6)

or, as usually reported, f, computed by taking the square root of f². Once again there are several ways in which the effect size can be computed from sample data. It can be noted that η² is another name for R², the coefficient of determination (see Karl L. Wuensch, 2010):

\eta^2 = R^2 = \frac{SS_{effect}}{SS_{total}}    (7)

The effect size used in analysis of variance is defined by the ratio of population standard deviations:

f = \frac{\sigma_{means}}{\sigma_{error}}    (8)

Based on its definitional formula in terms of population values, the effect size w can be viewed as the square root of the standardized chi-square statistic:

w = \sqrt{\sum_i \frac{(p_{1i} - p_{0i})^2}{p_{0i}}}    (9)

and w is computed using sample data by the formula:

w = \sqrt{\chi^2 / n}    (10)

According to Poston & Hanson (2010), when a study reports a hit rate (percentage of success after taking the treatment or no treatment), an arcsine-based difference can be used:

d = 2\arcsin\sqrt{p_1} - 2\arcsin\sqrt{p_2}

where p₁ and p₂ are the hit rates of the two groups. If the effect size estimate from the sample is d, then it is approximately normally distributed, with standard deviation:

\sigma_d = \sqrt{\frac{n_E + n_C}{n_E n_C} + \frac{d^2}{2(n_E + n_C)}}    (11)

(where n_E and n_C are the numbers in the experimental and control groups, respectively). The control group provides a natural estimate of the standard deviation, since it consists of a representative group of the population who have not been affected by the experimental intervention. It is nevertheless often better to use a "pooled" estimate of standard deviation, which is given by:

s_{pooled} = \sqrt{\frac{(n_E - 1)s_E^2 + (n_C - 1)s_C^2}{n_E + n_C - 2}}    (12)

(where n_E and n_C are the numbers in the experimental and control groups, respectively, and s_E^2 and s_C^2 are their variances).
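To make equations (1)-(5) and (12) concrete, here is a minimal Python sketch; the sample data are hypothetical, invented purely for illustration:

```python
import statistics as st

treat   = [5.1, 6.3, 5.8, 6.9, 5.5, 6.1]   # hypothetical treatment scores
control = [4.8, 5.2, 4.5, 5.9, 5.0, 4.7]   # hypothetical control scores

m1, m2 = st.mean(treat), st.mean(control)
s1, s2 = st.stdev(treat), st.stdev(control)
n1, n2 = len(treat), len(control)

# Pooled SD (eq. 12) -> pooled-SD variant of d / Hedges-style g (eq. 3)
sp = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
d_pooled = (m1 - m2) / sp

# Root-mean-square SD variant (eq. 2)
s_rms = ((s1**2 + s2**2) / 2) ** 0.5
d_rms = (m1 - m2) / s_rms

# Glass' delta: control-group SD only (eq. 4)
glass = (m1 - m2) / s2

print(f"d (pooled) = {d_pooled:.3f}, d (RMS) = {d_rms:.3f}, "
      f"Glass delta = {glass:.3f}")
```

The three variants differ only in which standard deviation is placed in the denominator, which is exactly the distinction the equations above draw.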
To calculate the effect size g and its correction d in meta-analysis, we use Cohen's g, defined as:

g = \frac{\bar{X}_E - \bar{X}_C}{s_{pooled}}    (13)

where \bar{X}_E is the mean of the experimental group, \bar{X}_C is the mean of the control group, and s_{pooled} is the pooled sample standard deviation. Here g is a biased estimator of the population effect size

\delta = \frac{\mu_E - \mu_C}{\sigma}    (14)

According to DeCoster (2004), g can be corrected by multiplication by the term

J = 1 - \frac{3}{4(n_E + n_C) - 9}    (15)

where n_E and n_C are the group sizes. The resulting statistic

d = J \cdot g    (16)

is known as Hedges' d, which is an unbiased estimator of δ. The variance of d, given a relatively large sample, is

\hat{\sigma}^2(d) = \frac{n_E + n_C}{n_E n_C} + \frac{d^2}{2(n_E + n_C)}    (17)

A confidence interval at level c can be constructed by

d \pm z_c \sqrt{\hat{\sigma}^2(d)}    (18)

where z_c is the critical value from the normal distribution. The pooled standard deviation can be calculated from the two groups by the formula

s_{pooled} = \sqrt{\frac{(n_E - 1)s_E^2 + (n_C - 1)s_C^2}{n_E + n_C - 2}}    (19)

Following DeCoster (2004), the t statistic for a between-subjects design comparing the experimental and control groups is related to g by the formula

g = t \sqrt{\frac{1}{n_E} + \frac{1}{n_C}}    (20)

When we have the same number of subjects n in the experimental and control groups, the above formula reduces to

g = t \sqrt{2/n}    (21)

When a z-score compares the experimental and control groups,

g = z \sqrt{\frac{1}{n_E} + \frac{1}{n_C}}    (22)

whereas for an F statistic (with one numerator degree of freedom) comparing the experimental and control groups,

g = \sqrt{F \left( \frac{1}{n_E} + \frac{1}{n_C} \right)}    (23)

The method of calculating g from a within-subjects design is similar to that of the between-subjects comparison. Hence, following the above logic,

g = t \sqrt{\frac{2(1 - r)}{n}}    (24)

with

s_{pooled} = \frac{s_D}{\sqrt{2(1 - r)}}    (25)

where s_D is the standard deviation of the difference scores and r is the correlation between the experimental and control scores. Based on the above formulas, the larger the effect size, the greater the impact of an intervention. Cohen suggested that a correlation of 0.5 is large, 0.3 is moderate, and 0.1 is small; Cohen defined 0.40 as the medium effect size because it was close to the average observed effect size (Aguinis & Harden, 2009). The usual interpretation of this statement is that anything greater than 0.5 is large, 0.5-0.3 is moderate, 0.3-0.1 is small, and anything smaller than 0.1 is trivial.
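Equations (13) and (15)-(19) chain together as follows. A minimal sketch, assuming two-group summary statistics are already in hand (the numbers are hypothetical):

```python
import math

m_e, m_c = 6.0, 5.0          # hypothetical group means
s_e, s_c = 1.2, 1.0          # group standard deviations
n_e, n_c = 30, 30            # group sizes

# Pooled SD (eq. 19) and the uncorrected g (eq. 13)
sp = math.sqrt(((n_e - 1) * s_e**2 + (n_c - 1) * s_c**2) / (n_e + n_c - 2))
g = (m_e - m_c) / sp

# Small-sample bias correction (eqs. 15-16), giving Hedges' d
d = g * (1 - 3 / (4 * (n_e + n_c) - 9))

# Large-sample variance (eq. 17) and 95% confidence interval (eq. 18)
var_d = (n_e + n_c) / (n_e * n_c) + d**2 / (2 * (n_e + n_c))
half = 1.96 * math.sqrt(var_d)

print(f"g = {g:.3f}, d = {d:.3f}, 95% CI = [{d - half:.3f}, {d + half:.3f}]")
```

The correction factor matters mainly for small samples; at n_E = n_C = 30 it shrinks g by only about 1%.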
The interpretations of effect sizes given in Table 1, which suggests values for small, medium, and large effects, depend on the assumption that both control and experimental groups have a "normal" distribution; otherwise, it may be difficult to make a fair comparison between an effect size based on normal distributions and one based on non-normal distributions. In practice, the values for large effects may be exceeded, with Cohen's d values greater than 1.0 not uncommon.

Table 1. Conventional small, medium, and large effect sizes.

  Test                      Index   Small   Medium   Large
  t-test for means          d       0.20    0.50     0.80
  t-test for correlation    r       0.10    0.30     0.50
  F-test for regression     f²      0.02    0.15     0.35
  F-test for ANOVA          f       0.10    0.25     0.40
  chi-square                w       0.10    0.30     0.50

Table 2. Conversion between d, r, r², f, and f².

  d     r       r²        f          f²
  2.0   0.707   0.49985   0.999698   0.999396
  1.8   0.669   0.44756   0.900086   0.810155
  1.6   0.625   0.39063   0.800641   0.641026
  1.4   0.573   0.32833   0.699160   0.488824
  1.2   0.514   0.26420   0.599214   0.359058
  1.0   0.447   0.19981   0.499702   0.249702
  0.8   0.371   0.13764   0.399510   0.159610
  0.6   0.287   0.08237   0.299604   0.089763
  0.4   0.196   0.03842   0.199877   0.039951
  0.2   0.100   0.01000   0.100504   0.010100
  0.1   0.050   0.00250   0.050063   0.002506
  0     0       0         0          0

Note the relationship between d, r, and r².

Considering Tables 1 and 2, it can be seen that d can be converted to r and vice versa. For example, a d value of 0.8 corresponds to an r value of 0.371. The square of the r value is the proportion of variance in the dependent variable that is accounted for by the effect in the explanatory variable groups: for a d value of 0.8, the amount of variance in the dependent variable accounted for by membership in the treatment and control groups is 13.8%.

T-tests are used to evaluate the null hypothesis; for this test, the effect size symbol is r. If the desired effect size is known, statistical power and the needed sample size can be calculated. For instance, if the target is to find how many subjects are needed in a study for a medium effect size (r = 0.30) with an alpha of 0.05 and power of 0.95, this information can be used to find the answer.

Table 3. Total sample size required for a given effect size (α = 0.05, power = 0.95).

  Effect size   Delta    Critical t   Total sample size   Actual power
  0.001         3.605    1.960        51,978,840          0.95
  0.1           3.606    1.960        5,200               0.95
  0.2           3.608    1.962        1,302               0.95
  0.3           3.613    1.964        580                 0.95
  0.4           3.622    1.967        328                 0.951
  0.5           3.623    1.971        210                 0.95
  0.6           3.650    1.976        148                 0.952
  0.7           3.671    1.982        110                 0.953
  0.8           3.666    1.989        84                  0.952
  0.9           3.711    1.997        68                  0.955
  10.00         10.000   4.303        4                   0.993

Note: power depends on the effect size, the sample size, and the significance level.

For ANOVA, the effect size index f is used, and the effect size index can then be computed from the group means. Power is the chance that, if an effect d exists in the real world, one gets a statistically significant difference in the data: if the power level is taken to be 80%, there is an 80% chance of discovering a really existing difference in the sample. Alpha is the chance of concluding that an effect or difference d has been discovered when in fact it does not exist: if alpha is set at 5%, then in 5% of cases, or one in twenty, the data will indicate that "something" exists when in fact it does not.

In Table 3, consider that power = 1 − β = P(H_A is accepted | H_A is true). Set α, the probability of falsely rejecting H₀, equal to some small value. Then, considering the alternative hypothesis H_A, choose a region of rejection such that the probability of observing a sample value in that region is less than or equal to α when H₀ is true. If the value of the sample statistic falls within the rejection region, the decision is made to reject the null hypothesis. Typically α is set at 0.05, and critical t values are specified. The calculation works as follows: entering α = 0.05, power = 0.95, and the effect size as specified in column 1, we find the needed total sample size in column 4, and so on. The effect size conventions are small = 0.20, medium = 0.50, large = 0.80. (A quick check of these sample sizes with a normal approximation is sketched below.)
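The total sample sizes in Table 3 can be checked with the usual normal-approximation formula for a two-group comparison, total N ≈ 4((z_{1−α/2} + z_{power})/d)². This sketch is illustrative only; it lands within a couple of subjects of the table's entries, which were computed with the exact t distribution:

```python
from scipy.stats import norm

alpha, power = 0.05, 0.95
z_a = norm.ppf(1 - alpha / 2)   # two-sided alpha criterion
z_b = norm.ppf(power)           # power criterion

for d in (0.2, 0.3, 0.5, 0.8):
    n_total = 4 * ((z_a + z_b) / d) ** 2   # both groups combined
    print(f"d = {d}: total N ~ {round(n_total)}")
```

This prints roughly 1300, 578, 208, and 82, against the table's 1302, 580, 210, and 84; the small shortfall is the usual cost of the normal approximation to the t distribution.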
To calculate d and r using t values and df (separate-groups t-test), the value of Cohen's d and the effect size correlation r are computed from the t value of a between-subjects t-test and the degrees of freedom. Results from group means and standard deviations are shown in Table 4, while in Table 5 d and r are calculated using t values and df.

Table 4. Cohen's d and effect size r computed from group means and standard deviations.

  M1   SD1   M2   SD2   Cohen's d   Effect size r
  1    1     1    1     0           0
  2    5     6    10    -0.505      -0.245
  5    10    10   5     -0.632      -0.302
  5    10    0    10    0.5         0.243
  15   50    20   50    -0.1        -0.049
  20   50    20   10    0           0
  50   100   20   50    0.380       0.186
  50   100   50   100   -0.280      -0.139

Note: d and r are positive if the mean difference is in the predicted direction.

Table 5. Cohen's d and effect size r computed from t values and degrees of freedom.

  t value   df   Cohen's d   Effect size r
  1.0       1    2.0000      0.7071
  1.5       2    2.1213      0.7276
  2.0       5    1.7888      0.6666
  2.0       10   1.2649      0.5345
  2.5       30   0.9128      0.4152
  3.0       30   1.0954      0.4803
  3.0       50   0.8485      0.3905

Note: d and r are positive if the mean difference is in the predicted direction.

4. Discussion

Meta-analyses can play a key role in planning new studies. A meta-analysis can help identify which questions have already been answered and which remain to be answered, which outcome measures or populations are most likely to yield significant results, and which variants of the planned intervention are likely to be most powerful. Meta-analysis can be used as a guide to answer the question "does what we are doing make a difference to X?", even if X has been measured using different instruments across a range of different people. Meta-analysis provides a systematic overview of quantitative research which has examined a particular question. The appeal of meta-analysis is that it in effect combines all the research on one topic into one large study with many participants. The danger is that, in amalgamating a large set of different studies, the construct definitions can become imprecise and the results difficult to interpret meaningfully. Meta-analysts disagree on the criteria for inclusion or exclusion of primary studies with relation to publication status, comparability, and required scientific quality, but sensitivity analyses make it possible to assess the impact of various selection criteria on the results of the effect analysis.

As used in meta-analysis, the effect size refers to the magnitude of the effect under the alternative hypothesis. It should represent the smallest difference that would be of practical significance. It varies from study to study, and from one statistical procedure to another: it could be a difference in cure rates, a standardized mean difference, or a correlation coefficient. If the effect size is increased, the type II error decreases. Power is a function of the effect size and the sample size; for a given power, small effects require larger sample sizes than large effects. Power depends on (a) the effect size, (b) the sample size, and (c) the significance level. But if the researcher knew the size of the effect, there would be no reason to conduct the research: estimating a sample size prior to doing the research requires the postulation of an effect size, which might be related to a correlation, an f value, or a non-parametric test. In the procedure implemented here, d is the difference between two averages or proportions. Effect size d is mostly subjective: it is the difference you want to discover as a researcher or practitioner, a difference that you find relevant. However, if cost aspects are included, d can be calculated objectively.
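Returning to Table 5 above: its entries follow from the standard separate-groups conversions d = 2t/√df and r = √(t²/(t² + df)). A quick reproduction (illustrative sketch):

```python
import math

# (t value, df) pairs from Table 5.
rows = [(1.0, 1), (1.5, 2), (2.0, 5), (2.0, 10),
        (2.5, 30), (3.0, 30), (3.0, 50)]

for t, df in rows:
    d = 2 * t / math.sqrt(df)                 # Cohen's d from t and df
    r = math.sqrt(t**2 / (t**2 + df))         # effect size correlation
    print(f"t = {t}, df = {df}:  d = {d:.4f},  r = {r:.4f}")
```

Running this reproduces the table row for row (e.g., t = 2.0, df = 10 gives d = 1.2649 and r = 0.5345), which is a useful sanity check when extracting effect sizes from published t statistics.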
The size of the difference in the response to be detected, which relates to the underlying population and not to data from a sample, is of importance since it measures the distance between the null hypothesis (H₀) and a specific value of the alternative hypothesis (H_A). A desirable effect size is the degree of deviation from the null hypothesis that is considered large enough to attract attention. The concept of small, medium, and large effect sizes can be a reasonable starting point if you do not have more precise information. (Note that an effect size should be stated as a number in the actual units of the response, not as a percent change such as 5% or 10%.) Sample size determination and power analysis involve steps that are fundamentally the same. These include the investigation of the type of analysis and the null hypothesis; the power and required sample size for a reasonable range of effects; and the calculation of the sample size required to detect a reasonable effect with a reasonable level of power. Although effect size is a simple and readily interpreted measure of effectiveness, it can also be sensitive to a number of spurious influences, so some care needs to be taken in its use.

5. Conclusion

Meta-analysis should be seen as structuring the processes through which a thorough review of previous research is carried out. The issues of completeness and combinability of evidence, which need to be considered in any review, are made explicit. On the use of meta-analysis, the following can be summarized:

i. Despite limitations, meta-analytic approaches have demonstrable benefits in addressing the limitations of study size, can include diverse populations, provide the opportunity to evaluate new hypotheses, and are more valuable than any single study contributing to the analysis.

ii. Assumptions about the nature of the population are essential in using effect size, for the interpretation depends mainly on the assumptions of normality and equality of deviations of "control" and "experimental" group values. Effect sizes can be interpreted in terms of the percentiles or ranks at which two distributions overlap.

iii. Use of an effect size with a confidence interval holds the same information as a test of statistical significance, but with the emphasis on the significance of the effect rather than on the sample size.

iv. Like all types of research, meta-analysis has both potential strengths and weaknesses. It does not always work nearly as well as we might want it to; where the problems are deep and numerous, the results are simply not reliable, and in such cases meta-analysis does not work very well in practice.

v. Meta-analysis is superior to narrative reports for systematic reviews of the literature, but its quantitative results should be interpreted with caution even when the analysis is performed according to rigorous rules.

vi. By using meta-analysis, a wide variety of questions can be investigated, as long as a reasonable body of primary research studies exists.

Biography

Dr. Habib Ahmed Elsayir received his Ph.D. degree in Statistics from Omdurman Islamic University, Sudan, in 2001. He was appointed manager of the Omdurman Islamic University branch at Al Daein (2002-2005). He is now an associate professor in the Department of Mathematics, Al Qunfudha University College, Umm Al Qura University, Saudi Arabia.
http://article.sciencepublishinggroup.com/html/10.11648.j.ajtas.20150406.35.html
Flowers are part of our everyday lives; almost everywhere we go there are flowers to be had in all different shapes, sizes, colors, and smells. People often forget how amazing flowers are, ignoring the endless list of remarkable facts about them. When you do a little digging, you learn that they are not only pretty and important to insects but also do a lot of other amazing things.

Did You Know
- The gas plant, otherwise known as the burning bush, has lemon-scented flowers and leaves with a leathery texture that can be lit with a match.
- The biggest flower in the world is called the Titan Arum; growing to be 10 feet high and 3 feet wide, it is also called the corpse flower.
- China has the oldest flower in the world, called Archaefructus sinensis; it first bloomed over 125 million years ago.
- All the plants of a bamboo species will flower at the exact same time, no matter where they are in the world.

Activity
Flowers all look different from each other but contain many of the same parts. Have each student bring in a different flower to compare, showing them all of the parts each flower has in common.
https://www.wedrawanimals.com/flowers/
Mancao's U.S.-based lawyer, Arnedo S. Valera, said the findings that his testimony was "incredible and untrustworthy" are a separate and distinct issue from his request that any case against him connected with these crimes be dropped.

Valera said, "The interest of the international community is to see that the administration of President Aquino is committed to the pursuit of truth and justice and not to coddle those who are involved in these heinous crimes."

Former Philippine National Police Chief and now Sen. Panfilo M. Lacson, one of the suspected masterminds in the double murder, was discharged from the case after the credibility of Mancao's testimonies, which linked Lacson to the double murder, was questioned. Mr. Lacson was one of the supporters of then Sen. Noynoy Aquino during the 2010 presidential elections.

In an email to Philippine Justice Secretary Leila de Lima, furnished to this reporter, Mr. Valera said he has been "in constant communication with our client Cezar Mancao as he expressed to me his grave concerns on the current development of his case."

"As a firm advocate for justice and human rights, despite the findings of 'untrustworthiness and credibility' of Col. Mancao's testimony (with which the international legal community in the U.S. completely disagrees), Cezar wanted to reiterate the following request: that his name as an accused in the Dacer-Corbito double murder cases should be dropped."

"There is no solid evidence linking him to these heinous crimes. What he possessed is vital and significant information to support the filing of double murder cases against identified suspects for these crimes."

Valera said the DOJ (Department of Justice) can still build a strong case against the "suspects" for the double murder cases, and Mr. Mancao ("a person of interest") can still testify as a witness to the extent of his knowledge and what he knows about the murders.

"I am sure that the gathered evidence and facts for the last 10 years after the commission of these crimes are enough to put to trial the 'suspects'/accused for these crimes.

"We in the United States, especially the international community, are interested in how the administration of President Aquino will resolve these crimes and see justice served to the victims.

"We should not lose sight of the main issue: Mr. Dacer and Corbito were murdered. The DOJ has a significant amount of information, evidence and witnesses to support a double murder case.

"Mr. Mancao's testimony is vital and he should remain one of the Government's important sources of information. Cezar is cooperating fully and diligently working with the Philippine Government to help resolve these murder cases."

Valera reiterated that Mancao "should not be considered as an accused where there is no solid evidence to hold him as an accused. Otherwise, it becomes an absurd scenario where, in our pursuit of justice for the Dacer-Corbito double murder cases, the Philippine Government is prosecuting not the real suspects but a vital witness providing important information to help solve these murders."
https://www.philippinedailymirror.com/mancao-vital-witness-says-us-based-lawyer/
Li, Luchen, Komorowski, Matthieu, Faisal, Aldo A.

Off-policy reinforcement learning enables learning a near-optimal policy from suboptimal experience, thereby providing opportunities for artificial intelligence applications in healthcare. Previous works have mainly framed patient-clinician interactions as Markov decision processes, while true physiological states are not necessarily fully observable from clinical data. We capture this situation with a partially observable Markov decision process (POMDP), in which an agent optimises its actions in a belief represented as a distribution of patient states inferred from individual history trajectories. A Gaussian mixture model is fitted to the observed data. Moreover, we take into account the fact that nuances in pharmaceutical dosage could presumably result in significantly different effects, by modelling a continuous policy through a Gaussian approximator directly in the policy space, i.e., the actor. To address the challenge of an infinite number of possible belief states, which renders exact value iteration intractable, we evaluate and plan for only every encountered belief, through a heuristic search tree that tightly maintains lower and upper bounds on the true value of a belief. We further resort to function approximations to update the value-bound estimates, i.e., the critic, so that the tree search can be improved through more compact bounds at the fringe nodes that will be back-propagated to the root. Both actor and critic parameters are learned via gradient-based approaches. Our proposed policy, trained from real intensive care unit data, is capable of dictating dosing of vasopressors and intravenous fluids for sepsis patients that leads to the best patient outcomes.

Komorowski, Matthieu, Celi, Leo A., Badawi, Omar, Gordon, Anthony C., Faisal, A. Aldo

In this document, we explore in more detail our published work (Komorowski, Celi, Badawi, Gordon, & Faisal, 2018) for the benefit of the AI in Healthcare research community. In the above paper, we developed the AI Clinician system, which demonstrated how reinforcement learning could be used to make useful recommendations towards optimal treatment decisions from intensive care data. Since publication, a number of authors have reviewed our work. Given the difference of our framework from previous work, the fact that we are bridging two very different academic communities (intensive care and machine learning), and the fact that our work has an impact on a number of other areas with more traditional computer-based approaches (biosignal processing and control, biomedical engineering), we are providing here additional details on our recent publication. We acknowledge the online comments by Jeter et al. (https://arxiv.org/abs/1902.03271). The sections of the present document are structured so as to address some of their questions. For clarity, we label figures from our main Nature Medicine publication as "M", figures from Jeter et al.'s arXiv paper as "J", and figures from our response here as "R". Jeter et al. state "the only possible response we can afford is a more aggressive and open dialogue".

Li, Luchen, Komorowski, Matthieu, Faisal, Aldo A.

Health-related data is noisy and stochastic in implying the true physiological states of patients, limiting the information contained in single-moment observations for sequential clinical decision making. We model patient-clinician interactions as partially observable Markov decision processes (POMDPs) and optimize sequential treatment based on belief states inferred from the history sequence.
To facilitate inference, we build a variational generative model and boost the state representation with a recurrent neural network (RNN), incorporating an auxiliary loss from sequence auto-encoding. Meanwhile, we optimize a continuous policy of drug levels with an actor-critic method, where policy gradients are obtained from a stabilized off-policy estimate of the advantage function, with the value of the belief state backed up by parallel best-first suffix trees. We exploit our methodology in optimizing dosages of vasopressors and intravenous fluids for sepsis patients using a retrospective intensive care dataset, and evaluate the learned policy with off-policy policy evaluation (OPPE). The results demonstrate that modelling as POMDPs yields better performance than MDPs, and that incorporating heuristic search improves sample efficiency.

Raghu, Aniruddh, Komorowski, Matthieu, Singh, Sumeetpal

Sepsis is a dangerous condition that is a leading cause of patient mortality. Treating sepsis is highly challenging, because individual patients respond very differently to medical interventions and there is no universally agreed-upon treatment for sepsis. In this work, we explore the use of continuous state-space model-based reinforcement learning (RL) to discover high-quality treatment policies for sepsis patients. Our quantitative evaluation reveals that by blending the treatment strategy discovered with RL with what clinicians follow, we can obtain improved policies, potentially allowing for better medical treatment for sepsis.

Peng, Xuefeng, Ding, Yi, Wihl, David, Gottesman, Omer, Komorowski, Matthieu, Lehman, Li-wei H., Ross, Andrew, Faisal, Aldo, Doshi-Velez, Finale

Sepsis is the leading cause of mortality in the ICU. It is challenging to manage because individual patients respond differently to treatment. Thus, tailoring treatment to the individual patient is essential for the best outcomes. In this paper, we take steps toward this goal by applying a mixture-of-experts framework to personalize sepsis treatment. The mixture model selectively alternates between neighbor-based (kernel) and deep reinforcement learning (DRL) experts depending on the patient's current history. On a large retrospective cohort, this mixture-based approach outperforms physician, kernel-only, and DRL-only experts.
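None of the abstracts above ships code, but the continuous Gaussian ("actor") policy they describe can be sketched minimally. The linear parameterisation, feature dimension, and REINFORCE-style update below are our own simplifications for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian actor: dose ~ N(w_mu . b, softplus(w_s . b)),
# where b is a belief-state feature vector (e.g., a GMM posterior over states).
dim = 8
w_mu = rng.normal(scale=0.1, size=dim)
w_s = rng.normal(scale=0.1, size=dim)

def softplus(x):
    return np.log1p(np.exp(x))

def sample_action(b):
    mu, sigma = w_mu @ b, softplus(w_s @ b) + 1e-3
    return rng.normal(mu, sigma), mu, sigma

def actor_update(b, a, advantage, lr=1e-2):
    """One policy-gradient step: grad of log N(a; mu, sigma) times advantage."""
    global w_mu, w_s
    mu, sigma = w_mu @ b, softplus(w_s @ b) + 1e-3
    dlog_dmu = (a - mu) / sigma**2
    dlog_dsigma = ((a - mu)**2 - sigma**2) / sigma**3
    dsigma_dws = b / (1.0 + np.exp(-(w_s @ b)))  # softplus derivative times b
    w_mu = w_mu + lr * advantage * dlog_dmu * b
    w_s = w_s + lr * advantage * dlog_dsigma * dsigma_dws

b = rng.normal(size=dim)            # stand-in belief features
a, mu, sigma = sample_action(b)
actor_update(b, a, advantage=1.0)   # the advantage would come from the critic
print(f"dose sampled: {a:.3f} from N({mu:.3f}, {sigma:.3f})")
```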
https://aitopics.org/mlt?cdid=arxivorg%3ACCE2D8E9&dimension=pagetext
To celebrate Emory's 175th anniversary, Emory Report brings you a look at little-known facts about the university. The Candler Mansion on Emory's Briarcliff campus is a beautiful yet eerie estate. Built in 1920 by Asa "Buddy" Candler, Jr., the second son of the founder of Coca-Cola, the 42-acre property once housed wild animals, including four elephants: Coca, Cola, Refreshing, and Delicious. In the 1960s, the mansion was turned into a mental health institute. Today, the house is boarded up, too expensive to renovate, but is still useful as a place to shoot scary movies and TV shows. To see the entire collection of Emory History Minutes, log on to Emory Report's website at http://bit.ly/emoryhistoryminutes. And for more information about Emory's 175th Anniversary, log on to http://www.emory.edu/175.
http://emory.11alive.com/news/real-estate/88088-emory-history-minute-candler-mansion
What is Contour Interval?

A contour interval in surveying is the vertical distance, or the difference in elevation, between two contour lines on a topographical map. Different maps usually use different contour intervals; the contour interval is chosen according to the size of the area to be mapped. On every map, the contour interval is specified at the bottom right-hand side. When the contour interval is not specified on the map, it can be calculated as explained in the following sections. The commonly used contour interval is 20 feet for a 1:24,000 map scale.

Factors Affecting the Selection of Contour Interval

The contour interval is decided by the survey leader before the start of the mapping process, depending upon the following ground factors.

| Sl. No | Factor | Select a high CI (1 m, 2 m, 5 m or more) | Select a low CI (0.5 m, 0.25 m, 0.1 m or less) |
|--------|--------|------------------------------------------|------------------------------------------------|
| 1 | Scale of the map | For small-scale maps covering a wide area of varying terrain | For large maps showing details of a small area |
| 2 | Extent of survey | For a rough topographical map meant for initial assessment only | If a detailed map is to be prepared for execution work |
| 3 | Nature of ground | If the ground has large variation in levels, for instance, hills and ponds | If the terrain is comparatively level |
| 4 | Time and resources available | If less time and resources are available | If time and resources are abundant |

How to Calculate Contour Interval from Maps?

A contour map consists of contour lines for a given geographical region. To keep the contour map simple and easy to read, not every contour line is marked with its elevation reading. The marked or labeled lines are termed index contour lines.

Step 1: Locate two index contour lines that are labeled with a specific elevation.
Step 2: Calculate the difference between the two selected index contour lines by subtracting the lower elevation reading from the higher one.
Step 3: Count the number of non-index contour lines between the two index contour lines selected in Step 1.
Step 4: Add 1 to the number of lines obtained in Step 3. For example, if the number of lines between the two index lines is 5, adding 1 gives 6.
Step 5: Divide the elevation difference (Step 2) by the number of lines plus 1 (Step 4).
Step 6: The quotient is the contour interval of the specific topographical map.

Example Calculation of Contour Intervals
(A worked example is sketched in code after the list of uses below.)

Uses of Contour Intervals in Surveying
- When a large area is to be mapped on a small piece of paper, contour intervals are used. A higher contour interval is used for a large area and a small contour interval for a small area.
- On a large map, index contour lines are few, to keep the map simple and easy to read. In this case, contour intervals are used to find the elevation of intermediate points.
- Earthwork estimates for any type of structure, such as bridges, dams, or roads, can be found with the help of contour intervals on a map.
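As a worked version of the steps above, here is a minimal Python sketch; the function name and the sample elevations are our own illustration.

```python
def contour_interval(index_elev_high, index_elev_low, lines_between):
    """Contour interval = (difference between two index contours)
    / (number of intermediate contour lines + 1), per Steps 2-6 above."""
    return (index_elev_high - index_elev_low) / (lines_between + 1)

# Example: index contours at 7100 ft and 7000 ft with 4 intermediate lines:
# interval = 100 / (4 + 1) = 20 ft, matching the common 20 ft interval
# on a 1:24,000 map.
print(contour_interval(7100, 7000, 4))  # 20.0
```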
https://theconstructor.org/surveying/contour-interval-calculations-uses/16247/?amp=1
1. On 2 yd. (1.8 m) of Fireline, leaving a 12-in (30 cm) tail, sew through both holes of a Tila bead. Tie the working thread and tail together with a square knot, and sew back through the second hole.
2. Working in ladder stitch, pick up two 8/0 hex-cut beads, and sew through the second hole of the Tila bead and the 8/0s (FIGURE 1, a–b). Pick up two 8/0s, and work another stitch (b–c).
3. Working in modified ladder stitch, pick up a Tila bead, and sew through the previous pair of 8/0s, the first hole of the Tila bead, and the second hole of the Tila bead (c–d).
4. Continue working in ladder stitch and modified ladder stitch to add an alternating pattern of a Tila bead and two pairs of 8/0s for a total of three sets.
5. With your thread exiting up out of the end pair of 8/0s, pick up a Tila bead, and sew down through its second hole and the second pair of 8/0s (FIGURE 2, a–b).
6. Sew up through the first hole of the next Tila bead, pick up four 8/0s, and sew down through the second hole of the Tila bead (b–c).
7. Sew up through the next pair of 8/0s, and repeat steps 5 and 6 across the row (c–d).
8. At the end of the row, sew under the adjacent thread bridge, and back up through the end hole of the Tila bead and the corresponding pair of 8/0s (d–e).
9. Work three more rows as in steps 5–8, alternating Tila beads and groups of 8/0s to create a checkerboard pattern.
10. Work the sixth row as in steps 5–8, but pick up an 11/0 seed bead between the two holes of each Tila bead and between each pair of 8/0s (FIGURE 3). End the thread but not the tail.
11. Make 11 to 13 units, as desired.

1. On a head pin, string an 8 mm round faceted fire-polished bead, and make a plain loop.
2. Using eye pins instead of a head pin, repeat step 1 five times.
3. Open the loop of the head pin unit, attach a loop of an eye pin unit, and close the loop. Open the remaining loop of the eye pin unit, attach a loop of another eye pin unit, and close the loop. Repeat to connect four eye pin units (PHOTO A).
4. Open a loop of the remaining eye pin unit, attach the hook clasp, and close the loop (PHOTO B). Turn the herringbone units so the edge 11/0s are at the bottom of each unit.

1. To connect the herringbone units, thread a needle on the tail of one unit, and exit the end edge hole of a Tila bead. Sew through the end pair of 8/0s of the next unit and the hole of the Tila bead your thread just exited (FIGURE 4). Retrace the thread path, and end the thread. Repeat to connect all the units.
2. Thread a needle on the remaining tail, and sew through the Tila bead to exit the end edge hole. Pick up an 8/0, the loop of the 8 mm clasp unit, and an 8/0, and sew through the hole of the Tila bead your thread just exited (FIGURE 5). Retrace the thread path.
3. Pick up two 11/0s, and sew under the thread bridge between the two holes of the end Tila bead (FIGURE 6, a–b). Pick up an 11/0, and sew under the thread bridge between the first pair of 8/0s. Pick up an 11/0, and sew under the thread bridge between the next 8/0 and Tila bead (b–c). Repeat across all the units, ending and adding thread as needed.
4. Sew through the end edge pair of 8/0s, pick up an 8/0, the end loop of the connected 8 mm units, and an 8/0 as in step 2, and sew through the pair of 8/0s your thread just exited. Retrace the thread path, and end all remaining threads.
http://www.facetjewelry.com/stitching/projects/2017/06/elegance-squared-collar
Is a dolphin in the whale family?

Did you know these marine mammals are part of the cetacean family? Check out these facts to learn more. Marine mammals in the cetacean family include whales, dolphins, and porpoises.

What group are dolphins and whales in?
Scientifically, all whales, dolphins and porpoises are classified as Cetacea. And within Cetacea are two suborders: baleen whales and toothed whales. Baleen whales include the really big ones, such as blue whales and humpbacks. Toothed whales include dolphins and orcas, or killer whales, as they're often known.

Are dolphins toothed?
Differences between a dolphin and a porpoise: porpoises don't have the pronounced beak that most, but not all, dolphins have, and they also have different-shaped teeth. Porpoise teeth are spade-shaped whilst dolphins' are conical.

Are dolphins baleen or toothed whales?
The order Cetacea, which includes whales, dolphins, and porpoises, is divided into two main groups: the toothed whales (Odontocetes) and the baleen or whalebone whales (Mysticetes). Toothed whales include dolphins and porpoises, as well as the large sperm and killer whales.

Do killer whales eat humans?
From our historical understanding of killer whales and the recorded experiences people have shared with these marine mammals, we can safely assume that killer whales do not eat people. In fact, there have been no known cases of killer whales eating a human to our knowledge.

Do dolphins poop?
Yes, dolphins do poop, or release feces or excrement, depending on how you'd like to phrase it. The amount of food a dolphin can eat varies, with some species consuming between 2% and 10% of their body weight in food on a daily basis.

Is a whale a fish or a mammal?
Whales and porpoises are also mammals. There are 75 species of dolphins, whales, and porpoises living in the ocean. They are the only mammals, other than manatees, that spend their entire lives in the water.

Do orcas eat humans?
Orcas (Orcinus orca) are often called killer whales, even though they almost never attack humans. In fact, the killer whale name was originally "whale killer," as ancient sailors saw them hunting in groups to take down large whales, according to Whale and Dolphin Conservation (WDC).

Do whales eat humans?
Experts noted that whales do not eat people, but consume small aquatic lifeforms like fish, squid and krill.

Can a humpback whale swallow a human?
Though a humpback could easily fit a human inside its huge mouth, which can reach around 10 feet, it is scientifically impossible for the whale to swallow a human once inside, according to Nicola Hodgins of the Whale and Dolphin Conservation, a U.K. nonprofit.

What is the fastest cetacean?
The fin whale is one of the fastest cetaceans and can sustain speeds between 37 km/h (23 mph) and 41 km/h (25 mph); bursts up to 46 km/h (29 mph) have been recorded, earning the fin whale the nickname "the greyhound of the sea".

Why do orcas not eat humans?
There are a few theories about why orcas don't attack humans in the wild, but they generally come down to the idea that orcas are fussy eaters and only tend to sample what their mothers teach them is safe. Since humans would never have qualified as a reliable food source, our species was never sampled.

How many species of toothed whales are there?
Around 73 species of toothed whales are described.
The toothed whales (systematic name Odontoceti) are a parvorder of cetaceans that includes dolphins, porpoises, and all other whales possessing teeth, such as the beaked whales and sperm whales. Seventy-three species of toothed whales (also called odontocetes) are described.

Which is the most common species of dolphin?
Species include Lissodelphis borealis (Northern right whale dolphin) and Lissodelphis peronii (Southern right whale dolphin).

What kind of skin does a toothed whale have?
Risso's dolphins are medium-sized toothed whales with stout bodies and a tall, falcate dorsal fin. The skin of these dolphins lightens as they age. Young Risso's dolphins are black, dark gray or brown, while older Risso's may be light gray to white.

Where are toothed whales and baleen whales found?
They are found in all of the world's oceans, and even in some freshwater rivers. Despite their very different diets and sizes, both baleen whales (Mysticeti) and toothed whales (Odontoceti) share a common (and perhaps surprising) ancestor: land-dwelling mammals related to today's hippos that lived over 50 million years ago.
https://stwnews.org/is-a-dolphin-in-the-whale-family/
How to use a locker in the Library

Most lockers in the building use keys that are issued in the same way as you borrow other library materials.

Borrowing a locker key
- Keys are available for a 14-day loan (all borrower types).
- Collect the key and case from the High Demand Area (Floor 0) and use the self-service kiosks to issue it.
- All undergraduate and taught postgraduate locker keys must be returned by the due date, as these keys are not renewable.
- If all keys are out on loan, you can place a request for one by visiting the Library Helpdesk and asking to be added to the locker key waiting list.
- Research postgraduate keys will automatically renew until they are requested by another borrower.

Please be aware that
- Food, drink, and other perishable items, with the exception of bottled water, should not be stored in the lockers.
- To ensure fair access to library materials, periodicals, reference books, and other unissued library material must not be stored in lockers.
- Staff may enter any locker which they believe is being used inappropriately.

Locations of lockers and who can use them

Undergraduates: There are 48 lockers in the Group Study Room on Floor 01 of the library, available to all undergraduates; a further 68 lockers are located in the library cloakroom. It may be possible for students with additional needs to have a locker for a longer period; please contact the Student Support Service for further information. There are also lockers administered by the Student Support Service in the Library foyer on Floor 0 for those unable to access the lockers on Floor 01.

Postgraduate students using the Postgraduate Reading Rooms: There are 76 lockers in the Taught Postgraduate Reading Room on Floor 2 and 60 lockers in the Research Postgraduate Reading Room on Floor 2.

RAC students, INTO students & visitors: There are 12 coin-operated lockers in the Library Cloakroom available to RAC & INTO students, External and SCONUL Borrowers, and Library visitors on a daily basis. Please ask at the reception desk for details on how to access these.
https://portal.uea.ac.uk/library/study-space/lockers
Welcome to UT Arlington! We want to ensure you have an exceptional parking experience with us. Visitor parking has been prepared for you in the College Park Garages, located at 500 S. Center Street, Arlington, TX 76019 (next to the College Park Center), with additional parking at Lot 45 (south of the Lipscomb residence hall) and Lot 53 (at the intersection of Mitchell and Pecan). Please enter your vehicle information in the link below to obtain a parking e-permit for your vehicle to park on campus during your event. This will give your vehicle access to park in the designated lot noted on the link's event page. Any vehicle parked on campus without an e-permit will receive a citation.

FWRSEF Activities at UTA

Thank you for your interest in participating in our on-campus activities in the College of Engineering and the College of Science on Sunday, February 24, 2019 and Monday, February 25, 2019. The College of Engineering and the College of Science will host hands-on and informational activities for a group of 30 middle school students and 30 high school students who registered to participate by February 16, 2019. Our events are now full and no further registrations are possible. The sessions will take place from 1:00-3:00 p.m. on Monday, February 25, 2019. Selected registrants for the College of Engineering and College of Science activities will be notified via email the week of February 18, 2019 with further directions.

Questions and Contact
- For questions regarding rules and forms, contact: Dr. Yuan B Peng, Fair Director, Email: [email protected], P.O. Box 19528, University of Texas at Arlington, Arlington, TX 76019-0528
- For questions about forming an SRC or IRB, contact: Dr. Michael Roner, SRC Chair, Email: [email protected]
- For information about supporting the fair or volunteering, email: Dr. Yuan B Peng, Email: [email protected]

Fair Schedule and Important Dates

Important Dates

For some activities, specific dates will be posted shortly. Use the information below for general guidance.

- October 20, 2018 - Scienteer Teacher Training, 9 AM – 12 PM at Tarrant County College
  - Location: Tarrant County College Trinity River Campus, 300 Trinity Campus Circle, Fort Worth, TX 76102, Bldg TRTR, Room 3905
  - Please help us with the headcount by registering at https://doodle.com/poll/g8bqkv77u8y23cvc, and feel free to pass this information to your colleagues who might be interested.
- December 23, 2018 - Projects needing approval from the Scientific Review Committee (SRC) must be approved either by a local SRC or, if the school is too small to establish its own SRC (e.g., a home school), by the FWRSEF SRC.
- January 25, 2019 - Registration deadline
- February 1, 2019 - Project categories are finalized. No changes to a category may be requested by a teacher, parent or student.
- February 24, 2019 (Sunday) - Project setup for all participants (12 PM to 6 PM) at the College Park Events Center at The University of Texas at Arlington in Arlington, Texas
- February 25, 2019 (Monday) - The Fort Worth Regional Science and Engineering Fair! Judging and Awards Ceremony
- March 29 – 30, 2019 - Texas Science and Engineering Fair at Texas A&M University in College Station, Texas, at Kyle Field and the Hall of Champions.
- May 12 – 17, 2019 (Senior Only) - The International Science and Engineering Fair in Phoenix, AZ

FWRSEF Schedule

Sunday, February 24, 2019

| Time | Activity |
|------|----------|
| 12:00 PM – 6:00 PM | Project setup (all grades) |

Monday, February 25, 2019

| Time | Activity |
|------|----------|
| 9:00 AM – 10:00 AM | Closed-door judging |
| 9:30 AM – 11:30 AM | Project presentation. Students demonstrate their projects to the judges. Mandatory for Senior High students (Grades 9–12). |
| 11:30 AM – 1:00 PM | Lunch. Science fair participants and teachers are invited to have lunch in the Connection Cafe on campus. Lunch vouchers will be provided in the project registration packets. |
| 1:00 PM – 2:50 PM | Student activities (CPC). Science fair participants may take part in UTA Science and Engineering Fun activities provided by UTA and Lockheed Martin. Prior registration required; all students who registered will meet at the arena. |
| 3:00 PM – 7:00 PM | Public project viewing |
| 7:00 PM – 8:30 PM | Awards Ceremony |
| 8:30 PM – 9:00 PM | Public viewing and teardown |
| 9:00 PM | Exhibit hall closes. All project displays must be removed; unattended items will be removed after closing. |
http://fwrsef.org/info.php
BACKGROUND

Embodiments of the disclosure are directed generally to electronic content display systems and methods. Embodiments of the disclosure are directed more specifically to systems and methods for selection of supplemental content according to skip likelihood.

SUMMARY

Many electronic content display systems allow users to skip supplemental content (e.g., content shown in addition to main content being consumed) after a specified amount of time. For example, users may be permitted to skip supplemental content after it has played for 5 seconds, such as via selecting a "Skip" button displayed on-screen. Allowing users to skip supplemental content presents certain challenges, however. In particular, a supplemental content item's intended message is often not conveyed to the viewer, at least not in complete form. Further, skipped supplemental content results in fewer views of that supplemental content, which may in turn result in lower revenue generated, fewer products sold, and the like.

Accordingly, to overcome the limited ability of computer-based content display systems to convey supplemental content when skipping such content is allowed, systems and methods are described herein for a computer-based system and process that predicts when a viewer is likely to skip an advertisement or other supplemental content, and adjusts supplemental content presentation to compensate. Systems of embodiments of the disclosure may predict user intent to skip either before the supplemental content is presented or shortly after the supplemental content is presented. Once a likelihood of skipping content is determined, the system may take any compensatory action, such as selecting different supplemental content that is shorter or that conveys its intended message prior to being skipped, disallowing a user skip, or the like.

In some embodiments of the disclosure, during display of a content item, systems may identify and optionally display times at which content play is to be interrupted by display of supplemental content such as advertisements, e.g., paused while supplemental content is overlaid on the same screen, and resumed once play of the supplemental content is complete. Various user actions approaching or soon after these times may be employed as inputs used to predict a likelihood that the user intends to skip this supplemental content. When it is determined that the user is likely to select his or her option to skip, the system may respond by selecting a particular supplemental content item to compensate and may transmit this selected content for display to the user. The selected supplemental content may be content that displays its intended message quickly, before the user can skip. For example, the system may select an advertisement with the product name, picture, and logo prominently displayed in its opening frames. As another example, very short supplemental content may be selected, or supplemental content may be selected for playback at increased speed, so that it may be completed before the user's skip command is entered. As a further example, only a portion of supplemental content may be selected, such as a still image, a text summary, or a short clip that may be completed quickly.

Selection of supplemental content may be performed based on a likelihood of skipping content, when the likelihood exceeds some threshold value. That is, supplemental content may be selected when it is deemed sufficiently likely that the user is going to attempt to exercise his or her skip option.
In particular, in some embodiments, one supplemental content item may be selected when the likelihood of skipping exceeds this threshold value, while another supplemental content item may be selected when the likelihood does not exceed the threshold. Thus, for example, if it is determined that the user likely does not intend to skip a supplemental content item, the system may continue with display of the originally intended supplemental content. Conversely, if it is determined that the user likely intends to skip the supplemental content, another supplemental content item may instead be selected with, e.g., an intended message shown at its beginning. Any threshold having any value or values may be employed, and any supplemental content may be selected.

Skip likelihood may be determined in any manner. In some embodiments, skip likelihood may be based on cursor position, and more specifically on cursor positioning over the location of a user interface (UI) element that presents the user with the option to skip supplemental content. As one example, display systems may present users with a "Skip in X Seconds" icon or element, which a user can select by directing a cursor thereover and selecting with a controller button push when X seconds have elapsed. In this example, a user may attempt to preemptively move his or her cursor over the "Skip" icon before its time has expired, to prepare for selecting the icon as soon as it is possible to do so. Accordingly, systems of embodiments of the disclosure may consider this action as an indication that the user intends to skip the upcoming supplemental content. Similarly, other actions may also be used to determine skip intent, such as eye gaze directed at the "Skip" icon just before or otherwise prior to timer expiration, the user's hand or other body part moved to be positioned over a controller just before timer expiration and thus indicating intent to skip, pressure applied to a controller button just before timer expiration, movement of the controller itself, or any other action that may be detectable by a system and may be indicative of intent to skip. Likewise, any of these actions may be performed before the "Skip" icon appears, also indicating intent to skip. Thus, for example, moving a cursor to the screen location where the "Skip" icon may soon appear may be deemed likely to indicate intent to skip when the option to do so appears. Any of the other actions noted above (eye gaze, button pressure, hand movement, controller movement, etc.) may also be deemed to indicate a likelihood of intent to skip upcoming supplemental content.

In some embodiments, the UI element provides a control to, for example, close a browser tab in which the content item is being displayed, select a different browser tab from the browser tab in which the content item is being displayed, select a different content item in a playlist, and/or mute audio of the content item or browser tab in which the content item is being displayed.

Intent to skip may also be determined from past patterns of viewer behavior. More specifically, user information such as a user profile may be compiled, storing a user's past behavior as it relates to skip likelihood. This profile may then be retrieved from its storage and used to determine or help determine skip likelihood.
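As a concrete illustration of the threshold logic described above, here is a minimal Python sketch; the signal weights, threshold value, and storage names (echoing storages 110-1 and 110-2) are invented for illustration and are not the claimed implementation.

```python
def skip_likelihood(cursor_over_skip_ui, gaze_on_skip_ui, controller_grabbed,
                    button_pressure, historical_skip_rate):
    """Combine the input signals described above into a crude [0, 1] score.
    The weights here are placeholders; an actual system might learn them."""
    score = (0.35 * cursor_over_skip_ui +
             0.20 * gaze_on_skip_ui +
             0.15 * controller_grabbed +
             0.10 * button_pressure +
             0.20 * historical_skip_rate)
    return min(max(score, 0.0), 1.0)

def select_supplemental_content(likelihood, threshold=0.5):
    """Pick the second (short / front-loaded) content item when the viewer
    is deemed likely to skip, else the originally scheduled item."""
    return "storage_110_2" if likelihood > threshold else "storage_110_1"

# Cursor parked on the skip UI, controller in hand, a history of skipping:
p = skip_likelihood(1, 0, 1, 0.2, 0.7)
print(p, select_supplemental_content(p))  # 0.66 storage_110_2
```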
User profile information may be any information that may indicate skip likelihood, such as how frequently the user has skipped advertisements or other supplemental content in the past, the types of supplemental content skipped and not skipped, and metadata of previously viewed supplemental content that may indicate attributes such as product types whose ads were skipped, brands whose ads were skipped and not skipped, times of day at which ads were skipped, positions within displayed content at which ads were skipped or not skipped, and the like.

In some embodiments of the disclosure, systems may react to a likelihood of skipping content by designating that content slot as skippable or non-skippable. Any designation is contemplated in response to determined skip likelihood. For example, ad creators, content distributors, or the like may wish to designate ad slots as non-skippable upon determining that the viewer wishes to skip an upcoming ad, thus increasing the likelihood that the viewer will see the ad. Conversely, viewers may wish that ad slots be designated as skippable anytime an ad is intended to be skipped, thus reducing the number of ads the viewer is forced to view and improving the viewer's experience.

BRIEF DESCRIPTION OF THE FIGURES

The above and other objects and advantages of the disclosure will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:

FIG. 1 illustrates operation of an exemplary system for selecting supplemental content according to skip likelihood, in accordance with some embodiments of the disclosure;

FIG. 2 illustrates one method for determining a likelihood of skipping supplemental content, in accordance with some embodiments of the disclosure;

FIG. 3 is an embodiment of illustrative electronic computing devices constructed for use according to some embodiments of the disclosure;

FIG. 4 is an embodiment of an illustrative system for selecting supplemental content according to skip likelihood, constructed for use according to some embodiments of the disclosure;

FIG. 5 is an embodiment of an illustrative content server constructed for use according to some embodiments of the disclosure;

FIGS. 6 and 7 conceptually illustrate determination of a likelihood of skipping supplemental content, in accordance with some embodiments of the disclosure;

FIG. 8 is a flowchart illustrating processing steps for selecting supplemental content according to skip likelihood, in accordance with some embodiments of the disclosure;

FIGS. 9 and 10 are flowcharts illustrating processing steps for reacting to a likelihood of skipping supplemental content, in accordance with some embodiments of the disclosure;

FIG. 11 conceptually illustrates exemplary inputs for determining a likelihood of skipping supplemental content, in accordance with some embodiments of the disclosure;

FIG. 12 illustrates exemplary actions performed in response to a likelihood of skipping supplemental content, in accordance with some embodiments of the disclosure; and

FIG. 13 is a flowchart illustrating processing steps for reacting to a likelihood of skipping supplemental content, in accordance with further embodiments of the disclosure.
DETAILED DESCRIPTION

In one embodiment, the disclosure relates to systems and methods for a computer-based process that determines when a viewer is likely to skip over supplemental content, and adjusts supplemental content presentation to compensate. Systems of embodiments of the disclosure may utilize various inputs to determine the likelihood of skipping supplemental content, including cursor position at or near specified icons or other UI elements, as well as user actions such as gaze direction, various motions or actions, controller manipulations, and the like. Once a likelihood of skipping supplemental content is determined, various actions may be taken in response, including without limitation selection of supplemental content that conveys its intended message prior to skipping, playing of supplemental content at increased speed, and designation of supplemental content slots as skippable or non-skippable.

FIG. 1 illustrates operation of an exemplary system for selecting supplemental content according to skip likelihood, in accordance with some embodiments of the disclosure. Here, a display 100 may display content from storage 110 via processing unit 120. Storage 110 may store content to be retrieved for display on display 100, as well as supplemental content 110-1 and 110-2. Processing unit 120 may determine whether viewer 140 is likely to skip supplemental content such as advertisements and select alternative supplemental content from storages 110-1 or 110-2 for display to the viewer 140 instead. Processor 120 may determine the likelihood of viewer 140 skipping supplemental content in any manner and using any inputs. In some embodiments of the disclosure, these inputs may include information retrieved from user profile storage 130, camera 150, controller 160, and display 100. For example, such inputs may include the position of a cursor rendered on display 100 and controlled by viewer 140. Positioning of the cursor on a "Skip Ad" icon or other UI element (e.g., a UI element to close a browser tab in which the content item is being displayed, a UI element to select a different browser tab from the browser tab in which the content item is being displayed, a UI element to play a different content item in a playlist, or a UI element to mute audio of the content item or browser tab in which the content item is being displayed) by viewer 140 may indicate an intent of viewer 140 to skip or otherwise avoid viewing an ad, and/or to avoid playing audio of the ad. Further inputs may include viewer 140 position or actions, as detected by camera 150. For instance, camera 150 may capture images of viewer 140 grasping controller 160, indicating an intent to use the controller 160 to skip an advertisement and/or change a channel. Camera 150 may also capture images of viewer 140 performing certain gestures or actions characteristic of an intent to skip supplemental content, such as a waving of a hand, pointing to a portion of display 100 corresponding to a "Skip Ad" icon, or the like. Controller 160 may further include sensors to detect pressure applied by viewer 140 upon any buttons, indicating an intent to press one of the buttons to command display 100 to skip an advertisement and/or change a channel, for instance. These sensors may also detect motion of controller 160, indicating that it has been picked up by viewer 140 to skip supplemental content.
In some embodiments of the disclosure, controller 160 may be any device capable of issuing any commands to display 100 and/or processing unit 120 directly or indirectly, such as a dedicated remote controller, a smartphone, a laptop computer or other computing device, or the like. In some embodiments, camera 150 may be physically separate from but communicatively coupled to display 100, or may be incorporated into any devices such as display 100, controller 160, and the like. As one example, controller 160 may be a smartphone and camera 150 may be an internal camera of this smartphone.

Processing unit 120 may additionally retrieve information on viewer 140 from user profile storage 130, where the retrieved information is indicative of a skip history of viewer 140. For example, retrieved information may include data such as the frequency with which users have skipped advertisements or other supplemental content in the past, and information on supplemental content skipped and not skipped, such as the lengths or other attributes of skipped and not-skipped content. Information may also include metadata of previously viewed supplemental content that may indicate attributes such as product types whose ads were skipped and not skipped, brands whose ads were skipped and not skipped, skipped content genres or other subject matter, times of day ads were skipped and not skipped, positions within displayed content at which ads were skipped or not skipped, and the like. Stored user information may include any type and quantity of information that may help indicate a skip likelihood.

Once processing unit 120 has determined whether viewer 140 is likely to skip supplemental content, it may select supplemental content accordingly. For example, if processing unit 120 determines that viewer 140 is unlikely to skip upcoming supplemental content, it may select content from first supplemental content storage 110-1 for display on display 100. In this example, first supplemental content storage 110-1 may contain the supplemental content originally intended for display in that particular content slot. That is, if viewer 140 is not deemed likely to skip supplemental content, processing unit 120 may proceed with display of the supplemental content as originally intended. Conversely, if processing unit 120 determines that viewer 140 is likely to skip upcoming supplemental content, it may instead select content from second supplemental content storage 110-2 for display on display 100. As above, second supplemental content storage 110-2 may contain supplemental content tailored to viewers likely to skip such content. This content may, for example, be supplemental content that conveys its intended message rapidly, such as short ads, still frames of a product, ads with product information and/or appealing images in its first few frames, or the like.

Processing unit 120 may determine the likelihood that viewer 140 will skip supplemental content in any manner. FIG. 2 conceptually illustrates one method for determining a likelihood of skipping supplemental content, in accordance with embodiments of the disclosure. In particular, movement of a cursor over an ad insertion countdown, a position of a "Skip Ad" icon prior to the icon being "active" or able to be clicked on, and/or another UI element shortly before or after an ad is played may indicate that the viewer 140 is preparing to, and likely to, skip or otherwise avoid viewing/listening to the upcoming ad.
Here, display 200 projects or displays content, such as a movie or show, and may also concurrently or upon selection display one or more of an ad insertion countdown, a skip ad countdown, or a "Skip Ad" button in a section 210 of the display area, as well as a progress bar 240 in section 220 of the display area.

In some embodiments, a "Skip Ad" button or other UI element may appear in a lower right-hand portion 230-1 of the display area, which is grayed out and not selectable but which informs the viewer 140 that it will soon be possible to skip an upcoming ad. At a predetermined time, for instance, after an ad insertion countdown has expired, the preselected ad begins to be played, and a skip ad countdown has expired, the "Skip Ad" button becomes active so that portion 230-2 of the display changes to a "Skip Now" or other button that allows users to select it in order to skip the currently playing ad. Thus, if viewer 140 moves a cursor over the "Skip Ad" button of portion 230-1 prior to it becoming active, this may indicate that the viewer 140 intends to skip the upcoming supplemental content. Similarly, if viewer 140 moves his or her cursor over the "Skip Now" button of portion 230-2 once it is active, this may also indicate that viewer 140 intends to skip the supplemental content that is currently being played.

In some embodiments, the "Skip Now" button may include or be preceded by a skip countdown timer, such as "Skip in X", where X may be, e.g., any predetermined number of seconds until a "Skip Now" button is shown and made active. This alerts viewers to an upcoming time at which they may skip supplemental content. As with the "Skip Ad" button, if viewer 140 moves a cursor over the "Skip in X" button, this may indicate intent to press or select the "Skip Now" button when it becomes active, thus skipping the supplemental content.

In some embodiments of the disclosure, an ad insertion countdown may be displayed, the preselected ad begins to be played at the expiration of the ad insertion countdown, and a skip countdown timer may then be displayed, followed by a "Skip Now" or other button once the skip countdown timer expires, where skip likelihood may be determined during the ad insertion countdown and/or the skip countdown period. In some embodiments of the disclosure, the skip countdown timer may not be displayed, with only a "Skip Now" or other button being shown once skip operations are permitted. In some other embodiments, neither the skip countdown timer nor a "Skip Now" or other button may be displayed, with play of the supplemental content simply beginning at the commencement of the supplemental content time slot, such as when non-skippable supplemental content is played.

In further embodiments of the disclosure, some supplemental content slots may be automatically skipped, in which case a notice such as a countdown may be displayed indicating the time until an automatic skip occurs. This notice of an upcoming automatic skip may be displayed for viewers to see, or alternatively no notice may be displayed and supplemental content may simply be skipped automatically. In some embodiments, automatic skipping may occur at times in response to a determination of skip likelihood. That is, systems of embodiments of the disclosure may automatically skip supplemental content for a user when it is deemed that he or she is going to skip the supplemental content anyway.
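The "Skip Ad" / "Skip in X" / "Skip Now" progression described above can be sketched as a tiny state machine; the timing, class name, and label rendering below are our own assumptions for illustration, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class SkipButton:
    """Models the grayed-out countdown turning into an active 'Skip Now' button."""
    skip_delay: float = 5.0   # seconds until skipping is permitted (assumed)
    elapsed: float = 0.0

    def tick(self, dt: float) -> str:
        self.elapsed += dt
        if self.elapsed < self.skip_delay:
            remaining = self.skip_delay - self.elapsed
            return f"Skip in {remaining:.0f}"  # inactive; a cursor parked here hints at skip intent
        return "Skip Now"                      # active and selectable

button = SkipButton()
for _ in range(3):
    print(button.tick(2.0))   # Skip in 3 / Skip in 1 / Skip Now
```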
In some embodiments, progress bar 240 may be displayed, and may also indicate the slots within content at which ads will be played. In this case, as play of content approaches either ad slot 240-1 or 240-2, movement of a cursor near the portion 210 of the display area at which the "Skip Now" icon will appear may indicate intent to skip the upcoming supplemental content. That is, movement of a cursor into portion 210 when a supplemental content slot 240-1 or 240-2 is approaching, even if no "Skip Ad" icon has appeared yet, may be used to indicate likely intent to skip upcoming supplemental content.

FIG. 3 shows an embodiment of an illustrative user equipment device 300 that may serve as a display 100 and/or processing unit 120. User equipment device 300 may receive content and data via input/output (hereinafter "I/O") path 302. I/O path 302 may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local area network (LAN) or wide area network (WAN), and/or other content) and data to control circuitry 304, which includes processing circuitry 306 and storage 308. Control circuitry 304 may be used to send and receive commands, requests, and other suitable data using I/O path 302. I/O path 302 may connect control circuitry 304 (and specifically processing circuitry 306) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths but are shown as a single path in FIG. 3 to avoid overcomplicating the drawing.

Control circuitry 304 may be based on any suitable processing circuitry such as processing circuitry 306. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores). In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry 304 executes instructions for receiving streamed content and executing its display, such as executing application programs that provide interfaces for content providers to stream and display content on display 312.

Control circuitry 304 may thus include communications circuitry suitable for communicating with trailer generation server 220, content server 230, or any other networks or servers. Communications circuitry may include a cable modem, an integrated services digital network (ISDN) modem, a digital subscriber line (DSL) modem, a telephone modem, an Ethernet card, or a wireless modem for communication with other equipment, or any other suitable communications circuitry. Such communications may involve the Internet or any other suitable communications networks or paths. In addition, communications circuitry may include circuitry that enables peer-to-peer communication of user equipment devices, or communication of user equipment devices in locations remote from each other.

Memory may be an electronic storage device provided as storage 308, which is part of control circuitry 304.
As referred to herein, the phrase "electronic storage device" or "storage device" should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVRs, sometimes called personal video recorders, or PVRs), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Storage 308 may be used to store various types of content described herein as well as media guidance data described above. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage may be used to supplement storage 308 or instead of storage 308.

Storage 308 may also store instructions or code for an operating system and any number of application programs to be executed by the operating system. In operation, processing circuitry 306 retrieves and executes the instructions stored in storage 308, to run both the operating system and any application programs started by the user. The application programs can include one or more content display applications that implement an interface allowing users to select and display content on display 312 or another display.

Control circuitry 304 may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits or combinations of such circuits. Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be included. Control circuitry 304 may also include scaler circuitry for upconverting and downconverting content into the preferred output format of the user equipment 300. Circuitry 304 may also include digital-to-analog converter circuitry and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by the user equipment device to receive and to display, to play, or to record content. The tuning and encoding circuitry may also be used to receive guidance data. The circuitry described herein, including, for example, the tuning, video generating, encoding, decoding, encrypting, decrypting, scaler, and analog/digital circuitry, may be implemented using software running on one or more general-purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch and record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.). If storage 308 is provided as a separate device from user equipment 300, the tuning and encoding circuitry (including multiple tuners) may be associated with storage 308.

A user may send instructions to control circuitry 304 using user input interface 310. User input interface 310 may be any suitable user interface, such as a remote control, mouse, trackball, keypad, keyboard, touch-screen, touchpad, stylus input, joystick, voice recognition interface, or other user input interfaces. Display 312 may be provided as a stand-alone device or integrated with other elements of user equipment device 300. For example, display 312 may be a touchscreen or touch-sensitive display.
In such circumstances, user input interface 310 may be integrated with or combined with display 312. Display 312 may be one or more of a monitor, a television, a liquid crystal display (LCD) for a mobile device, amorphous silicon display, low temperature poly silicon display, electronic ink display, electrophoretic display, active matrix display, electro-wetting display, electrofluidic display, cathode ray tube display, light-emitting diode display, electroluminescent display, plasma display panel, high-performance addressing display, thin-film transistor display, organic light-emitting diode display, surface-conduction electron-emitter display (SED), laser television, carbon nanotubes, quantum dot display, interferometric modulator display, or any other suitable equipment for displaying visual images.

In some embodiments, display 312 may be HDTV-capable. In some embodiments, display 312 may be a 3D display, and the interactive media guidance application and any suitable content may be displayed in 3D. A video card or graphics card may generate the output to the display 312. The video card may offer various functions such as accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, or the ability to connect multiple monitors. The video card may be any processing circuitry described above in relation to control circuitry 304. The video card may be integrated with the control circuitry 304. Speakers 314 may be provided as integrated with other elements of user equipment device 300 or may be stand-alone units. The audio component of videos and other content displayed on display 312 may be played through speakers 314. In some embodiments, the audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers 314.

FIG. 4 is an embodiment of an illustrative system for selecting supplemental content according to skip likelihood, constructed for use according to embodiments of the disclosure. Device 300 of FIG. 3 can be implemented in system 400 of FIG. 4 as user television equipment 402, user computer equipment 404, a wireless user communications device 406, or any other type of user equipment suitable for determining skip likelihood and selecting supplemental content accordingly. For example, device 300 may be incorporated into display 100, e.g., television 402. User equipment devices may be part of a network of devices. Various network configurations of devices may be implemented and are discussed in more detail below.

In system 400, there is typically more than one of each type of user equipment device, but only one of each is shown in FIG. 4 to avoid overcomplicating the drawing. In addition, each user may utilize more than one type of user equipment device and more than one of each type of user equipment device.

The user equipment devices may be coupled to communications network 414. Namely, user television equipment 402, user computer equipment 404, and wireless user communications device 406 are coupled to communications network 414 via communications paths 408, 410, and 412, respectively. Communications network 414 may be one or more networks including the Internet, a mobile phone network, mobile voice or data network (e.g., a 4G or LTE network), cable network, public switched telephone network, or other types of communications network or combinations of communications networks.
Paths 408, 410, and 412 may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. Path 412 is drawn with dotted lines to indicate that in the exemplary embodiment shown in FIG. 4 it is a wireless path, and paths 408 and 410 are drawn as solid lines to indicate they are wired paths (although these paths may be wireless paths, if desired). Communications with the user equipment devices may be provided by one or more of these communications paths, but are shown as a single path in FIG. 4 to avoid overcomplicating the drawing.

Although communications paths are not drawn between user equipment devices, these devices may communicate directly with each other via communication paths, such as those described above in connection with paths 408, 410, and 412, as well as other short-range point-to-point communication paths, such as USB cables, IEEE 1394 cables, wireless paths (e.g., Bluetooth, infrared, IEEE 802.11x, etc.), or other short-range communication via wired or wireless paths. BLUETOOTH is a certification mark owned by Bluetooth SIG, INC. The user equipment devices may also communicate with each other through an indirect path via communications network 414.

System 400 also includes content source 416 and content presentation server 418. The content source 416 represents any computer-accessible source of content, such as a storage for the movies, advertisements, and metadata. The content source 416 may be or include the supplemental content storages 110-1 and 110-2 as well as user profile storage 130 of FIG. 1.

The content presentation server 418 may store and execute various software modules for implementing the skip likelihood determination and supplemental content selection functionality described herein.

FIG. 5 is an embodiment of an illustrative content server 230 constructed for use according to some embodiments of the disclosure. Here, device 500 may serve as a content server. Device 500 may receive content and data via I/O paths 502 and 504. I/O path 502 may provide content and data to the various devices 200 and/or server 220, while I/O path 504 may provide data to, and receive content from, content database 240. Like the device 400, the device 500 has control circuitry 506, which includes processing circuitry 508 and storage 510. The control circuitry 506, processing circuitry 508, and storage 510 may be constructed, and may operate, in a similar manner to the respective components of device 400.

Storage 510 is a memory that stores a number of programs for execution by processing circuitry 508. In particular, storage 510 may store a number of device interfaces 512, a skip intent prediction module 514 for determining the likelihood of viewers skipping supplemental content, and a content selection module 516 for selecting supplemental content upon determination of skip likelihood. The device interfaces 512 are interface programs for handling the exchange of commands and data with the various devices 200.

Any of the various modules and functions described herein may reside on any one or more devices. For example, skip intent prediction functionality may reside on display 100, or on a remote server such as content presentation server 418.
FIGS. 6 and 7 conceptually illustrate determination of a likelihood of skipping supplemental content, in accordance with embodiments of the disclosure. FIG. 6 illustrates skip likelihood determination prior to display of supplemental content such as an advertisement. Here, a video player 610 such as display 100 may communicate with a supplemental content server 600 such as content presentation server 418, and with a skip ad intent predictor 620. Skip ad intent predictor 620 may be a module residing on content presentation server 418, or may reside in any other device such as display 100. Video player 610 may signal to skip supplemental content intent predictor 620 that a supplemental content item is upcoming at a specified time, perhaps in conjunction with display of a "Skip Ad" or other icon or UI element. The skip supplemental content intent predictor 620 receives inputs such as user actions, cursor locations from video player 610 or another device such as controller 160, user motion or position information from camera 150, or any other suitable inputs. From these inputs, skip supplemental content intent predictor 620 determines skip supplemental content likelihood, or whether the viewer 140 is likely to skip the upcoming supplemental content. The resulting prediction is transmitted to video player 610, which may request a supplemental content item according to whether the viewer 140 is likely to attempt to skip the supplemental content or not. The video player 610 may, for example, transmit a request to supplemental content server 600 for a particular supplemental content item, for a supplemental content item meeting certain criteria, or may simply transmit the skip likelihood if the ad server 600 is configured to select supplemental content accordingly. Supplemental content server 600 may then return a suitable supplemental content item to video player 610, for display to the viewer 140.

FIG. 7 illustrates skip likelihood determination after display of supplemental content such as an advertisement has already begun. Here, video player 610 may signal to skip supplemental content intent predictor 620 that supplemental content has already begun, perhaps in conjunction with display of an active "Skip Ad" or other icon or UI element. As in FIG. 6, skip supplemental content intent predictor 620 also receives inputs such as user actions, cursor locations from video player 610 or another device such as controller 160, user motion or position information from camera 150, or any other suitable inputs. From these inputs, skip supplemental content intent predictor 620 determines skip supplemental content likelihood, or whether the viewer 140 is likely to skip the currently playing supplemental content. The resulting prediction is transmitted to video player 610, which may continue to play the currently playing supplemental content, or request another supplemental content item according to whether the viewer 140 is likely to attempt to skip the supplemental content or not. The video player 610 may, for example, transmit a request to supplemental content server 600 for a particular supplemental content item or for supplemental content meeting certain criteria, or may simply transmit the skip likelihood if the supplemental content server 600 is configured to select supplemental content accordingly. Supplemental content server 600 may then return a suitable supplemental content item to video player 610, for display to the viewer 140.
As an example, supplemental content server 600 may return a short supplemental content item, supplemental content whose message is conveyed in its initial frames, or supplemental content suitable for a non-skippable time slot. This latter case may occur when, for instance, the video player 610 designates the supplemental content slot as non-skippable in response to a determination that the user is likely to attempt to skip the supplemental content, preventing the viewer 140 from skipping the supplemental content. As another example, video player 610 may buffer the currently playing supplemental content and, upon a determination of skip likelihood, may show key frames of the buffered supplemental content instead of the full supplemental content, skip to the end of the supplemental content, or the like. As a further example, video player 610 may retrieve or buffer two supplemental content items, one suitable for a likely skip, and switch to playback of this skip-suitable supplemental content item once skip likelihood is determined.

FIG. 8 is a flowchart illustrating processing steps for selecting supplemental content according to skip likelihood, in accordance with some embodiments of the disclosure. Here, the process begins with the content presentation server 418 transmitting a content item such as a movie for display on, e.g., display 100 (Step 800). During display of the content item, supplemental content slots may be designated for breaks in display of the content item, e.g., the movie, and corresponding display of supplemental content. Display 100 and/or content presentation server 418 may accordingly identify a time when display of the content item is to be interrupted by display of at least one supplemental content item (Step 810). Content presentation server 418 may then determine the likelihood of receiving a command to skip play of the supplemental content item (Step 820).

As above, skip likelihood may be determined in any manner, from any inputs. As one example, display 100 may transmit cursor position to content presentation server 418, and positioning of the cursor on a "Skip Ad" icon or UI element may indicate intent to skip. Further inputs may include viewer 140 position or actions, as detected by camera 150. For instance, camera 150 may capture images of viewer 140 grasping controller 160, indicating an intent to use the controller 160 to skip supplemental content. Camera 150 may also capture images of viewer 140 performing certain gestures or actions characteristic of an intent to skip supplemental content, such as a waving of a hand, pointing to a portion of display 100 corresponding to a "Skip Ad" icon, or the like. Content presentation server 418 may receive these images from camera 150 and be programmed to recognize these gestures, actions, or motions. Recognition of gestures, actions, motions, and the like may be accomplished in any manner, such as by comparison of input images to a database of labeled images of such gestures, actions, and motions. Alternatively, server 418 may execute one or more machine learning models such as convolutional neural networks or the like, which are trained to recognize input images or video as constituting certain gestures, actions, or motions. Such machine learning models are known. Training of such models may be performed by input of images and/or video labeled as corresponding to specific gestures, motions, or actions.
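To make the player/predictor/server exchange of FIGS. 6 and 7 concrete, the following is a minimal Python sketch. It is illustrative only: the class names, the criteria keys, and the random stand-in for the intent predictor are hypothetical and are not elements of the disclosure.

from dataclasses import dataclass
import random

@dataclass
class Prediction:
    skip_likely: bool  # binary form of the skip likelihood discussed above

class IntentPredictor:
    """Stand-in for the skip intent predictor (620); a real module would
    consume cursor positions, camera frames, controller state, etc."""
    def predict(self, slot_id):
        return Prediction(skip_likely=random.random() > 0.5)

class ContentServer:
    """Stand-in for the supplemental content server (600)."""
    def fetch(self, criteria):
        return "front-loaded-short-ad" if criteria.get("front_loaded") else "standard-ad"

def request_supplemental_content(predictor, server, slot_id):
    prediction = predictor.predict(slot_id)
    if prediction.skip_likely:
        # Strategies named above: a short item, or one whose message is
        # conveyed in its initial frames.
        return server.fetch({"max_duration_s": 6, "front_loaded": True})
    return server.fetch({})

print(request_supplemental_content(IntentPredictor(), ContentServer(), "slot-1"))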
Content presentation server 418 may also receive voice or other input from viewer 140, such as via microphones of display 100 or another device, containing commands to skip upcoming supplemental content. Server 418 may execute one or more known natural language processing modules to convert input speech to text, and to recognize skip commands in this text.

Content presentation server 418 may also receive input from controllers such as controller 160, where certain such inputs may indicate skip likelihood. For example, controller 160 detection of pressure applied by viewer 140 upon any buttons may indicate an intent to press one of the buttons to command display 100 to skip supplemental content. Detected controller 160 motion may also indicate that it has been picked up by viewer 140 to skip supplemental content.

Server 418 may additionally retrieve information on viewer 140 from user profile storage 130, where the retrieved information is indicative of a skip history of viewer 140. For example, retrieved information may include data such as the frequency with which users have skipped advertisements or other supplemental content in the past, and information on supplemental content skipped and not skipped, such as the lengths or other attributes of skipped content. Information may also include metadata of previously viewed supplemental content that may indicate attributes such as product types whose ads were skipped, brands whose ads were skipped and not skipped, skipped content genres or other subject matter, times of day ads were skipped, positions within displayed content at which ads were skipped or not skipped, and the like. Stored user information may include any type and quantity of information that may help indicate a skip likelihood.

Skip likelihood may be determined in any manner from the above inputs. As one example, skip likelihood may be a binary quantity (skip likely/not likely), with the above inputs contributing to determination of this binary quantity in any manner. In some embodiments, skip likelihood may be found if any of the above inputs is present, or if more than a predetermined number of inputs occur. In other embodiments, each input may be assigned a numerical value, and the values of any inputs present at a given time may be summed. When this sum exceeds some predetermined value, skip likelihood is found. As another example, skip likelihood may be a numerical value such as a percentage rather than a binary quantity, with each input assigned a numerical value such as a percentage. Accordingly, the sum of the values for any inputs present at a given time may represent the aggregate percentage skip likelihood. Skip likelihood may be found when this aggregate percentage exceeds some predetermined threshold value, e.g., >50% or >60%. Any suitable threshold value may be used. Additionally, any of the above quantities may have any suitable numerical value.

Once skip likelihood is determined, server 418 determines whether it is likely that the viewer 140 intends to skip supplemental content (Step 830), such as upon a determination of binary skip likely, or a skip likelihood value that exceeds some threshold value, e.g., 50%. If server 418 finds that the user likely does not intend to skip content (skip not likely), server 418 transmits supplemental content for display at the identified time (Step 840). That is, for example, server 418 transmits its ad as originally intended, at the intended time.
If, on the other hand, skip likelihood is found, server 418 may select one of a plurality of supplemental content items based on this likelihood (Step 850). As above, server 418 may select supplemental content that conveys its intended message rapidly, such as short supplemental content, still frames of a product, supplemental content with product information and/or appealing images in its first few frames, or the like. Server 418 may then transmit this selected supplemental content for display (Step 860).

In addition to selecting certain supplemental content based on skip likelihood, server 418 may also designate supplemental content slots as skippable or non-skippable according to skip likelihood. FIGS. 9 and 10 are flowcharts illustrating processing steps for reacting to a likelihood of skipping supplemental content, in accordance with some embodiments of the disclosure. FIG. 9 illustrates reaction to skip likelihood that may be preferred by a supplemental content creator or distributor, for example. More specifically, server 418 may determine an intent to skip supplemental content (Step 900), such as by determining a skip likelihood exceeding a threshold value, for a particular supplemental content slot. A check may be made for the determined intent to skip (Step 910). If a likely intent to skip is found, for instance, by monitoring input(s) (e.g., cursor position, eye gaze, button pressure, hand movement, controller movement, etc.) during an ad insertion countdown and/or at other times, server 418 may designate this supplemental content slot as non-skippable (Step 920), preventing the viewer 140 from skipping the supplemental content even if he or she may wish, and intend, to do so. Supplemental content may then be selected for this non-skippable slot (Step 930), and transmitted for display (Step 940).

Conversely, if a likelihood of no skip is found, server 418 may designate the supplemental content slot as skippable (Step 950), perhaps aiding in fulfilling a quota or requirement for designating a certain number of content slots as skippable while still maintaining a likelihood that the viewer 140 views the supplemental content. Server 418 may then select supplemental content for this slot (Step 960), where this supplemental content may be selected with knowledge that a skip is unlikely, allowing for selection of supplemental content as desired. The selected supplemental content may then be transmitted for display, along with an interface element presenting the viewer with, e.g., a skip supplemental content countdown followed by the option to skip (Step 970).

Thus, from the perspective of a supplemental content creator or distributor who may wish for their content to be viewed, supplemental content slots may be designated non-skippable when viewers are likely to attempt to skip that content, so as to force viewers to view the supplemental content. Similarly, supplemental content slots may be designated as skippable when viewers are unlikely to attempt to skip that content, as they are likely to view (i.e., unlikely to skip) the content anyway.

FIG. 10 illustrates reaction to skip likelihood that may be preferred by viewers, for example. Here, server 418 may determine an intent to skip supplemental content (Step 1000), such as by determining a skip likelihood exceeding a threshold value, for a particular supplemental content slot. A check may be made for the determined intent to skip (Step 1010).
If a likely intent to skip is found, for instance, by monitoring input(s) (e.g., cursor position, eye gaze, button pressure, hand movement, controller movement, etc.) during an ad insertion countdown and/or at other times, server 418 may designate this supplemental content slot as skippable (Step 1020), consistent with the user's wish and intent to skip the content. Supplemental content may then be selected for this skippable slot (Step 1030), such as by selection of supplemental content that conveys its message rapidly, due to the likelihood of the supplemental content being skipped at least in part. The selected supplemental content may then be transmitted for display, perhaps along with an interface element presenting the viewer with a skip supplemental content countdown followed by the option to skip (Step 1040).

Conversely, if a likelihood of no skip is found, server 418 may designate the supplemental content slot as non-skippable (Step 1050). In this manner, viewers who typically wish for minimal exposure to supplemental content (e.g., few ads) are allowed to skip supplemental content when they express a desire to do so, and may also view this supplemental content when they wish to do so. Supplemental content may then be selected for this non-skippable slot (Step 1060) and transmitted for display (Step 1070), where selection may occur in any manner to select any desired content, as it will likely be viewed in its entirety.

FIG. 11 illustrates exemplary inputs for determining a likelihood of skipping supplemental content, in accordance with some embodiments of the disclosure. Any inputs may be employed to determine skip likelihood. In some embodiments, these inputs may include cursor position on display 100, eye gaze, controller 160 and sensors therein, and user information. As above, cursor position may indicate skip likelihood, such as when the cursor is placed over a "Skip Ad" icon prior to or after its activation, or placed at the location the "Skip Ad" icon will soon appear. Eye gaze directed at the "Skip Ad" icon prior to or after its activation, or directed at the location the "Skip Ad" icon will soon appear, may similarly indicate skip likelihood. In some embodiments, the cursor position may indicate skip likelihood, such as when the cursor is placed over some other UI element that provides a control to, for example, close a browser tab in which the content item is being displayed, select a different browser tab from the browser tab in which the content item is being displayed, select a different content item in a playlist, and/or mute audio of the content item or browser tab in which the content item is being displayed.

Inputs from controller 160 may also indicate skip likelihood, with some illustrative examples being button pressure, e.g., a user partially or fully depressing any controller button, and controller 160 movement, both indicating preparation to use the controller 160 to skip supplemental content. Additionally, user information describing past skip behavior of a user may indicate skip likelihood in that, for example, current user behavior consistent with past behavior that led to a skip command may indicate skip likelihood. Thus, for example, skip likelihood may be found when users are presented with an ad of the same genre, or ads for the same product, that they have consistently skipped in the past. Any one or more of these inputs may be used to determine skip likelihood, in any manner.
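Stepping back to the two designation flows of FIGS. 9 and 10, their opposite treatment of the same skip determination can be summarized in a few lines of Python. This is a sketch under the assumption of a simple binary skip decision; the policy labels are hypothetical, not terms from the disclosure.

def designate_slot(skip_likely, policy):
    """'creator' mirrors FIG. 9: lock the slot when a skip is likely.
    'viewer' mirrors FIG. 10: honor the viewer's intent to skip."""
    if policy == "creator":
        return "non-skippable" if skip_likely else "skippable"
    if policy == "viewer":
        return "skippable" if skip_likely else "non-skippable"
    raise ValueError(f"unknown policy: {policy}")

# The same likely-skip determination yields opposite designations:
for policy in ("creator", "viewer"):
    print(policy, "->", designate_slot(True, policy))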
As above, skip likelihood may be determined in any manner from any one or more of the factors listed in FIG. 11. As one example, each factor may be assigned a numerical likelihood value, which may be any function of user actions, and which may take any values. For instance, a 20% likelihood may be assigned to every 2-second span at which a cursor is placed over a "Skip Ad" button or other UI element. Thus, placing a cursor over a "Skip Ad" button for longer than 10 seconds may generate a 100% likelihood that the viewer 140 intends to skip upcoming supplemental content. Similarly, placing a cursor over the "Skip Ad" button for only 2 seconds may generate a 20% likelihood of skip. The same values may be assigned to eye gaze duration directed at the "Skip Ad" button or its position on the display area. Thus, for instance, moving the cursor over the "Skip Ad" button for 4 seconds, with corresponding eye gaze also directed at the area of the "Skip Ad" button for the same 4 seconds, may result in a determination of 40% + 40% = 80% skip likelihood. Similarly, controller movement and button pressure may each be assigned, e.g., 50% if a "Skip" button is partially depressed, 30% if any other button is partially depressed, and 20% upon controller movement once the "Skip Ad" button or other notice appears. In this case, if a user picks up a controller and partially depresses a button other than a skip button (e.g., when the controller has no skip button) within the time the user is notified of an upcoming skip opportunity, a 20% + 30% = 50% skip likelihood may be assigned, whereas if the controller has a skip button and it is partially depressed, a 20% + 50% = 70% likelihood of skip is assigned. User information may also be assigned skip likelihood values, e.g., 20% if upcoming supplemental content matches a genre that is often (e.g., >50% of the time historically) skipped by the viewer, or 30% if the upcoming supplemental content concerns a product or brand that is often (e.g., >50%) skipped by the viewer. Thus, for example, if a viewer picks up his or her controller once a skip opportunity appears, and the upcoming supplemental content relates to a product that is often skipped by the viewer, a 20% + 30% = 50% skip likelihood may be determined.

FIG. 12 illustrates exemplary actions performed in response to a likelihood of skipping supplemental content, in accordance with some embodiments of the disclosure. Similar to FIGS. 9 and 10, FIG. 12 illustrates re-designation of supplemental content slots as skippable or non-skippable responsive to a determination of skip likelihood. In the example of FIG. 12, a content item is assigned four different ad slots, at 5, 10, 15, and 25 minutes into the content, respectively. These four ad slots are initially designated as skippable, non-skippable, skippable, and non-skippable, respectively. That is, viewers are allowed to skip the first and third ad slots, but are unable to skip either of the second or fourth ad slots.

At the 4:55 mark, server 418 may determine a low skip likelihood for the upcoming 5-minute ad slot. Server 418 may accordingly modify the ad slot schedule as follows: the four different ad slots are re-designated, in order, as non-skippable, skippable, skippable, and non-skippable. In this manner, the first (5-minute) ad slot is re-designated as non-skippable as it is likely to be viewed anyway, which is beneficial from the perspective of a viewer, as it preserves skippable ad slots for later slots that the viewer may wish to skip.
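For concreteness, the illustrative percentages above can be wired into a small scoring routine. The following Python sketch reproduces the worked figures; the weights, the 2-second quantization, and the 100% cap are the example's hypothetical values, not a normative scheme from the disclosure.

# Hypothetical weights taken from the worked example above.
CURSOR_PER_2S = 20          # % per 2-second cursor hover on "Skip Ad"
GAZE_PER_2S = 20            # % per 2-second eye gaze at "Skip Ad"
SKIP_BUTTON_PARTIAL = 50    # partial press of a dedicated skip button
OTHER_BUTTON_PARTIAL = 30   # partial press of any other button
CONTROLLER_MOVED = 20       # controller picked up after the notice appears
GENRE_OFTEN_SKIPPED = 20    # upcoming ad matches an often-skipped genre
PRODUCT_OFTEN_SKIPPED = 30  # upcoming ad concerns an often-skipped product

def skip_likelihood(hover_s=0, gaze_s=0, skip_btn=False, other_btn=False,
                    moved=False, genre_hit=False, product_hit=False):
    score = (hover_s // 2) * CURSOR_PER_2S + (gaze_s // 2) * GAZE_PER_2S
    score += SKIP_BUTTON_PARTIAL if skip_btn else 0
    score += OTHER_BUTTON_PARTIAL if other_btn else 0
    score += CONTROLLER_MOVED if moved else 0
    score += GENRE_OFTEN_SKIPPED if genre_hit else 0
    score += PRODUCT_OFTEN_SKIPPED if product_hit else 0
    return min(score, 100)

print(skip_likelihood(hover_s=4, gaze_s=4))           # 80, as in the text
print(skip_likelihood(moved=True, other_btn=True))    # 50
print(skip_likelihood(moved=True, skip_btn=True))     # 70
print(skip_likelihood(moved=True, product_hit=True))  # 50

A threshold (e.g., >50%) applied to the returned score then yields the binary skip-likely determination used in the flowcharts.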
Ad slots may thus be re-designated as skippable or non-skippable on the fly according to user skip intent, to better serve the interests of various parties as desired.

FIG. 13 is a flowchart illustrating processing steps for reacting to a likelihood of skipping supplemental content, in accordance with further embodiments of the disclosure. As in FIGS. 9-10, skip likelihood may be employed to determine whether a supplemental content slot is to be designated skippable or non-skippable. Similar to the process of FIG. 8, the process of FIG. 13 begins with the content presentation server 418 transmitting a content item such as a movie for display on display 100 (Step 1300), for example. During display of the content item, supplemental content slots may be designated for breaks in display of the content item, e.g., the movie, and corresponding display of supplemental content. Display 100 and/or content presentation server 418 may accordingly identify a time when display of the content item is to be interrupted by display of at least one supplemental content item. More specifically, display 100 and/or server 418 may receive an indication of a first time slot designated for interruption of the displayed content and play of skippable supplemental content, as well as an indication of a second time slot designated for interruption of the displayed content and play of a non-skippable supplemental content item (Step 1310). That is, display 100 and/or server 418 may receive indications of an upcoming skippable time slot and a later non-skippable time slot, e.g., two future supplemental content slots, one skippable and one non-skippable. Content presentation server 418 may then determine the likelihood of receiving a command to skip play of the supplemental content item (Step 1320).

If a skip is deemed likely (Step 1330), server 418 may designate the upcoming supplemental content slot as either skippable or non-skippable, in response to the determined likelihood of receiving a command to skip supplemental content. As in FIGS. 9-10, a supplemental content slot may be designated as either skippable or non-skippable as desired. If, e.g., the interests of content creators or distributors are to be considered, server 418 may designate the upcoming supplemental content slot as non-skippable upon determination of skip likelihood, to prevent users from their intended skipping of supplemental content and force them to view it. In contrast, if viewer interests are to be considered, server 418 may instead designate the upcoming supplemental content slot as skippable upon determination of skip likelihood, to allow users to keep viewing their desired content and increase viewer interest and engagement. Accordingly, embodiments of the disclosure allow upcoming supplemental content slots to be designated, or re-designated, as skippable or non-skippable as desired.

In particular, if a skip is deemed likely, server 418 may re-designate the first or nearest upcoming time slot as non-skippable, and re-designate the next time slot as skippable (Step 1340). That is, the upcoming skippable time slot may be re-designated as non-skippable in response to the viewer intending to skip its supplemental content. To keep the number of skippable and non-skippable time slots generally constant, the following time slot is then re-designated as skippable, although this step is optional.
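A sketch of the Step 1340 re-designation follows, assuming slot designations are held as an ordered list of strings; the function name and representation are illustrative only.

def redesignate(slots, skip_likely):
    """On a likely skip, make the nearest slot non-skippable and, optionally,
    the next slot skippable to keep the overall counts roughly constant."""
    if not skip_likely or not slots:
        return slots
    out = list(slots)
    out[0] = "non-skippable"
    if len(out) > 1:
        out[1] = "skippable"  # the optional balancing step described above
    return out

# Starting from the schedule skippable/non-skippable/skippable/non-skippable:
print(redesignate(["skippable", "non-skippable", "skippable", "non-skippable"], True))
# -> ['non-skippable', 'skippable', 'skippable', 'non-skippable']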
After designation or re-designation of the upcoming supplemental content slot as either skippable or non-skippable, server 418 may transmit for display a supplemental content item for play during the supplemental content slot (Step 1350). The process may then terminate (Step 1360). If a skip is deemed not likely, the process may instead skip to Step 1350 without re-designation of any time slots from their original skippable or non-skippable designations. As above, server 418 may select this supplemental content according to determined skip likelihood, such as by selecting content that conveys its intended message quickly, still images, and the like, if skip likelihood is high and the supplemental content slot has been designated skippable. Conversely, if skip likelihood is deemed low or the supplemental content slot has been designated as non-skippable, supplemental content may be selected in any other manner, without regard to the speed at which an intended image or message is conveyed.

The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the disclosure. However, it will be apparent to one skilled in the art that the specific details are not required to practice the methods and systems of the disclosure. Thus, the foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. For example, skip likelihood may be determined in any manner from any inputs or combinations thereof. Any supplemental content may be selected in response to any determined skip likelihood, and any responsive actions may be taken. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the methods and systems of the disclosure and various embodiments with various modifications as are suited to the particular use contemplated. Additionally, different features of the various embodiments, disclosed or otherwise, can be mixed and matched or otherwise combined so as to create further embodiments contemplated by the disclosure.
3 Ways Product Managers Survive a Black Hole Backlog

I speak with close to 100 Product Managers at companies of all sizes every week. And no matter how big each company is, I hear a common theme in these discussions: Their product backlogs have devolved into black holes. Features are put in, but most never come back out as implemented. What once was a crisp, manageable backlog quickly becomes an unwieldy list of features, some good, many bad. Most Product Managers have no way to distinguish the features that matter.

It is easy to see how this happens. I have been there, too. My colleagues in sales, marketing, and engineering had meaningful features that they thought would make the product better. So, they threw them over the fence at me. I then added them to the proverbial back burner (aka the backlog). They had to go somewhere, right? "Sure, let me add it to the backlog" might as well be an automated answer. Without a strategy to vet features based on business impact, it is easier to add all of them to an endless backlog that never seems to shrink.

Then release planning rolls around. And it becomes too easy for Product Managers to focus only on what is top of mind today, which keeps previously requested features a distant memory. The result? An abandoned backlog, where features that could be game-changing get buried beneath the new, new thing.

But here is the good news: Your own backlog does not have to suffer the same fate. Leading product myself and working with thousands of product leaders has taught me three tricks to trim even the longest product backlogs:

Set your strategy

It is impossible to groom your product backlog if your product strategy does not come first. You need to know where you are going before you can figure out how to get there. Putting features into a release without knowing how they will enhance your product is like getting into your car and driving until you run out of gas, hoping you end up somewhere good along the way. That's why the most essential step to managing a black hole backlog takes place before you touch it. Start by reaffirming your strategy for the product as well as clearly defining the goals that will achieve this strategy.

Your goals do not need to be complicated. Many of the Product Managers I speak with believe that thinking about product goals is a luxury they do not have time for. "Our company is pushing forward too quickly," they say. "No one has time to focus on strategy right now." Then they tell me what they must achieve within the next 12-18 months: "We want to bring in 1,000 new customers this year." "We want to grow our existing customers' use of our product by 20 percent." So, these Product Managers are setting goals for their products. And those goals align to high-level product strategies. They just do not realize it.

Score your features

Once you have defined (or revisited) your product goals, the next step is relatively easy. You need to rank each feature in your product backlog based on its quantitative business value. And you can achieve this by using your product's core metrics to build a feature scorecard. Use your scorecard to quantify the value of features against the three to five metrics that matter most to your business. Then, rank all features within your product backlog using this scorecard. Your scoring system can be very simple. Just take each of your product's goals and rank each feature against them on a scale of one to 10.
Of course, you can create much more complex scoring systems if you choose to, but I am a fan of simplicity and I recommend keeping it simple to start. For example: Does this feature nail your goals to bring in new customers and grow existing customer use, with low effort required to implement it? Awesome: prioritize it. Does that other feature fail to drive more business or help existing customers? Great: you just framed your conversation for explaining to the feature's requestor why it will not be prioritized.

Organize your backlog

Your new feature scorecard gives you a system to prioritize your backlog. And that means your backlog does not have to be so dark and scary. It empowers you to quickly comb through the list and find the new features that will have the biggest impact on your product. The result? You can say "yea" or "nay" to each feature more quickly than you might think possible.

Many Product Managers feel torn between daily execution and ongoing backlog management. But the truth is that you can't separate the two. The solution to a black hole product backlog is not to stop encouraging submission of new features. Managing your backlog, and the features in it, should be an ongoing process. It matters just as much as release and strategy planning. A minimal example of the scorecard idea follows.
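Here is one way the simple 1-10 scorecard could look in Python. The goals, features, scores, and the effort adjustment are invented for illustration; substitute the three to five metrics that actually matter to your business.

# Each feature is scored 1-10 against each goal, as described above; effort
# is subtracted so that cheap, high-impact features rank first.
features = {
    "SSO login":       {"new_customers": 8, "grow_usage": 6, "effort": 3},
    "Dark mode":       {"new_customers": 2, "grow_usage": 4, "effort": 2},
    "Bulk CSV import": {"new_customers": 6, "grow_usage": 9, "effort": 5},
}

def score(m):
    return m["new_customers"] + m["grow_usage"] - m["effort"]

for name in sorted(features, key=lambda n: score(features[n]), reverse=True):
    print(f"{name}: {score(features[name])}")

Subtracting effort is one simple way to encode "low effort required to implement it"; a weighted sum works just as well if some goals matter more than others.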
https://www.aha.io/blog/3-ways-product-managers-survive-a-black-hole-backlog
PROBLEM TO BE SOLVED: To provide a silicone compound capable of efficiently dewatering textile products in their dewatering steps to significantly cut their drying time, and to provide a dewatering method using the silicone compound.

SOLUTION: The specific silicone compound is represented by formula (1) (wherein R¹ is a monovalent hydrocarbon group; R² is H or a monovalent hydrocarbon group; X is a 2-4C oxyalkylene group; Y is a group selected from a monovalent hydrocarbon group, an acyl group and a hydrogen atom; a is a positive number of 1-100; b is 0 or a positive number of 1-200; c is 0 or a positive number of 1-10; and d is a positive number of 2-100). The dewatering method comprises carrying out a washing step and/or a rinsing step in the presence of the silicone compound, followed by a dewatering step.

COPYRIGHT: (C)2005,JPO&NCIPI
Silk Road: Lanzhou to Dunhuang
March 30th 2013

The Silk Road has always fascinated me. A few years back I attended the Smithsonian's Folklife Festival on the Washington Mall, which featured the Silk Road cultures (mostly food, dance, and storytelling) for each country along the Silk Road from Turkey to China. It was inspirational! It is difficult to date when long distance trade between the early civilizations of Mesopotamia, India and China ... read more

For centuries China stood as a leading civilization, outpacing the rest of the world in the arts and sciences, but in the 19th and early 20th centuries, the country was beset by civil unrest, major famines, military defeats, and foreign occupation. A... read more

4th May 2013, Dancing Dave (David Hooper): DUNHUANG. Magic shot, this one. I was away camping when you did this part of your trip so only shared it in patches, so I am glad to take time to enjoy it now. Dunhuang and the Mogao Grottoes have always been on our wish list. I've been to 4 of the top 5 grotto sites in China and Mogao is said to be amazing. Tell me... were you able to take photos in the grottoes... or, if forbidden, did others do so? At Maiji Shan in Gansu it was forbidden, but I noticed the Chinese tourists did so anyway without penalty. I was paranoid my camera would be confiscated so only took pics surreptitiously!

14th April 2015, The Travel Camel (Shane Dallas): Amazing. What an incredible looking place with those dramatic dunes as a backdrop!

15th April 2015, Home and Away (Bob Carlsen): That's where we rode the Bactrian camels... I was hoping to inspire you, as I have yet to see you riding a camel... and Bactrian camels are the easiest to ride.
https://www.travelblog.org/Photos/7448203
Sir Arthur Conan Doyle, the father of the great Sherlock Holmes, was born in Edinburgh, Scotland in 1859, the third of 10 children in a well-off Irish-Catholic family. Even though his family was well-reputed within the small Irish community there, his father was an alcoholic, which made for a turbulent family life.

Though Sir Arthur Conan Doyle's name is recognized the world over, for decades the man himself has been overshadowed by his better understood creation, Sherlock Holmes, who has become one of literature's most enduring characters. Based on thousands of previously unavailable documents, Andrew Lycett draws a captivating picture of the man behind the detective.

Welcome to the official site of Sir Arthur Conan Doyle, creator of the most famous detective of all time. Over 125 years after his creation, Sherlock Holmes remains the most popular fictional detective in history.

A Panoply of Visions: Female Archetypes in the Life and Selected Early Fiction of Arthur Conan Doyle (Order No. 9232822). Cooke, M. L. (2010). Fear of and Fascination with the Foreign in Arthur Conan Doyle's Sherlock Holmes Adventures (Order No. 1487698).

The Conan Doyle Estate Ltd consists of eight people, all but one of whom are the beneficiaries of the will of Dame Jean Conan Doyle, the youngest child of Sir Arthur Conan Doyle. Most of us are relations of Sir Arthur, by marriage or blood.

About Sir Arthur Conan Doyle and his early life: the world-class writer was born on 22 May 1859, as Arthur Ignatius Conan Doyle, in Edinburgh, Scotland. Born into an affluent family, he was introduced to the beauty of literature early on in life.

Sir Arthur Conan Doyle popularised the detective novel with his Sherlock Holmes stories. (Photograph: Popperfoto.) Sir Arthur Conan Doyle, who died yesterday, will be for ever associated with the ...

Sir Arthur Conan Doyle was born on May 22, 1859, in Edinburgh. He studied medicine at the University of Edinburgh and began to write stories while he was a student. Over his life he produced more than 30 books, 150 short stories, poems, plays, and essays across a wide range of genres.

The life of Sir Arthur Conan Doyle: Conan Doyle was born in 1859 and died in 1930, and what he did with those years was truly amazing. Here you can find links to articles about his belief in spiritualism, real-life mystery cases solved by Conan Doyle, his medical career and much more.

Sherlock Holmes author Sir Arthur Conan Doyle was born in 1859 and died in 1930. A fascinating series of real-life murder cases is believed to have inspired Sir Arthur Conan Doyle to write.

A comprehensive biography of Scottish author Sir Arthur Conan Doyle explores his childhood, family life, medical practice, authorship of nonfiction works, inspiration for Sherlock Holmes, social acquaintances, marriages, interest in spiritualism, and much more.

Sir Arthur Ignatius Conan Doyle KStJ DL (22 May 1859 – 7 July 1930) was a British writer best known for his detective fiction featuring the character Sherlock Holmes. Originally a physician, in 1887 he published A Study in Scarlet, the first of four novels about Holmes and Dr. Watson.

Arthur Conan Doyle, in full Sir Arthur Ignatius Conan Doyle (born May 22, 1859, Edinburgh, Scotland; died July 7, 1930, Crowborough, Sussex, England), was a Scottish writer best known for his creation of the detective Sherlock Holmes, one of the most vivid and enduring characters in English fiction.
The book I had brought with me was The Life of Sir Arthur Conan Doyle, the Edgar Award-winning biography by the American detective writer John Dickson Carr, first published in 1949. Toward the end of the volume, almost as an aside, Carr writes of his subject, "In 1912 he set out to solve a murder mystery and set free an innocent man."

As the first biographer to gain access to Arthur Conan Doyle's newly released personal archive, Andrew Lycett draws a captivating picture of the complex man who created the brilliant, egotistical, and scientifically minded Sherlock Holmes.

Sir Arthur Conan Doyle, born on 22 May 1859 in Edinburgh, Scotland, is best known as the man who created Sherlock Holmes. Consequently, he provided a platform for the production of several movies and TV series on the iconic fictional detective.

Conan Doyle's personal investigation of that case is the subject of a spate of nonfiction books and inspired Julian Barnes' acclaimed 2005 novel, Arthur and George.

Sir Arthur Conan Doyle penned 56 short stories and four novels featuring detective Sherlock Holmes. Few literary characters have staying power that can match Sherlock Holmes.

The Life and Death of Sherlock Holmes review: a history of Holmes appreciation. From the Guardian archive: Sir Arthur Conan Doyle changes his mind about Irish home rule. It's time for ...

Sir Arthur Conan Doyle was born on May 22, 1859, and even though it was not known to his parents, on that day one of the greatest writers of his time was born. Doyle was born in Edinburgh, Scotland, to Mary and Charles Doyle.

The Adventures of Sherlock Holmes is a collection of twelve stories by Arthur Conan Doyle, featuring his famous detective. They were originally published in The Strand Magazine from July 1891 to June 1892.

Author Arthur Conan Doyle wrote 60 mystery stories featuring the wildly popular detective character Sherlock Holmes and his loyal assistant Watson. On May 22, 1859, Arthur Conan Doyle was born in Edinburgh, Scotland.
http://viessayanyr.alisher.info/the-life-and-times-of-sir-arthur-conan-doyle.html
- In a small bowl, mix seasonings. In a 6-qt. slow cooker, combine vegetables, broth and 2 teaspoons seasoning mixture. Rub remaining seasoning mixture over roast; place over vegetables. Cook, covered, on low 3-1/2 to 4-1/2 hours or until meat and vegetables are tender (a thermometer inserted in roast should read at least 145°).
- Remove roast from slow cooker; tent with foil. Let stand 15 minutes before slicing. Serve with vegetables.

Nutrition Facts: 4 ounces cooked pork with 1/2 cup vegetables: 261 calories, 7g fat (2g saturated fat), 68mg cholesterol, 523mg sodium, 21g carbohydrate (3g sugars, 4g fiber), 29g protein. Diabetic Exchanges: 4 lean meat, 1-1/2 starch.

Reviews
- Oct 17, 2017: Great recipe. Loved the spice combination. Doubled the amount of squash and carrots and added sriracha for additional spice.
https://www.tasteofhome.com/recipes/slow-cooker-curry-pork/
A battery management system requires precise current measurements in order to derive State of Charge (SOC) and other important battery state information. To date, commercial shunt measurement systems are limited in their accuracy by two main factors. First, accuracy is affected by changes in temperature caused either by the environment or by the self-heating of the shunt when high currents are flowing. Second, historically, due to limitations imposed by offset error and noise, lower-resistance shunts could not be used to meter small currents. Instead, a high-resistance shunt must be used. Such shunts dissipate considerable heat and must be physically large.

Sendyne's new SFP101 current, voltage and temperature measurement IC and board addresses these issues. The SFP101 is designed to achieve a maximum voltage offset error of less than ±150 nanovolts. It also provides user-definable automatic compensation for the temperature dependence of the shunt's resistance from -40° C to +125° C, and is programmable to accommodate shunts with an output voltage from ±10 mV to ±300 mV. According to Sendyne, this means that the SFP101 can work with a shunt of essentially any resistance made from any material. For example, a copper busbar can have as large as a 35% error over temperature, making it unsuitable for use in exact current measurement. Due to the proprietary temperature compensation feature of the SFP101, that error is typically reduced to ±0.1%.

Using a 25 micro-ohm or 100 micro-ohm shunt, the SFP101 typically achieves ±0.05% accuracy of current measurements. A 25 micro-ohm shunt, such as those produced by Vishay, can now be used for high-power applications, handling continuous currents of 300 A and peak currents of 2000 A while resolving currents as small as 250 μA.

The SFP101 also provides on-board calibration for both current and voltage, storing calibration values and applying them internally. It offers separate charge, discharge, and total Coulomb counters, and measures multiple temperature points to ±1° C.

Correction: A previous version of this post contained an image of a battery-pack system display made by Saint-Gobain Performance Plastics. The image is not directly related to this news of Sendyne's products and it has been removed to avoid further confusion.
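As a rough illustration of why temperature compensation matters for a copper-path shunt, the Python sketch below applies a linear temperature-coefficient model. The SFP101's actual correction is proprietary and user-programmable; the values here are illustrative only (the 25 micro-ohm resistance from the article, and copper's approximate 0.39%/°C temperature coefficient).

R0 = 25e-6      # shunt resistance at the reference temperature, ohms
ALPHA = 0.0039  # approximate temperature coefficient of copper, per degree C
T0 = 20.0       # reference temperature, degrees C

def shunt_current(v_shunt, temp_c):
    """Current from the measured shunt voltage, with the shunt's
    resistance corrected for temperature drift."""
    r_t = R0 * (1 + ALPHA * (temp_c - T0))
    return v_shunt / r_t

# 7.5 mV across the shunt corresponds to 300 A at 20 C, but to roughly
# 243 A once the shunt has self-heated to 80 C.
print(shunt_current(7.5e-3, 20.0))
print(shunt_current(7.5e-3, 80.0))

Without the correction (i.e., always dividing by R0), the 80 °C reading would be overstated by about 23%, which is the class of error the article attributes to uncompensated copper busbars.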
https://chargedevs.com/newswire/sendynes-new-sfp101-precisely-measures-current-using-shunt-of-any-material/
According to the latest forecast, parts of northwest Australia, the Northern Territory and Queensland could see temperatures rise this week as heatwaves hit the country, the report said. Residents in areas where heat waves have been reported should stay updated with weather reports, including heat wave advisories. However, residents can expect cooler weather starting Friday. (Photo: William West/AFP via Getty Images)

Australia heat wave

Heat waves can occur during breaks in the monsoon, which allow dry, uncomfortable heat to build. A weather report published in the Guardian on December 6 showed parts of Australia could expect warmer temperatures as heatwaves continue to unfold. According to the Guardian report, meteorologist Domensino explained that warm air formed in the northwest and continued to move east, which could raise temperatures by 5C to 6C. Parts of the Northern Territory (NT) and Queensland can expect a possible heat wave, which could turn severe or extreme depending on conditions, the report added. Residents of affected areas should seek cooler places to mitigate the effects of heat waves. The forecast said parts of northern WA could record 47C this weekend, while parts of the Northern Territory can expect 44C or 45C.

Read more: The UN and the Red Cross warn that heat waves could leave areas uninhabitable within decades

On Sunday, residents of western Queensland could feel temperatures of up to 40C.

Heat waves

Many residents of northern Australia can expect uncomfortable heat, as prolonged exposure to severe or extreme temperatures causes health concerns including heat stroke and fatigue. Monitoring the weather for the impact of heat waves is the best way to help you and your family prepare. As the Christmas season approaches, heatwaves in Australia may affect outdoor activities. Here are some important reminders during heat wave events:

Check the weather. Severe heat waves can affect outdoor activities, including outdoor exercise. If you do plan outdoor activities, it is best to schedule them for the cooler parts of the day or for when cooler weather finally returns. Prolonged exposure to hot weather carries heat-related health risks.

Stay hydrated at all times. Heat waves can also cause fatigue. It is important to stay hydrated at all times during heat wave events.

Wear comfortable clothes. If you are staying indoors, it is a good idea to wear comfortable clothing to cope with hot temperatures.

Go to cooler places. You can travel or move to cooler areas to escape the heat. If your home has an air conditioning system, it can help mitigate the impact of heat waves. Regularly checking fans and air conditioners is the best way to keep your home's cooling equipment ready for a heat wave.

Turn off unnecessary devices. Televisions and other gadgets generate heat, which adds to the warmth indoors. If a device is not needed, you can turn it off to save electricity.

Medical care. Extreme heat waves can be dangerous for the elderly. If you find someone suffering from heatstroke, it is best to take them to the nearest hospital.

Read more: Scientists urge conservation and study of forests to combat climate change

Related article: More than 2 billion children could be exposed to extreme heat waves by 2050, UNICEF report reveals

For more stories like this, don't forget to follow Natural World News.
https://bottonews.com/temperatures-are-expected-to-rise-in-northern-australia-due-to-heatwaves-bot-to-news/
How to draw up the plan of your house yourself?

In the SQL Server Management Studio tool, click on Database Query. You can also open an existing query and display its estimated execution plan by clicking Open a File in the toolbar and selecting an existing query.

How to make a simple plan? First, draw the walls to create each room, that is, the kitchen, the living room, the bathroom and the bedroom. Import diagrams and outlines from the cadastre to build your plan automatically. You can also furnish a living room or choose a palette to paint your walls.

How to create a plan? How to make a plan in 5 steps. Identify and design the house plan yourself in 5 stages:
- Step 1: Draw up the plan.
- Step 2: Collect all the constraints.
- Step 3: Design your home.
- Step 4: Draw the facade.
- Step 5: Make a clean copy of the plan.

How to do your own house design? Start by drawing a plan for each level (basement, ground floor, upper floor) showing the exterior walls, doors, patio doors and windows. Then separate each room by drawing partitions and interior doors. Determine the location of the stairs, the surface areas, etc.

What is a layout plan? In retail, a layout plan (planogram) is a document that represents the arrangement of products on a shelf, with the exact size of each space allocated on the shelf.

What are the different types of locations? Three geometric types: a. a point: a postal address; b. a line: a road; c. an area: a ZNIEFF (a zone of faunistic and floristic interest). (Screenshots extracted from IGN's Géoportail.)

What is the siting plan of the house? The siting plan is a drawing that shows the exact position of the future building on the plot, using reference markers.

What is the site plan for? Before granting building permits, municipalities generally require a siting plan. A surveyor then draws up a plan showing the position of the proposed building in relation to the property boundaries.

Who makes the plans? As part of his work, the architect produces a variety of documents (sketches, drawings, perspectives, etc.) that accompany clients in their thinking and serve as the working basis for the contractor in charge of the project.

Who draws house plans? A draftsman or an architect has the skills to design a good house plan. Above all, a house plan is a document that contains information vital to the construction project.

Who can draw up my site plan? The architect is the professional qualified to produce the site plan for your project or extension. In addition, all parts of the building permit file must be signed by a qualified architect.

How to build a plan? To develop a detailed plan, the best way is to list your ideas and documents in a simple outline (one entry per main subject). There is no need at this stage to worry about organizing things in the right order. First, be content to gather the material.

How to plan the work? The detailed plan allows you to organize your thoughts and to answer a question (such as an essay problem). It is about structuring your development before the final text. The detailed plan allows you to lay out (and document) all of your work quickly, without drafting full prose.

How to write the text? Your plan must clearly set out the content of the introduction, the development (main and secondary ideas) and the conclusion of the text you are about to write. To write the text, see the "Reading and writing" section of this guide.
What is the best free 3D drawing software? Blender: the best free 3D software in the world. The software offers a vast collection of 3D modeling and rendering tools.

What is the best 3D drawing software? Materialise or MakeXYZ. Blender is well regarded as free, open-source 3D modeling software. Google SketchUp is a program dedicated to 3D modeling, known for its free version and its capabilities.

What do you call the person who draws house plans? A draftsman (building designer) is responsible for drawing buildings of all kinds. He is the builder's right-hand man: he conceives the work as a whole, then puts it on paper.

Where can you design a house online? Homebuilder.net allows you to lay out your home using free online software. As on the Kozikaza site, you first draw your 2D plan.

Who draws the plans? Architectural design: freedom and serenity. On your instructions, the architect draws your plans, and you have complete freedom to choose materials and equipment.

Who draws the plans for the permit application? To draw up the construction plans for a building permit, you can call on a draftsman, a design office or an architect. As soon as the floor area or footprint of your building exceeds 150 m², you must use an architect.

How to plan your house for free? SketchUp. SketchUp is software that allows you to create your own house design for free. But above all it is 3D modeling software. According to its website, it is "very user-friendly".

What is a house plan? House plans allow the presentation of a property by means of drawings and may include the architectural design, the landscaping and the interior design.

What is a floor plan? The floor plan is a basic architectural drawing. It is a top-down view that represents the arrangement of spaces in the building, like a map, for one floor of the building.

What is a house plan? A house plan is a simple drawing that reflects the layout of the rooms and is an excellent starting point. The builder, however, needs a complete plan including technical details that you will not find on most house plans.

How to make a house plan without an architect? If you plan to build a house without hiring an architect, you must still obtain a building permit. This document protects you in the event of a dispute. The house plan is one of the documents required when applying for a building permit.

How to draw a plan to enlarge the house? Many elements must be taken into account in a house extension plan. First of all, you must consider the plot and its characteristics: area, slope, risks, alignment, etc. Second, you have to consider how all the rooms will communicate with each other.

What surface area can we build without an architect? Less than 150 m². You do not need to call on an architect when you build anything other than an agricultural building, such as a single-family house, whose floor area is less than or equal to 150 m².

What is the site plan (plan de masse)? The site plan is an overall drawing, produced by the project designer, that gives a global view of the work. It contains a great deal of information. It is an annex to the permit file that shows the area of the house; it describes the plot to be built on and provides information on the ground conditions.

What is the status of the site plan?
Landscaping is a document that must be provided when applying for a building permit or prior work permit. … Carried out in accordance with regulatory requirements, the main program allows the project management to verify the quality of the work from an urban point of view. What is a website plan? The entire floor plan and website structure represent an image of your work or modify your work. They make it possible to identify and understand the project when looking for a building permit (PC) file or a preliminary mission notice (DP). How to get a system on the ground? For this, you can call the researcher or contact your cadastral plan at www.cadastre.gouv.fr. The cadastre system is not, however, an act of opposition.
http://dailyhousedesign.com/how-to-make-yourself-the-plan-of-his-house/
To decontaminate the washing machine afterwards, simply run a hot wash with a cup of bleach. For shoes that are machine washable, repeat the same steps as for clothing. For shoes that are not machine washable, apply a mixture of hot water and laundry soap with a sponge while wearing gloves. Discard the sponge and allow the shoes to dry. Can poison ivy spread on bed sheets? Myth: poison ivy can be transmitted from person to person. Fact: poison ivy can't be caught from other people. However, the oils can stay on clothes, gardening gloves, equipment, tools, shoes, pets, and other items, and touching items carrying the oils can produce the same skin rash as touching the poison ivy plant directly. What happens if poison ivy touches your clothes? Unwashed clothing, shoes, and other items that are contaminated with urushiol can cause allergic reactions for a year or longer. The only way to get rid of the toxic oil is a thorough washing with detergent and water. Can poison ivy survive the washing machine? Set your washer to the hottest setting, the largest load setting, and the longest time setting. This sounds wasteful, but besides dry cleaning it is the most effective way to remove poison ivy oil from clothing. Be sure to use a full scoop of detergent, and don't overfill the washing machine. Can poison ivy live on clothes? Just like animal fur, clothing fibers can transfer poison ivy oils. If you don't wash an article of clothing with soap and water after wearing it, you can get a poison ivy rash from it again; the same is true of contact with other people's clothing that carries the oils. Should I wash my sheets if I have poison ivy? Wash the affected items separately with ordinary laundry detergent at the highest recommended water temperature, for the longest cycle, and, if possible, on the largest load setting. Washing the items separately prevents the oil from spreading to other garments. What cures poison ivy fast? Clean the area with soap and water for at least 10 minutes. Take a cool bath. Apply calamine or another anti-itch lotion three to four times a day to relieve itching, and soothe inflamed areas with oatmeal products or 1 percent hydrocortisone cream. Is poison ivy contagious after a shower? No. Perspiration won't spread the rash once the resin (urushiol) has been washed off, and the idea that hot showers spread poison ivy is likewise a myth. Does toothpaste work on poison ivy? Just one ounce diluted in one quart of water can help dry up the poison ivy and speed the recovery process along. For a quick fix, rub a little toothpaste onto the rash to stop the itching. What is the liquid that comes out of poison ivy blisters? Poison ivy, poison oak, and poison sumac plants contain a compound called urushiol, a light, colorless oil found in the fruit, leaves, stem, roots, and sap of the plant. Why does poison ivy keep popping up? Poison ivy rash often appears in a straight line because of the way the plant brushes against your skin. But if you develop a rash after touching a piece of clothing or pet fur that has urushiol on it, the rash may be more spread out. You can also transfer the oil to other parts of your body with your fingers. How long does poison ivy last on surfaces? Specimens of urushiol several centuries old have been found to cause dermatitis in sensitive people; 1 to 5 years is normal for urushiol to stay active on any surface, including on dead plants. The name urushiol is derived from urushi, the Japanese name for lacquer.
What will neutralize urushiol? The best treatment for exposure to urushiol is rubbing alcohol (in a pinch, vodka or gin works — but only rubbed on, not drunk), a solvent that neutralizes the urushiol. If used within four hours of exposure, it will leach urushiol out of the skin. Will rain wash away urushiol? Fact: the oil does not travel through rain. It can be present, however, in lake and river water near where the plants grow, or where poison ivy or oak leaves, vines or roots trail into the water. Urushiol can easily be retained on rain gear exposed to the plants. Is sunlight good for poison ivy? The rash usually resolves on its own within a few days, although the condition can recur. In the meantime, limit sun exposure and wear sun-protective clothing and sunscreen. An over-the-counter anti-itch cream, such as hydrocortisone cream, might help ease discomfort. Is hydrogen peroxide good for poison ivy? Put 3% hydrogen peroxide in a spray bottle, spray the affected areas and allow them to air dry. It helps to treat symptoms as well as to dry out the rash.
https://whoatwherewhy.com/how-do-you-remove-poison-ivy-from-touched-clothes/
This situation will occur if Inventory has been enabled for your organization. With Inventory enabled, all sales and purchase transactions (Invoices and Bills with inventory items) pass through an asset account called Inventory Asset. In the screenshot below, you can see an Invoice transaction with a Cost of Goods Sold account. This is done because, when you purchase goods, they are considered your asset until you sell them; hence those goods are held under Inventory Asset. Only when you raise an invoice and sell them to a customer is their cost treated as an expense. So, when you purchase items from your vendor, you will see in the Journal Report that the item cost is debited to the Inventory Asset account rather than to an expense account. Similarly, when you sell items that you had initially bought, you will see in the Journal Report that the cost is moved out of Inventory Asset and into Cost of Goods Sold.
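To make the flow concrete, here is an illustrative pair of journal entries (the item, quantities and amounts are made up for this example): suppose you record a Bill for 10 units of an item at $5 each, and later raise an Invoice selling 4 of those units at $8 each.

Bill (purchase of 10 units at $5):
- Debit Inventory Asset $50
- Credit Accounts Payable $50

Invoice (sale of 4 units at $8):
- Debit Accounts Receivable $32; Credit Sales $32
- Debit Cost of Goods Sold $20 (4 units × $5); Credit Inventory Asset $20

The $20 of cost leaves Inventory Asset and becomes an expense only at the point of sale, which is exactly what the Journal Report reflects.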
https://www.zoho.com/us/books/kb/reports/net-journal-report.html
Christmas in a glass! Ingredients - 2 unwaxed oranges - 1 lemon, peel only - 150g caster sugar - 5 cloves, plus extra for garnish - 5 cardamom pods, bruised - 1 cinnamon stick - A pinch of freshly grated nutmeg - 2 bottles of fruity, unoaked red wine Serves 12 Method - Peel and juice 1 orange, and add to a large saucepan along with the lemon peel, sugar and spices. Add enough wine to just cover the sugar, and heat gently until the sugar has dissolved, stirring occasionally. Bring to the boil and cook for 5 – 8 minutes until you have a thick syrup. - Turn the heat down, and pour the rest of the wine into the saucepan. Gently heat through and serve with segments of the remaining orange as a garnish, studded with the extra cloves if you like.
https://www.rangemaster.co.uk/cooking/recipes/christmas/mulled-wine
This property benefits from a private garden, with a large, south facing terrace and balcony. On the ground floor, there is one double bedroom and one twin bedroom, a shower room with WC and washbasin, a bathroom, also with a WC and washbasin, and a games room, with a wood burner, pool table, dart board, table tennis and two large patio doors opening onto the patio area. Upstairs, there is a twin bedroom (an additional 3rd bed for a child is possible), with an adjoining bathroom, which has a bidet, bath and washbasin. There is a separate WC and washbasin next to the bathroom. A double bedroom, with its own en-suite shower room, is also located on this floor. There is an open plan kitchen, living and dining room. The dining room has a small balcony offering the fantastic sea view, including the Islot de St Michel (as pictured below). A TV (UK satellite) and DVD player are located in the living room, along with a wood burner. The kitchen and living rooms have two large patio doors, which open onto the large south facing balcony, which has its own patio furniture. The well equipped kitchen has a fridge freezer, 5 plate gas hob, large electric oven, dishwasher, microwave, kettle and toaster. 4* Les Fours a Chaux, Dinan - sleeps 6 (FREE Wi-Fi) This newly renovated property has two double bedrooms upstairs (the main with a king size bed), both with their own en-suite shower rooms. A twin bedroom is also located on this floor, along with the family bathroom. On the ground floor there is a well equipped kitchen, a laundry room with washing machine, hand basin and WC, a large lounge and a dining room with a log burner. The dining room table top lifts up to become a snooker / pool table. There is a large plasma TV which has access to both French and English TV, including Sky, and a DVD / Blu-ray player (please note that the English TV is provided via the internet, so the quality of the reception cannot always be guaranteed). A CD player and iPod docking station can also be found in the dining room. Outside, there is a large gravel courtyard enclosed by the stone wall. The former lime kilns have been converted into a large sun room / conservatory. There is also a shower room with hand basin and WC in this building. There are several places to relax in the large gardens, including the sun terrace which has views over the ramparts and the viaduct. There is a large lawn above the former lime kilns and a hillside walk amongst the mature plants and shrubs.
http://www.westfranceholidayrentals.co.uk/index.asp?pageid=162026
BY David T. Hartgen, Adrian Moore, Ravi K. Karanam, M. Gregory Fields – We often hear the nation's infrastructure is crumbling, but state highway conditions are the best they've been in 19 years, according to Reason Foundation's 19th Annual Highway Report. Unfortunately, the recession is partly responsible for the improvement in road conditions: people are driving less, which has slowed pavement deterioration and reduced traffic congestion and fatalities. The annual Reason Foundation study measures the condition and cost-effectiveness of state-owned roads in 11 categories, including deficient bridges, urban traffic congestion, fatality rates, pavement condition on urban and rural Interstates and on major rural roads, and the number of unsafe narrow rural lanes. National performance in all of those key areas improved in 2008, the most recent year with complete data available. Drivers in California, Minnesota, Maryland, Michigan and Connecticut are stuck in the worst traffic. Over 65 percent of all urban Interstates are congested in each of those five states. But nationally, the percentage of urban Interstates that are congested fell below 50 percent for the first time since 2000, when congestion standards were revised. Motorists in California and Hawaii have to look out for the most potholes on urban Interstates. In those two states, 25 percent of urban Interstate pavement is in poor condition. Alaska and Rhode Island have the bumpiest rural pavement, each with about 10 percent in poor condition. However, nationally, pavement on both urban Interstates and rural primary roads is the smoothest it has been since 1993. Rhode Island has the most troubled bridges in the country, with over 53 percent of bridges deficient. For comparison, just 10 percent of top-ranked Nevada's bridges are rated deficient. Across the country, over 141,000 (23.7 percent) of America's bridges were structurally deficient or functionally obsolete in 2008, the lowest percentage since 1984. With the recession reducing driving, and engineering improving road design and car safety features, traffic fatalities have steadily fallen to the lowest levels since the 1960s. Massachusetts has the safest roads with just 0.67 fatalities per 100 million miles driven. Montana and Louisiana have the highest fatality rates, at 2.12 and 2.02 fatalities per 100 million miles driven. Overall, North Dakota, Montana and Kansas have the most cost-effective state highway systems. Rhode Island, Alaska, California, Hawaii and New York have the least cost-effective roads. The full Annual Highway Report rankings are: 1. North Dakota 2. Montana 3. Kansas 4. New Mexico 5. Nebraska 6. South Carolina 7. Wyoming 8. Missouri 9. Georgia 10. Oregon 11. Delaware 12. South Dakota 13. Texas 14. Kentucky 15. Nevada 16. Mississippi 17. Idaho 18. Virginia 19. Tennessee 20. Alabama 21. North Carolina 22. Utah 23. Indiana 24. Ohio 25. Minnesota 26. Arizona 27. New Hampshire 28. Wisconsin 29. Arkansas 30. West Virginia 31. Iowa 32. Maine 33. Washington 34. Colorado 35. Michigan 36. Louisiana 37. Oklahoma 38. Pennsylvania 39. Florida 40. Illinois 41. Connecticut 42. Vermont 43. Maryland 44. Massachusetts 45. New Jersey 46. New York 47. Hawaii 48. California 49. Alaska 50. Rhode Island Over the last two years New Jersey has moved up from last to 45th in the overall rankings, but still spends dramatically more than every other state. New Jersey spends $1.1 million per mile on state roads.
The second biggest spender, Florida, spends $671,000 per mile and California spends $545,000 per mile. South Carolina had the lowest expenses, spending just $34,000 per mile. California also squanders a massive amount of transportation money that never makes it onto roads, spending $93,464 in administrative costs for every mile of state road. New York ($89,194 per mile), Massachusetts ($71,982), and New Jersey ($62,748) also compare poorly to states like Texas ($6,529) and Virginia ($6,370) that spend dramatically less on administrative costs. "We're seeing several factors combine to produce significant improvement in highway conditions," said David T. Hartgen, author of the report and emeritus professor of transportation studies at the University of North Carolina at Charlotte. "Over the last several years, states invested a lot more money to improve pavement and bridges. Spending increased 8 percent from 2007 to 2008, and per-mile spending on state roads has almost tripled since 1984, so you'd hope and expect to see improved performance. As pavement gets better, roads are widened and bridges get repaired, you'd also expect safety to improve. And the significant reduction in vehicle miles traveled during the recession has also played a role in slowing system decay. But as the states run short of money and deal with large budget deficits, we'll see if this progress can be continued." HAWAII ROAD CONDITIONS DETAILED Hawaii ranks 47th in the nation in state highway performance and cost-effectiveness, falling one spot from last year's report. Hawaii ranks 46th in total highway disbursements, 12th in fatalities, 48th in deficient or functionally obsolete bridges and 36th in urban Interstate congestion. Hawaii's best rankings come in rural Interstate condition (1st), fatality rates (12th), and urban Interstate congestion (36th). Hawaii's lowest rankings are in state-controlled highway miles (50th) and urban Interstate condition (50th). Hawaii's complete results: Performance by Category in 2008 / Rank - State-Controlled Highway Miles: 50 - State Highway Agency Miles: 37 - Total Disbursements: 46 - Capital and Bridge Disbursements: 46 - Maintenance Disbursements: 42 - Administrative Disbursements: 45 - Rural Interstate Condition: 1 - Rural Other Principal Arterial Condition: 48 - Urban Interstate Condition: 50 - Urban Interstate Congestion: 36 - Deficient or Functionally Obsolete Bridges: 48 - Fatality Rates: 12 - Narrow Rural Lanes: 46 The full 19th Annual Highway Report and other states' rankings are here.
https://www.hawaiireporter.com/reasons-19th-annual-highway-report-hawaii-ranks-47th-worst-in-the-nation-in-state-highway-performance-and-cost-effectiveness/
Stubberfield, Jonathan (2018) The health benefits and risks of growing-your-own produce in an urban environment. PhD thesis, University of Nottingham. Abstract The practice of gardening and growing-your-own (GYO) produce in urban areas has been associated with many potential benefits to health from increased fruit and vegetable consumption and exercise, but also with health risks arising from exposure to potentially toxic elements (such as cadmium (Cd) and lead (Pb)) in urban soils. However, the potential health benefits of gardening are currently overlooked by authorities during assessments of contaminated land, which may result in access to urban gardens and allotments being incorrectly restricted or removed because of concerns over the impact on human health. The trade-off between health benefits and risks is investigated in this thesis through: the sampling and analysis of the properties of allotment soils (chapter 2); a comparison of plant uptake models (chapter 3) verified using a pot experiment (chapter 4); and a questionnaire survey investigating the effect of gardeners' routines on benefits and risks (chapter 5). The different areas of study are combined in the creation of a model framework developed to estimate health benefits and risks attributable to urban gardening (chapter 6).
https://eprints.nottingham.ac.uk/49345/
Q: Paraboloid (3D parabola) surface fitting python I am trying to fit this x data: [0.4, 0.165, 0.165, 0.585, 0.585], this y data: [.45, .22, .63, .22, .63], and this z data: [1, 0.99, 0.98, 0.97, 0.96] to a paraboloid. I am using scipy's curve_fit tool. Here is my code: import numpy as np import scipy.optimize as opt doex = [0.4,0.165,0.165,0.585,0.585] doey = [.45, .22, .63, .22, .63] doez = np.array([1, .99, .98,.97,.96]) def paraBolEqn(data,a,b,c,d): if b < .16 or b > .58 or c < .22 or c > .63: return 1e6 else: return ((data[0,:]-b)**2/(a**2)+(data[1,:]-c)**2/(a**2)) data = np.vstack((doex,doey)) zdata = doez opt.curve_fit(paraBolEqn,data,zdata) I am trying to center the paraboloid between .16 and .58 (x axis) and between .22 and .63 (y axis), by returning a large value if b or c falls outside that range. Unfortunately the fit is way off: my popt values are all 1 and my pcov is inf. Any help would be great. Thank you A: Rather than forcing high return values for out-of-range parameters, you need to provide a good initial guess. In addition, the model lacks an offset parameter and the paraboloid has the wrong sign. Change the model to: def paraBolEqn(data,a,b,c,d): x,y = data return -(((x-b)/a)**2+((y-d)/c)**2)+1.0 I fixed the offset at 1.0 because if it were added as a fit parameter the system would be underdetermined (there would be no more data points than fit parameters). Call curve_fit with an initial guess like this: popt,pcov = opt.curve_fit(paraBolEqn,np.vstack((doex,doey)),doez,p0=[1.5,0.4,1.5,0.4]) This yields: [ 1.68293045 0.31074135 2.38822062 0.36205424] and a nice match to the data.
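Pulling the answer together, a self-contained version of the corrected fit might look like this (a sketch assuming NumPy and SciPy are installed; the printout should match the popt values quoted above):

import numpy as np
import scipy.optimize as opt

doex = [0.4, 0.165, 0.165, 0.585, 0.585]
doey = [0.45, 0.22, 0.63, 0.22, 0.63]
doez = np.array([1.0, 0.99, 0.98, 0.97, 0.96])

def paraBolEqn(data, a, b, c, d):
    # Downward-opening paraboloid with its apex value fixed at 1.0;
    # (b, d) is the center, and (a, c) control the curvature along x and y.
    x, y = data
    return -(((x - b) / a) ** 2 + ((y - d) / c) ** 2) + 1.0

popt, pcov = opt.curve_fit(paraBolEqn, np.vstack((doex, doey)), doez,
                           p0=[1.5, 0.4, 1.5, 0.4])
print(popt)  # approximately [1.6829 0.3107 2.3882 0.3621]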
Local Media Association and Local Media Foundation have released a new report detailing lessons we learned from our work with industry collaboratives. We launched our first collaboration in March 2020, and to date, have launched or managed nine collaborations including geography-based and topic-based formats. Why do we believe in collaboration? Because the industry gets stronger when we learn from each other. By focusing our collaborations on the dual goals of producing great journalism and promoting sustainability through business transformation, we know participants gain knowledge they can apply in their companies. Why did we write this report? A few reasons. First, LMA is intensely focused on reinventing business models for news, and we believe collaboration is a big part of business transformation. Second, many news outlets still do not participate in collaboratives — or some media companies have worked on a single project with a similar media outlet, and consider that enough collaboration. In this report, we break down the various types of collaboration and what local media organizations can learn from each approach. It's a menu of collaboration, if you will. The first type of collaboration is around content, when newsrooms identify an underreported topic in their community and agree to publish stories together. A key ingredient for this type of collaboration to be successful is for participating news organizations to break down old competitive habits to work together for the good of the journalism and the audience. Next, we show how collaboratives can go a step further by agreeing on a shared mission to accomplish the group's goals. For collaborations to have a shared mission, they must have buy-in from top company executives to participate in the group. Otherwise, the time and resources spent will not be worth it. We also describe collaborations that work together to build shared knowledge on a specific topic. The key ingredient for this type of collaboration is a strong emphasis on training, sharing of sources, and a goal for reporters to truly deepen their knowledge. Our fourth example explains collaboration through technology partnerships. This could be through a tech startup company or a funder, but the goal is to work together to solve a technology problem facing the industry or collaborative participants. The key ingredient for any collaboration among media organizations and technology companies is that all parties must be completely focused on solving the problem. Next, we share examples of collaborations that are making strides in bringing in new revenue. This is arguably the most successful area of LMA's collaboratives, and something we believe is essential for every collaborative. Collaborations must look at revenue for long-term sustainability, and also explore multiple sources of revenue. Having one funder doesn't guarantee sustainability for the future. Lastly, we share examples of how collaborations have formed a new, standalone organization. While many collaboratives become nonprofits, we believe that for-profit entities can also grow out of collaborations. For collaborations looking to form a new organization, we have found that strategic planning is the key ingredient to chart a path forward. If you have questions or would like to learn more after reading this report, reach out to Penny Riordan at [email protected].
The Center for Cooperative Media at Montclair State University provides valuable resources, research, and community building for news collaborations. We at LMA/LMF consider them valued partners. Check out their website for extensive resources.
https://localmedia.org/2022/06/new-lma-report-what-weve-learned-from-9-journalism-industry-collaboratives/
What can we learn from the BRIC countries? Dr. Fatih Birol, International Energy Agency An overview of the 2010 WEO Reference Scenario to 2030 is provided, discussing: changes in primary energy demand; upstream oil and gas capital expenditures; oil production, issues and prices; and natural gas supply, transportation, prices and market trends. The 450 Scenario is also described in respect to how demand by fuel type needs to change and the abatement of CO2 emissions; along with some key facts relating to the EU, China and the Copenhagen Accord. Some key findings include: the financial crisis has halted the rise in global energy use, but its long-term upward path will resume, based on current policies; oil investment has fallen sharply, posing questions on medium term supply; a sizable glut of natural gas is looming; a 450 path will require massive investments but would bring substantial benefits; natural gas can play a key role as a bridge to a cleaner energy future; and the Copenhagen Accord takes significant steps forward on international climate policy but is not sufficient to limit temperature rise to 2 degrees. Jeffrey Currie, Goldman Sachs International There has been a commodity imbalance with oil, which has historically seen global output lower than global production capacity. However, this is starting to change due to a range of interrelated issues, such as commodities, prices, resource realignments between Emerging Markets and Developed Markets (EM, DM); as well as increasing macroeconomic correlations. These changing relationships are explored in relation to what is happening within oil markets and oil pricing, considering the changing relationships between DM and EM. It is suggested that as DM recovers following the financial crisis, it will push the oil market back towards its effective production capacity by 2011. In the long term, given that the commodity crisis is a supply problem, more than a demand problem, demand in the DM will have to contract going forward to make room for EM demand increases (due to supply constraints). Jim Watson, Sussex Energy Group China’s recent energy trends are described in terms of primary energy demand, energy intensity and power generation capacity, alongside environmental implications such as acid rain, total and per capita carbon emissions and attitudes to these. It is suggested that: per capita carbon emissions are low, but are rising from the production of goods for western consumers; there is a genuine desire to develop sustainably, but this is hindered by the financial crisis; and that significant progress in low carbon technologies is occurring, alongside improvements within energy efficiency, economic restructuring and innovation.
http://www.biee.org/download-tags/brics/
This Homemade Fire Cider Recipe makes a sweet & spicy tonic that boosts your immune system and stimulates digestion. It also helps you fight off colds and flu, improves circulation and acts as a natural decongestant. Ingredients - 1/3 cup peeled fresh horseradish, diced - 1/2 cup peeled fresh ginger, diced or sliced - 10 whole garlic cloves, peeled - 1/2 cup diced yellow onion - 1 jalapeno, sliced - 2 cinnamon sticks - 2 star anise - 1 teaspoon whole black peppercorns - 1/2 large orange, sliced (with peel) - 1 lemon, sliced or cut into wedges - 2 rosemary sprigs - 5 to 7 thyme sprigs - 1/3 to 1/2 cup of raw honey - 2 cups raw unfiltered apple cider vinegar Instructions - To a large sealable jar, add horseradish, ginger, garlic, onion, jalapeno, cinnamon sticks, star anise, peppercorns, orange, lemon, rosemary and thyme. Gently press the ingredients down. Cover with apple cider vinegar until everything is completely submerged. - Seal the jar (if you're using a metal lid, place a piece of parchment paper between jar and lid to prevent a corrosive reaction with the vinegar). - Store in a cool, dark place, shaking for a few seconds every day or two, for at least 3 weeks and up to 6 weeks. - After 3 weeks, strain through cheesecloth or a fine-mesh sieve. Discard the solids, then stir in the honey. Fire cider should be stored in a sealed container in the refrigerator for up to 1 month.
https://www.joyfulhealthyeats.com/my-favorite-homemade-fire-cider-recipe/print/34816/
Praveen G.B., Technical Lead at PathPartner Technology, presents the "Creating a Computationally Efficient Embedded CNN Face Recognizer" tutorial at the May 2018 Embedded Vision Summit. Face recognition systems have made great progress thanks to the availability of data, deep learning algorithms and better image sensors. Face recognition systems should be tolerant of variations in illumination, pose and occlusion, and should scale to large numbers of users with minimal need for capturing images during registration. Classical machine learning approaches are limited in their scalability. Existing deep learning approaches make use of either "too-deep" networks with increased computational complexity or customized layers that require large model files. In this talk, Praveen explores low-to-high complexity CNN architectures for face recognition and shows that with the right combination of training data and cost functions, you can indeed train a low-complexity CNN architecture (an AlexNet-like model, for example) that achieves reasonably good accuracy compared with more-complex networks. He then explores system-level algorithmic customizations that will enable you to create a robust real-time embedded face recognition system using low-complexity CNN architectures.
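The talk abstract stays at the architecture level, but the final matching stage of such a recognizer is simple enough to sketch. Below is a minimal, illustrative sketch in Python, assuming some CNN (not specified in the talk) has already produced fixed-length embeddings for the enrolled users and for the probe image; the function names, gallery structure and threshold value are all invented for the example:

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(probe_embedding, gallery, threshold=0.5):
    # gallery: dict mapping user id -> embedding captured at registration.
    # Returns the best-matching enrolled identity, or None if below threshold.
    best_id, best_score = None, -1.0
    for user_id, ref_embedding in gallery.items():
        score = cosine_similarity(probe_embedding, ref_embedding)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None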
https://www.edge-ai-vision.com/2018/07/creating-a-computationally-efficient-embedded-cnn-face-recognizer-a-presentation-from-pathpartner-technology/
In a season where a host of players have been lost for the year to injury, the Bills will have one player a step closer to returning to the active roster. Second-year wide receiver James Hardy is expected to practice beginning Wednesday. "We anticipate that James will be ready to go (this) week," head coach Dick Jauron told Buffalobills.com, "and unless something unexpected occurs he will begin practicing." At the beginning of the regular season Hardy was close to being back to full strength after tearing his ACL in last season's Week 14 game against the Jets. His knee was stable and healthy, but what was lacking was stamina. "It was just the consistency where I can perform day in and day out and practice at the speed and caliber that (my teammates) do," said Hardy. "I had to show I could do it for a long period of time." Hardy has spent the first six weeks of the season on the Reserve/PUP list, which prohibits him from practicing in any way with the team during that time. He is permitted to attend practices, which he often did before conducting his own workouts after the team had left the field. Over the past month and a half Hardy has gone from working out on his surgically repaired knee for 20 to 25 minutes to much longer durations consistent with the length of a typical practice. He's also at a point now where he can practice day after day instead of giving his knee time to recover between workouts. "They wanted it to be three or four days in a row without any soreness at all the next day," Hardy explained. Hardy is returning at the earliest possible time allowed under league rules for a player placed on Reserve/PUP, which is at the conclusion of Week 6. Provided he does in fact practice on Wednesday, it will begin a 21-day window in which the coaching staff can evaluate his progress and readiness to return to the active roster. During the 21-day period the team does not have to make a roster move until the conclusion of Week 9 (Nov. 9). At that point the team has to decide either to activate Hardy to the 53-man roster or to place him on injured reserve, effectively ending his season. In all likelihood the Bills coaching staff will have him practice for a couple of weeks to see how his knee responds to the daily rigors of practice before making a roster decision. But at any time during that three-week period, if they feel he's ready, he can be activated. Of course the Bills would then have to move another player off the roster to make room for Hardy. And that could prove to be a difficult decision. Buffalo currently has six receivers on the roster and adding Hardy would make it seven, an unusually high number at that position. At this point that decision is a little ways off, but to say Hardy is anxious to return to the roster would be an understatement. "Now we've got T.O. here and I want to go out there and practice and put everything that he's shown me in the film room onto the field," said Hardy. On Wednesday Hardy should get that chance.
https://www.buffalobills.com/news/hardy-to-practice-wednesday-831131
As part of their effort to catalog all plant life on Earth, Botanical Garden scientists named 81 new species of plants and fungi in 2011. They also established four new genera and two new orders of plants and fungi. Genera and orders are groupings of related species. Working in the field, laboratory, and research collections around the world, Garden scientists found or cataloged new species in a wide variety of familiar plant groups, including South American blueberry relatives and bromeliads; Southeast Asian mushrooms; a Mexican oak; and a Colombian cycad, one of a family of plants often referred to as "living fossils." "This impressive collection of new species from around the world that Garden scientists discovered and described in just one year is a testament of their dedication to one of our central goals—finding and cataloging all of the plant life on Earth," said James Miller, Ph.D., the Garden's Dean and Vice President for Science. "But this also shows how little we know about the plants on Earth and how far we still have to go to get a comprehensive catalog of them." The announcement of the new species discoveries comes little more than a month after another important development concerning the study and conservation of Earth's botanical biodiversity—the agreement by the Garden and three other leading botanical gardens to create the first online catalog of plants by 2020. The project, called the World Flora, will make comprehensive information about as many as 400,000 plant species—including the 81 newly discovered species—available to the international community. All of the new species, genera, and orders were either published by Garden scientists in scientific journals or books in 2011 or had been accepted for publication by the end of the year. In many cases, Garden scientists collaborated with researchers at other institutions. The discoveries highlight developments that are shaping botanists' research in the 21st century not only in the field and in plant collections but also increasingly in the laboratory. In addition, they call attention to the environmental risks that many plant species face. One of the most intriguing examples of these developments involves the species of palms that are used to make the distinctively wide, conical hats that many Vietnamese wear. Because of decades of war and isolation, scientists were unable to conduct significant field research in Vietnam for most of the second half of the 20th century. It was not until 2008 that Garden scientist Andrew Henderson, Ph.D., one of the world's leading palm experts, and his Vietnamese colleagues published a scientific description of the main palm species used in the hats. Based on its physical similarity to species in the genus Licuala, they assigned it to that genus, naming the species Licuala centralis (photo, left). However, analysis of DNA samples that Henderson collected during his field research revealed that the plant's genetic material was not similar to that of other Licuala species. In fact, it and several related species constituted a new genus. Henderson and his collaborator, Christine Bacon of Colorado State University, named the genus Lanonia, from the Vietnamese words for the plants—la non, or "hat palm." As a result of their work, Licuala centralis has been renamed Lanonia centralis, illustrating the way in which laboratory research has become a critical complement to work in the field and in plant collections. 
With its somewhat isolated location between the South China Sea to the east and highlands to the west, Vietnam is a center of palm species that are endemic, meaning they are found only there. That makes the country's palms especially interesting to researchers. However, as in other tropical countries, deforestation is reducing the country's forest cover, potentially threatening endemic species with extinction before they can be discovered and studied. "Vietnam is full of undescribed new species," said Dr. Henderson, who continues to work on identifying and cataloging the palms of Vietnam and the rest of Southeast Asia. "You can drive some places and look out the window and see new species, and the reason for that is because Vietnam was at war for so long. Biology and taxonomy were ignored." In addition to establishing Lanonia as a new genus, last year Dr. Henderson named 19 new species of palms based on extensive research in the Garden's William and Lynda Steere Herbarium and other herbaria in the United States, Central and South America, and Europe. Twelve of the new species are in the genus Geonoma, whose members are small- to medium-sized plants generally found in the understories of tropical forests in Central and South America. In the course of his research, Dr. Henderson examined nearly 5,000 Geonoma herbarium specimens. He scored each specimen on a series of nearly four dozen physical characteristics to discover similarities and differences among them. That allowed him to identify the new species. Garden botanists did not have to travel to distant countries to make significant discoveries in the last year. Of the 21 new species of lichens described by James Lendemer, Ph.D., he and his colleagues found 15 of them in the Great Smoky Mountains National Park in North Carolina and Tennessee, the most visited national park in the United States. That fact demonstrates that even in an area visited by eight to 10 million people a year, much biodiversity remains to be discovered. In fact, in the course of five field trips to the Smokies, Lendemer and his colleagues, including Garden curator Richard Harris, discovered that the Smokies were home to many more lichen species than had previously been known, increasing the number of recorded species there by 60 percent. Lichens are composite organisms consisting of a fungus and an alga or another organism capable of photosynthesis. They grow on a wide range of surfaces, including bare rock and the leaves and bark of trees. Many species are sensitive to pollution and are seen as indicators of environmental health. They also serve many important functions in a healthy ecosystem. "Lichens are critical components of terrestrial ecosystems," said Dr. Lendemer. "They're important in nutrient cycling. Animals and insects eat them and use them for shelter." Some lichen species may even be specifically adapted to grow on certain types of trees, including ones typically found in old-growth forests. Dr. Lendemer and his collaborators found one of the new species, Arthonia kermesina (photo, right), only on large, old spruces at high elevations in the Smokies. In addition, Brendan Hodkinson, Ph.D., who discovered two new orders of lichens—Sarrameanales and Trapeliales—in collaboration with Dr. Lendemer, noted that lichens most likely have importance to human life in ways that remain to be discovered, such as the apparent ability of some species to fix nitrogen, an important characteristic for making soil productive for food crops. 
"Since lichens produce so many different chemical compounds, there's a lot there that could be worked with," Dr. Hodkinson said. "There are definitely a lot of potential human applications that haven't been looked at." The value of lichens for both their known and potential uses makes it important to increase efforts to find and conserve them, according to Dr. Lendemer. One starting point for that work is to discover new species. "Without a name, a species can't be saved," he said. "Giving a species a name inserts it into a dialog, but if you don't describe it, you can't have that dialog." Among other new species discovered by Garden scientists in 2011, Benjamin Torke, Ph.D., and collaborators described five new species of Swartzia—a genus of tropical trees—found only in eastern Brazil. Dr. Torke and his colleagues are working to evaluate the new species on the conservation scale used by the International Union for the Conservation of Nature. Because these species are found in restricted habitat, much of which has been cleared for agriculture or development, Torke believes it is likely that they will be categorized as nearly threatened, if not vulnerable to extinction.In addition, Garden scientists made notable discoveries in several well-known or especially interesting families of plants: Paola Pedraza, Ph.D., and retired Garden scientist James Luteyn, Ph.D., described seven new species from Colombia in the genus Vaccinium, the genus that includes domestic blueberries; Roy Halling, Ph.D., and colleagues added five new species to the genus Phylloporus, a group of Southeast Asian mushrooms; Lawrence Kelly, Ph.D., and collaborators described a new oak species, Quercus delgadoana, found in Mexico; and Dennis Stevenson, Ph.D., and colleagues added Zamia tolimensis to the catalog of cycads, plants that are often called "living fossils" because they existed at the time of the dinosaurs. Dr. Miller notes that these discoveries come at a time when approximately 50,000 square miles of forest are being destroyed worldwide every year, threatening plant biodiversity. "A significant percentage of plant species are in serious decline, and probably a large number of them are species that we haven't discovered yet," he said. "We're working as quickly and as efficiently as we can to catalog these species, but it's a race against time." Stevenson Swanson | EurekAlert! Further information: http://www.nybg.org Further reports about: > Botanical Garden > Earth's magnetic field > Geonoma > Great Basin > Hodkinson > Licuala > Mountains > NYBG > Smokies > environmental risk > flowering plant > food crop > new species > plant life > plant species > tropical forest Colorectal cancer risk factors decrypted 13.07.2018 | Max-Planck-Institut für Stoffwechselforschung Algae Have Land Genes 13.07.2018 | Julius-Maximilians-Universität Würzburg For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... 
https://www.innovations-report.com/html/reports/life-sciences/nybg-scientists-identify-81-plant-fungus-species-196721.html
A study of intimate partner violence among females attending a Teaching Hospital out-patient department Kuruppuarachchi, K.A.L.A.; Wijeratne, L.T.; Weerasinghe, G.D.S.S.K.; Peiris, M.U.P.K.; Williams, S.S. URI: http://repository.kln.ac.lk/handle/123456789/2025 Citation: Sri Lanka Journal of Psychiatry. 2010; 1(2): pp. 60-63 Date: 2010 Abstract: BACKGROUND: Intimate partner violence (IPV) is considered a public health problem with physical and psychological consequences. AIMS: To describe the prevalence of IPV among married females attending the out-patient department of North Colombo Teaching Hospital and their attitudes towards abuse. METHODS: A pre-tested self-administered questionnaire on physical, verbal, sexual and emotional abuse was given to the first 50 consenting married females attending the out-patient department on each day for five consecutive days. Confidentiality of responses was assured and adequate privacy was provided for the questionnaires to be completed. RESULTS: Of the 242 participants, 98 (40.5%) reported some form of abuse by their male partner. The prevalence of abuse reported was physical abuse 19%, verbal abuse 23%, emotional abuse 23% and sexual abuse 7%. About a quarter (26.9%) of those subjected to physical violence sought medical treatment for the injuries, but only two of them divulged the reason for the injury to medical staff. More than three quarters (79%) of those abused had been in the relationship for more than ten years. The majority of the females surveyed believed that violence by the male partner should be tolerated. CONCLUSIONS: IPV is a common problem that is poorly divulged to medical personnel. Attitudes regarding IPV have to be changed in order to reduce abuse significantly.
http://repository.kln.ac.lk/handle/123456789/2025
TECHNICAL FIELD BACKGROUND ART SUMMARY OF THE INVENTION BEST MODE FOR CARRYING OUT THE INVENTION First Embodiment Second Embodiment Third Embodiment Fourth Embodiment INDUSTRIAL APPLICABILITY This invention relates to an image encoding device that encodes image data in the form of a digital video signal by compression, and outputs image compression-encoded data, and to an image decoding device that restores a digital video signal by decoding image compression-encoded data output from an image encoding device. MPEG, ITU-T H.26x and other international standard video encoding methods employ a method that, when encoding each frame of a video signal, compresses it in units of block data (referred to as "macro blocks") combining 16×16 pixel luminance signals and the corresponding 8×8 pixel color difference signals, based on motion search/compensation technology and orthogonal transformation/transformation coefficient quantization technology (see, for example, Patent Document 1). In the case of decoding bit streams as well, processing is carried out in macro block units, and decoded images are ultimately output after all the macro blocks of a single image have been decoded. In general, motion search in image encoding devices is carried out in the proximity of the macro block targeted for encoding. Consequently, the effective search region inevitably becomes small for those macro blocks located on the edges of a picture, and the accuracy of motion compensation prediction unavoidably decreases when encoding macro blocks at such locations as compared with macro blocks at other locations. Thus, the problem of image quality deterioration is known to occur in macro blocks targeted for encoding located along the edges of a picture. Therefore, in the image encoding device disclosed in Patent Document 1 indicated below, the quantization parameters of macro blocks along the edges of a picture are adjusted in order to inhibit deterioration of image quality in those macro blocks. Patent Document 1: Japanese Patent Application Laid-open No. 2000-059779 (FIG. 1) Since conventional image encoding devices are configured in the manner described above, image quality deterioration in macro blocks along the edges of a picture can be prevented; however, there is a problem in that adjusting the quantization parameters of those macro blocks on the picture edges ends up increasing their code quantities in comparison with the code quantities of macro blocks in other areas, which leads to a decrease in compression ratio. The present invention has been made to solve the foregoing problems, and an object of this invention is to provide an image encoding device capable of preventing deterioration of image quality in macro blocks along a picture edge without causing a decrease in compression ratio. In addition, an object of this invention is to provide an image decoding device capable of restoring digital video signals by decoding image compression-encoded data output from an image encoding device like that described above.
The image encoding device according to this invention is provided with: a motion vector derivation unit for deriving one or more motion vectors of a unit region targeted for encoding in a picture targeted for encoding, from motion vectors of neighbouring encoded unit regions and from a motion vector of a unit region located in a previously encoded picture; a motion vector selection unit for, in the case where a unit region having, as a starting point thereof, a pixel location indicated by a motion vector derived by the motion vector derivation unit includes a region outside the picture, excluding that motion vector from the vectors targeted for averaging; and a motion compensation predicted image generation unit for generating a motion compensation predicted image by obtaining pixel values of the motion compensation predicted image for the unit region targeted for encoding, with the one or more motion vectors determined by the motion vector selection unit, wherein an encoding unit determines a difference image between the picture to be encoded and the motion compensation predicted image generated by the motion compensation predicted image generation unit and encodes the difference image. According to this invention, the effect of preventing deterioration of image quality in macro blocks along a picture edge is obtained without causing a decrease in compression ratio, as a result of employing a configuration in which are provided: the motion vector derivation unit for deriving one or more motion vectors of a unit region targeted for encoding in a picture targeted for encoding, from motion vectors of neighbouring encoded unit regions and from a motion vector of a unit region located in a previously encoded picture that is stored in frame memory; the motion vector selection unit for, in the case where a unit region having, as a starting point thereof, a pixel location indicated by a motion vector derived by the motion vector derivation unit includes a region outside the picture, excluding that motion vector from the vectors targeted for averaging; and the motion compensation predicted image generation unit for generating a motion compensation predicted image by obtaining pixel values of the motion compensation predicted image for the unit region targeted for encoding, with the one or more motion vectors determined by the motion vector selection unit, wherein the encoding unit determines a difference image between the picture to be encoded and the motion compensation predicted image generated by the motion compensation predicted image generation unit and encodes the difference image. The following provides an explanation of embodiments of the present invention in accordance with the appended drawings in order to explain this invention in more detail. FIG. 1 is a block diagram showing the connection relationship between an image encoding device and an image decoding device according to a first embodiment of this invention. In FIG. 1, an image encoding device 1 is an encoding device that uses, for example, an H.264/AVC encoding method; when image data (a video image) is input therein, the plurality of pictures that compose that image data are divided into prescribed unit regions, motion vectors are determined for each unit region, and the image data is encoded by compression using the motion vectors of each unit region so as to transmit a bit stream consisting of compression-encoded data of that image data to an image decoding device 2.
When the bit stream transmitted from the image encoding device 1 is received, the image decoding device 2 uses the motion vectors of each unit region to restore the image data (video signal) of the image by decoding that bit stream. <Configuration of Image Encoding Device 1> FIG. 2 is a block diagram showing the image encoding device 1 according to a first embodiment of this invention, while FIG. 3 is a block diagram showing the interior of a motion compensation unit 26 in the image encoding device 1 of FIG. 2. The basic configuration of the image encoding device 1 of FIG. 2 is the same as that of an image encoding device typically used in an H.264/AVC encoder. However, although a direct vector determination unit 34 of FIG. 3 is not arranged in the motion compensation unit 26 of an H.264/AVC encoder, the direct vector determination unit 34 is arranged in the motion compensation unit 26 of the image encoding device 1 of FIG. 2, thus making the two different with respect to this point. In FIG. 2, a subtracter 11 carries out processing that determines a difference between image data and image data of an intra-predicted image generated by an intra-prediction compensation unit 23, and outputs that difference data in the form of intra-difference data to an encoding mode determination unit 13. A subtracter 12 carries out processing that determines a difference between image data and image data of a motion compensation predicted image generated by the motion compensation unit 26, and outputs that difference data in the form of inter-difference data to the encoding mode determination unit 13. The encoding mode determination unit 13 carries out processing that compares intra-difference data output from the subtracter 11 with inter-difference data output from the subtracter 12, determines whether an encoding mode that carries out compression based on intra-prediction is to be employed or an encoding mode that carries out compression based on motion prediction is to be employed, and notifies switches 19 and 28, the motion compensation unit 26 and a variable length encoding unit 16 of the encoding mode that has been determined. In addition, in the case where an encoding mode that carries out compression based on intra-prediction is employed, the encoding mode determination unit 13 carries out processing that outputs intra-difference data output from the subtracter 11 to a conversion unit 14, while in the case where an encoding mode that carries out compression based on motion prediction is employed, the encoding mode determination unit 13 carries out processing that outputs inter-difference data output from the subtracter 12 to the conversion unit 14. The conversion unit 14 carries out processing that integer converts intra-difference data or inter-difference data output from the encoding mode determination unit 13, and outputs that integer conversion data to a quantization unit 15. The quantization unit 15 carries out processing that quantizes integer conversion data output from the conversion unit 14, and outputs the quantized data to the variable length encoding unit 16 and an inverse quantization unit 17.
The variable length encoding unit 16 carries out processing consisting of carrying out variable length encoding on quantization data output from the quantization unit 15, the encoding mode determined by the encoding mode determination unit 13, and an intra-prediction mode or vector information (vector information relating to the optimum motion vector determined by a motion prediction unit 27) output from the switch 28, and transmitting that variable length encoded data (compression encoded data) in the form of a bit stream to the image decoding device 2. Furthermore, an encoding unit is composed of the subtracters 11 and 12, the encoding mode determination unit 13, the conversion unit 14, the quantization unit 15 and the variable length encoding unit 16. The inverse quantization unit 17 carries out processing that inversely quantizes quantization data output from the quantization unit 15, and outputs the inversely quantized data to an inverse conversion unit 18. The inverse conversion unit 18 carries out processing that inversely integer converts inverse quantization data output from the inverse quantization unit 17, and outputs the inverse integer conversion data in the form of pixel domain difference data to an adder 20. The switch 19 carries out processing that outputs image data of the intra-predicted image generated by the intra-prediction compensation unit 23 to the adder 20 if the encoding mode determined by the encoding mode determination unit 13 is an encoding mode that carries out compression based on intra-prediction, or outputs image data of the motion compensation predicted image generated by the motion compensation unit 26 to the adder 20 if the encoding mode is an encoding mode that carries out compression based on motion prediction. The adder 20 carries out processing that adds image data of the intra-predicted image or motion compensation predicted image output from the switch 19 to pixel domain difference data output from the inverse conversion unit 18. An intra-prediction memory 21 is a memory that stores addition data output from the adder 20 as image data of intra-predicted images. An intra-prediction unit 22 carries out processing that determines the optimum intra-prediction mode by comparing image data and image data of peripheral pixels (image data of intra-predicted images) stored in the intra-prediction memory 21. An intra-prediction compensation unit 23 carries out processing that generates an intra-predicted image of the optimum intra-prediction mode determined by the intra-prediction unit 22 from image data of peripheral pixels (image data of intra-predicted images) stored in the intra-prediction memory 21. A loop filter 24 carries out filtering processing that removes noise components and the like in a prediction loop contained in addition data output from the adder 20. A frame memory 25 is a memory that stores addition data following filtering processing by the loop filter 24 as image data of reference images. The motion compensation unit 26 carries out processing that divides the plurality of pictures that compose image data into prescribed unit regions to predict one or more prediction vectors or direct vectors for each unit region, and also generates a motion compensation predicted image from one or more optimum motion vectors determined by the motion prediction unit 27 and image data of reference images stored in the frame memory 25.
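To make the data flow of these blocks easier to follow, here is a schematic sketch in Python of the forward path (subtract the prediction, transform, quantize) and the local reconstruction path (inverse quantize, inverse transform, add the prediction back). It is illustrative only: a floating-point DCT stands in for the integer transform of the conversion units 14 and 18, and a single scalar quantization step replaces the real quantization parameter logic.

import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, prediction, q_step):
    # Forward path: subtracter -> conversion unit -> quantization unit
    residual = block.astype(float) - prediction
    coeffs = dctn(residual, norm="ortho")   # stand-in for the integer transform
    return np.round(coeffs / q_step)        # quantized data passed to entropy coding

def reconstruct_block(qcoeffs, prediction, q_step):
    # Local decode path: inverse quantization -> inverse conversion -> adder
    coeffs = qcoeffs * q_step
    residual = idctn(coeffs, norm="ortho")
    return residual + prediction            # stored (after loop filtering) as reference data

# Usage: round-trip one 16x16 block against a flat prediction
block = np.random.randint(0, 256, (16, 16))
pred = np.full((16, 16), 128.0)
recon = reconstruct_block(encode_block(block, pred, q_step=8.0), pred, q_step=8.0)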
The motion prediction unit 27 carries out processing that determines one or more optimum motion vectors from image data, image data of reference images stored in the frame memory 25, a prediction vector predicted by a prediction vector calculation unit 32 of the motion compensation unit 26, and one or more direct vectors remaining as vectors targeted for averaging or arithmetic mean without being excluded by the direct vector determination unit 34 of the motion compensation unit 26. For example, in the case of a motion vector of a P picture, a single motion vector is determined as the optimum motion vector, while in the case of a motion vector of a B picture, two motion vectors are determined as optimum motion vectors.

Namely, the motion prediction unit 27 carries out processing that determines one or more optimum motion vectors according to a technology commonly referred to as R-D optimization (a technology for determining motion vectors in a form that additionally considers the code quantities of motion vectors instead of simply minimizing a difference between image data and image data of reference images stored in the frame memory 25).

The switch 28 carries out processing that outputs the optimum intra-prediction mode determined by the intra-prediction unit 22 to the variable length encoding unit 16 if the encoding mode determined by the encoding mode determination unit 13 is an encoding mode that carries out compression based on intra-prediction, or outputs vector information relating to the optimum motion vector determined by the motion prediction unit 27 (a difference vector indicating a difference between a motion vector and a prediction vector in the case where the optimum motion vector is determined from a prediction vector predicted by the prediction vector calculation unit 32 of the motion compensation unit 26, or information indicating that the optimum motion vector has been determined from a direct vector in the case where the optimum motion vector is determined from a direct vector predicted by the direct vector calculation unit 33 of the motion compensation unit 26) to the variable length encoding unit 16 if the encoding mode is an encoding mode that carries out compression based on motion prediction.

In FIG. 3, a vector map storage memory 31 of the motion compensation unit 26 is a memory that stores an optimum motion vector determined by the motion prediction unit 27, or in other words, a motion vector of a unit region that has been encoded in each picture. However, although storage of the motion vector continues if the encoding mode determined by the encoding mode determination unit 13 is an encoding mode that carries out compression based on motion prediction, that motion vector is excluded from those vectors targeted for averaging if the encoding mode determined by the encoding mode determination unit 13 is an encoding mode that carries out compression based on intra-prediction.

The prediction vector calculation unit 32 carries out processing that predicts one or more prediction vectors based on prescribed rules by referring to motion vectors stored in the vector map storage memory 31.
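R-D optimization as referred to here is conventionally realized as a Lagrangian cost minimization. The following sketch shows its typical form; the candidate representation and variable names are illustrative assumptions rather than part of this embodiment.

```python
def rd_cost(distortion, bits, lagrange_multiplier):
    # Classic Lagrangian rate-distortion cost: J = D + lambda * R.
    return distortion + lagrange_multiplier * bits

def choose_best(candidates, lagrange_multiplier):
    # candidates: iterable of (option, distortion, bits) tuples.
    # The option with the lowest cost J wins, so a choice that spends a few
    # more bits can still be selected if it reduces distortion enough.
    return min(candidates,
               key=lambda c: rd_cost(c[1], c[2], lagrange_multiplier))[0]
```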
A direct vector calculation unit 33 carries out processing that predicts one or more motion vectors of a unit region targeted for encoding as direct vectors from motion vectors stored in the vector map storage memory 31, namely motion vectors of encoded unit regions present in proximity to the unit region targeted for encoding in a picture targeted for encoding, and motion vectors of unit regions at the same location as the unit region in encoded pictures positioned chronologically before and after the picture. Furthermore, the direct vector calculation unit 33 composes a motion vector derivation unit.

The direct vector determination unit 34 carries out processing that outputs a direct vector to the motion prediction unit 27 if a unit region having, as a starting point thereof, a pixel location indicated by the direct vector predicted by the direct vector calculation unit 33 does not include a region outside the picture, but excludes the direct vector from vectors targeted for averaging if the unit region includes a region outside the picture. Furthermore, the direct vector determination unit 34 composes a direct vector selection unit.

A motion compensation predicted image generation unit 35 carries out processing that generates a motion compensation predicted image by determining an average of pixel values of unit regions having, as starting points thereof, pixel locations indicated by one or more optimum motion vectors determined by the motion prediction unit 27. Furthermore, the motion compensation predicted image generation unit 35 composes a motion compensation predicted image generation unit.

The following provides an explanation of operation.

However, since processing of processing units other than the direct vector determination unit 34 of the motion compensation unit 26 in the image encoding device 1 of FIG. 2 is equivalent to processing typically used in H.264/AVC encoding, only brief explanations are provided regarding the operation of processing units other than the direct vector determination unit 34.

When image data of an image is input, the subtracter 11 determines a difference between that image data and image data of an intra-predicted image generated by the intra-prediction compensation unit 23 to be subsequently described, and outputs that difference data in the form of intra-difference data to the encoding mode determination unit 13.

In addition, when image data of an image is input, the subtracter 12 determines a difference between that image data and image data of a motion compensation predicted image generated by the motion compensation unit 26 to be subsequently described, and outputs that difference data in the form of inter-difference data to the encoding mode determination unit 13.

When intra-difference data is received from the subtracter 11 and inter-difference data is received from the subtracter 12, the encoding mode determination unit 13 compares the intra-difference data and the inter-difference data and determines whether an encoding mode that carries out compression based on intra-prediction or an encoding mode that carries out compression based on motion prediction is to be employed. However, the method for determining the encoding mode based on the comparison of intra-difference data and inter-difference data uses the technology typically referred to as R-D optimization (a technology for determining the encoding mode in a form that additionally considers code quantities instead of simply selecting the smaller difference).
When the encoding mode determination unit 13 has determined the encoding mode, it notifies the switches 19 and 28, the motion compensation unit 26 and the variable length encoding unit 16 of that encoding mode.

In addition, the encoding mode determination unit 13 outputs the intra-difference data output from the subtracter 11 to the conversion unit 14 in the case where an encoding mode that carries out compression based on intra-prediction is employed, or outputs the inter-difference data output from the subtracter 12 to the conversion unit 14 in the case where an encoding mode that carries out compression based on motion prediction is employed.

When intra-difference data or inter-difference data has been received from the encoding mode determination unit 13, the conversion unit 14 integer converts the intra-difference data or the inter-difference data, and outputs that integer conversion data to the quantization unit 15.

When the integer conversion data has been received from the conversion unit 14, the quantization unit 15 quantizes the integer conversion data and outputs the quantized data to the variable length encoding unit 16 and the inverse quantization unit 17.

The variable length encoding unit 16 carries out variable length encoding on the quantized data output from the quantization unit 15, the encoding mode determined by the encoding mode determination unit 13, and the intra-prediction mode or vector information (vector information relating to an optimum motion vector determined by the motion prediction unit 27) output from the switch 28 to be subsequently described, and transmits that variable length encoded data in the form of a bit stream to the image decoding device 2.

When quantized data is received from the quantization unit 15, the inverse quantization unit 17 carries out inverse quantization on that quantized data and outputs the inversely quantized data to the inverse conversion unit 18.

When inversely quantized data is received from the inverse quantization unit 17, the inverse conversion unit 18 inversely integer converts the inversely quantized data, and outputs that inverse integer conversion data in the form of pixel domain difference data to the adder 20.

The switch 19 outputs image data of the intra-predicted image generated by the intra-prediction compensation unit 23 to be subsequently described to the adder 20 if the encoding mode determined by the encoding mode determination unit 13 is an encoding mode that carries out compression based on intra-prediction, or outputs image data of the motion compensation predicted image generated by the motion compensation unit 26 to be subsequently described to the adder 20 if the encoding mode carries out compression based on motion prediction.

The adder 20 adds image data of the intra-predicted image or the motion compensation predicted image output from the switch 19 and the pixel domain difference data output from the inverse conversion unit 18, and outputs that addition data to the intra-prediction memory 21 and the loop filter 24.

The intra-prediction unit 22 determines the optimum intra-prediction mode by comparing image data of an input image with image data of peripheral pixels (image data of intra-prediction images) stored in the intra-prediction memory 21. Since the method for determining the optimum intra-prediction mode uses the technology typically referred to as R-D optimization, a detailed explanation thereof is omitted.
When the intra-prediction unit 22 determines the optimum intra-prediction mode, the intra-prediction compensation unit 23 generates an intra-predicted image of that intra-prediction mode from image data of peripheral pixels (image data of intra-prediction images) stored in the intra-prediction memory 21, and outputs image data of the intra-predicted image to the subtracter 11 and the switch 19. However, since the method for generating the intra-predicted image is disclosed in H.264/AVC, a detailed explanation thereof is omitted.

When addition data (image data of the motion compensation predicted image + pixel domain difference data) is received from the adder 20, the loop filter 24 carries out filtering processing that removes noise components and the like in a prediction loop contained in that addition data, and stores the addition data following filtering processing in the frame memory 25 as image data of reference images.

The motion compensation unit 26 carries out processing that divides a plurality of pictures that compose image data into prescribed unit regions to predict one or more prediction vectors or direct vectors of each unit region, and also generates a motion compensation predicted image from one or more optimum motion vectors determined by the motion prediction unit 27 and reference images stored in the frame memory 25.

The following provides a detailed explanation of the contents of processing of the motion compensation unit 26.

An optimum motion vector previously determined by the motion prediction unit 27, namely a motion vector of an encoded unit region in each picture, is stored in the vector map storage memory 31 of the motion compensation unit 26. However, although the motion vector continues to be stored if the encoding mode determined by the encoding mode determination unit 13 is an encoding mode that carries out compression based on motion prediction, if the encoding mode determined by the encoding mode determination unit 13 is an encoding mode that carries out compression based on intra-prediction, then the motion vector is excluded from motion vectors targeted for averaging.

The prediction vector calculation unit 32 of the motion compensation unit 26 calculates one or more prediction vectors based on prescribed rules by referring to a motion vector of an encoded unit region in each picture stored in the vector map storage memory 31. However, since the rules for calculating the prediction vector are disclosed in H.264/AVC, a detailed explanation thereof is omitted.

The direct vector calculation unit 33 of the motion compensation unit 26 predicts one or more motion vectors of the unit region targeted for encoding in a picture targeted for encoding, from motion vectors stored in the vector map storage memory 31, namely motion vectors of encoded unit regions present in proximity to the unit region targeted for encoding, and from motion vectors of unit regions at the same location as the unit region in encoded pictures positioned chronologically before and after the picture.

Here, FIGS. 4 to 7 are explanatory drawings indicating the contents of processing of the direct vector calculation unit 33 disclosed in H.264/AVC.

A direct vector in H.264/AVC is a vector used in a B picture, and FIGS. 4 to 7 show an example of the temporal direct method.

In this example, two direct vectors (refer to the vectors of the B picture) as shown in FIG. 7 are calculated by the direct vector calculation unit 33.
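The two direct vectors of the temporal direct mode are obtained by scaling the motion vector of the co-located region by temporal distances. A simplified sketch follows; H.264/AVC actually specifies a fixed-point rounding approximation of this division, and plain integer division is used here only for clarity.

```python
def temporal_direct_vectors(mv_col, tb, td):
    # mv_col: motion vector of the co-located region in the anchor picture.
    # tb: temporal distance from the current B picture to its list-0 reference.
    # td: temporal distance between the two reference pictures.
    mv_l0 = (mv_col[0] * tb // td, mv_col[1] * tb // td)  # forward vector
    mv_l1 = (mv_l0[0] - mv_col[0], mv_l0[1] - mv_col[1])  # backward vector
    return mv_l0, mv_l1
```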
Consequently, when the motion compensation predicted image generation unit 35 to be subsequently described generates a motion compensation predicted image, it refers to an image location as shown in FIG. 8, and carries out a reference in which one of the direct vectors includes a region outside the picture (refer to the dotted line of the P picture).

However, even if the leading end of a direct vector indicates an area within the picture, the direct vector indicates an area outside the picture in the case where the unit region having, as a starting point thereof, the pixel location indicated by the direct vector includes a region outside the picture.

A technology typically referred to as "picture edge expansion" is defined in the H.264/AVC standard. Namely, as shown in FIG. 9, this technology standardizes the determination of pixels outside the picture by extending pixels along the edges of the picture to outside the picture.

As a result, since a direct mode predicted image is output for the gray portion shown in FIG. 9 from the motion compensation predicted image generation unit 35 as a portion of a motion compensation predicted image, this leads to a decrease in prediction efficiency.

Incidentally, in the case where the images shown in this example are encoded using the typical method of H.264/AVC, approximately 30 bits are required to encode that block (encoding using (CAVLC, B16×16_L0, motion vector (8.0, 8.0), no coefficient) is required).

In this first embodiment, determination of the direct vector is carried out with an algorithm as shown in FIG. 10 in order to avoid the output of direct mode predicted images as described above.

The algorithm shown in FIG. 10 designates a direct vector indicating a region that includes an area outside the picture as not being used, and the subsequently described direct vector determination unit 34 executes this algorithm.

When a direct vector indicating a region that includes an area outside the picture is designated as not being used, reference is made in only one direction, and since the direct mode predicted image coincides with the image targeted for encoding, prediction efficiency is improved considerably. In this example of the first embodiment, it is sufficient to encode B_Skip (although B_Skip constitutes variable length encoding, it is generally known to average 1 bit or less).

In the case where the direct vector calculation unit 33 predicts one or more direct vectors, the direct vector determination unit 34 of the motion compensation unit 26 outputs each direct vector to the motion prediction unit 27 if a unit region having, as a starting point thereof, a pixel location indicated by that direct vector does not include a region outside the picture; if the unit region having, as a starting point thereof, the pixel location indicated by that direct vector includes a region outside the picture, that direct vector is excluded from vectors targeted for averaging.

However, in the case where all direct vectors predicted by the direct vector calculation unit 33 indicate unit regions that include a region outside the picture, those direct vectors are exceptionally output to the motion prediction unit 27 without being excluded from vectors targeted for averaging by the direct vector determination unit 34.
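A minimal sketch of this determination, including the out-of-picture test, the edge-extension lookup of FIG. 9, and the exceptional case in which every direct vector would be excluded (FIG. 10); coordinates are integer pixels and all names are illustrative assumptions.

```python
def edge_extended_sample(picture, x, y, pic_w, pic_h):
    # "Picture edge expansion": a sample outside the picture takes the value
    # of the nearest edge pixel (coordinates are clamped to the border).
    return picture[min(max(y, 0), pic_h - 1)][min(max(x, 0), pic_w - 1)]

def region_outside_picture(bx, by, bw, bh, vx, vy, pic_w, pic_h):
    # True if the unit region whose starting point is the pixel location
    # indicated by the vector extends beyond any picture border.
    ref_x, ref_y = bx + vx, by + vy
    return (ref_x < 0 or ref_y < 0 or
            ref_x + bw > pic_w or ref_y + bh > pic_h)

def select_direct_vectors(direct_vectors, bx, by, bw, bh, pic_w, pic_h):
    # Keep only direct vectors whose referenced region stays inside the
    # picture; these remain targeted for averaging.
    kept = [(vx, vy) for (vx, vy) in direct_vectors
            if not region_outside_picture(bx, by, bw, bh, vx, vy,
                                          pic_w, pic_h)]
    # Exceptional case: if every direct vector points outside the picture,
    # none is excluded and all are output unchanged.
    return kept if kept else list(direct_vectors)
```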
The motion prediction unit 27 determines one or more optimum motion vectors from image data of an image, image data of reference images stored in the frame memory 25, a prediction vector predicted by the prediction vector calculation unit 32 of the motion compensation unit 26, and one or more direct vectors that remain without being excluded from vectors targeted for averaging by the direct vector determination unit 34 of the motion compensation unit 26.

For example, in the case of a motion vector of the P picture, a single motion vector is determined as the optimum motion vector, while in the case of a motion vector of the B picture, two motion vectors are determined as optimum motion vectors. However, processing for determining one or more optimum motion vectors is carried out according to the technology typically referred to as R-D optimization (the technology for determining motion vectors in a form that additionally considers the code quantities of motion vectors instead of simply minimizing the difference between image data and image data of reference images stored in the frame memory 25).

When an optimum motion vector has been determined, the motion prediction unit 27 outputs vector information relating to that optimum motion vector to the switch 28.

Namely, if the motion prediction unit 27 determines an optimum motion vector by using a prediction vector predicted by the prediction vector calculation unit 32 of the motion compensation unit 26 when determining the optimum motion vector, it outputs a difference vector indicating a difference between the motion vector and the prediction vector to the switch 28 as vector information.

If the motion prediction unit 27 determines the optimum motion vector by using a direct vector predicted by the direct vector calculation unit 33 of the motion compensation unit 26 when determining the optimum motion vector, it outputs information indicating that the optimum motion vector has been determined from a direct vector to the switch 28 as vector information.

When the motion prediction unit 27 has determined only one optimum motion vector, the motion compensation predicted image generation unit 35 of the motion compensation unit 26 generates a pixel value of the unit region having, as a starting point thereof, the pixel location indicated by that motion vector as a motion compensation predicted image.

In addition, when the motion prediction unit 27 has determined two or more optimum motion vectors, the motion compensation predicted image generation unit 35 generates a motion compensation predicted image by determining an average of pixel values of the unit regions having, as starting points thereof, the pixel locations indicated by the two or more optimum motion vectors.

In this manner, as a result of the direct vector determination unit 34 of the motion compensation unit 26 excluding a direct vector that indicates a unit region that includes a region outside the picture from vectors targeted for averaging, the motion compensation predicted image generated by the motion compensation predicted image generation unit 35 becomes as shown in FIG. 10. Consequently, portions that cannot be encoded as B_Skip with H.264/AVC and thus require approximately 30 bits of code can be encoded as B_Skip in this first embodiment, thereby requiring only about 1 bit of code and providing the advantage of improved prediction efficiency.
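For concreteness, a sketch of the averaging performed by the motion compensation predicted image generation unit 35, restricted to integer-pel vectors and assuming out-of-picture samples have already been handled (e.g., by edge extension); all names are illustrative.

```python
def motion_compensated_prediction(ref_pictures, vectors, bx, by, bw, bh):
    # One reference picture is assumed per vector. With a single vector the
    # referenced block itself is the predicted image; with two or more, the
    # prediction is the per-pixel average of the referenced blocks.
    acc = [[0] * bw for _ in range(bh)]
    for ref, (vx, vy) in zip(ref_pictures, vectors):
        for y in range(bh):
            for x in range(bw):
                acc[y][x] += ref[by + vy + y][bx + vx + x]
    n = len(vectors)
    # Integer average with rounding to nearest.
    return [[(value + n // 2) // n for value in row] for row in acc]
```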
The switch 28 outputs the optimum intra-prediction mode determined by the intra-prediction unit 22 to the variable length encoding unit 16 if the encoding mode determined by the encoding mode determination unit 13 is an encoding mode that carries out compression based on intra-prediction, or outputs vector information relating to the optimum motion vector determined by the motion prediction unit 27 to the variable length encoding unit 16 if the encoding mode carries out compression based on motion prediction.

<Configuration of Image Decoding Device 2>

FIG. 11 is a block diagram showing the image decoding device 2 according to the first embodiment of this invention, while FIG. 12 is a block diagram showing the interior of a motion compensation unit 50 in the image decoding device 2 of FIG. 11.

The basic configuration of the image decoding device 2 of FIG. 11 is the same as the configuration of an image decoding device typically used in an H.264/AVC decoder.

However, although a direct vector determination unit 66 of FIG. 12 is not mounted in the motion compensation unit in an H.264/AVC decoder, the direct vector determination unit 66 is mounted in the motion compensation unit 50 of the image decoding device 2 of FIG. 11, and the two differ in this respect.

In FIG. 11, when a variable length decoding unit 41 receives a bit stream transmitted from the image encoding device 1, it analyzes the syntax of the bit stream, outputs prediction residual signal encoded data corresponding to quantized data output from the quantization unit 15 of the image encoding device 1 to an inverse quantization unit 42, and outputs the encoding mode determined by the encoding mode determination unit 13 of the image encoding device 1 to switches 46 and 51. In addition, the variable length decoding unit 41 carries out processing that outputs an intra-prediction mode output from the intra-prediction unit 22 of the image encoding device 1 or vector information output from the motion prediction unit 27 to the switch 46, and outputs vector information output from the motion prediction unit 27 to the motion compensation unit 50.

The inverse quantization unit 42 carries out processing that inversely quantizes prediction residual signal encoded data output from the variable length decoding unit 41, and outputs the inversely quantized data to an inverse conversion unit 43.

The inverse conversion unit 43 carries out processing that inversely integer converts inversely quantized data output from the inverse quantization unit 42, and outputs the inverse integer conversion data in the form of a prediction residual signal decoded value to an adder 44.

The adder 44 carries out processing that adds image data of an intra-predicted image or motion compensation predicted image output from the switch 51 and the prediction residual signal decoded value output from the inverse conversion unit 43.

A loop filter 45 carries out filtering processing that removes noise components and the like in a prediction loop contained in the addition data output from the adder 44, and outputs the addition data following filtering processing as image data of a decoded image (image).

Furthermore, a decoding unit is composed of the variable length decoding unit 41, the inverse quantization unit 42, the inverse conversion unit 43, the adder 44 and the loop filter 45.
The switch 46 carries out processing that outputs an intra-prediction mode output from the variable length decoding unit 41 to an intra-prediction compensation unit 48 if the encoding mode output from the variable length decoding unit 41 is an encoding mode that carries out compression based on intra-prediction, or outputs vector information output from the variable length decoding unit 41 to the motion compensation unit 50 if the encoding mode carries out compression based on motion prediction.

An intra-prediction memory 47 is a memory that stores addition data output from the adder 44 as image data of intra-prediction images.

The intra-prediction compensation unit 48 carries out processing that generates an intra-predicted image of the intra-prediction mode output by the switch 46 from image data of peripheral pixels (image data of intra-prediction images) stored in the intra-prediction memory 47.

A frame memory 49 is a memory that stores image data output from the loop filter 45 as image data of reference images.

The motion compensation unit 50 carries out processing that divides a plurality of pictures that compose image data into prescribed unit regions to predict one or more prediction vectors or direct vectors of each unit region, and also generates a motion compensation predicted image from image data of reference images stored in the frame memory 49.

The switch 51 carries out processing that outputs image data of an intra-predicted image generated by the intra-prediction compensation unit 48 to the adder 44 if the encoding mode output from the variable length decoding unit 41 is an encoding mode that carries out compression based on intra-prediction, or outputs image data of a motion compensation predicted image generated by the motion compensation unit 50 to the adder 44 if the encoding mode carries out compression based on motion prediction.

In FIG. 12, a vector map storage memory 61 of the motion compensation unit 50 is a memory that stores a motion vector output from a switch 67, namely a motion vector of a decoded unit region in each picture.

A switch 62 carries out processing that initiates a prediction vector calculation unit 63 if vector information output from the variable length decoding unit 41 corresponds to a difference vector, or initiates a direct vector calculation unit 65 if the vector information indicates that the optimum motion vector has been determined from a direct vector.

The prediction vector calculation unit 63 carries out processing that refers to a motion vector stored in the vector map storage memory 61, and predicts one or more prediction vectors based on prescribed rules.

An adder 64 carries out processing that adds a prediction vector predicted by the prediction vector calculation unit 63 to a difference vector output from the variable length decoding unit 41 (vector information output from the variable length decoding unit 41 corresponds to a difference vector in the situations in which the prediction vector calculation unit 63 has been initiated), and outputs the addition result in the form of a motion vector to the switch 67.
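As a rough sketch of the flow through the switch 62, the prediction vector calculation unit 63 and the adder 64, assuming a hypothetical dictionary representation of the decoded vector information (the helper callables stand in for units 63 and 65 and are not defined by this embodiment):

```python
def derive_motion_vectors(vector_info, predict_vectors, predict_directs):
    # Switch 62: a difference vector initiates the prediction vector path,
    # otherwise the direct vector path is initiated.
    if vector_info.get("difference_vector") is not None:
        dx, dy = vector_info["difference_vector"]
        # Adder 64: each locally derived prediction vector plus the decoded
        # difference vector reconstructs a motion vector componentwise.
        return [(px + dx, py + dy) for (px, py) in predict_vectors()]
    # Direct mode: the decoder re-derives the direct vectors itself.
    return predict_directs()
```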
The direct vector calculation unit 65 carries out processing that predicts one or more motion vectors of a unit region targeted for decoding in a picture targeted for decoding as direct vectors, from motion vectors stored in the vector map storage memory 61, namely motion vectors of decoded unit regions present in proximity to the unit region targeted for decoding, and from motion vectors of unit regions at the same location as the unit region in decoded pictures positioned chronologically before and after the picture. Furthermore, the direct vector calculation unit 65 composes a motion vector derivation unit.

The direct vector determination unit 66 carries out processing that outputs the direct vector to the switch 67 if a unit region having, as a starting point thereof, a pixel location indicated by the direct vector predicted by the direct vector calculation unit 65 does not include a region outside the picture, but excludes the direct vector from vectors targeted for averaging in the case where it includes a region outside the picture. Furthermore, the direct vector determination unit 66 composes a motion vector selection unit.

The switch 67 carries out processing that outputs a motion vector output from the adder 64 to a motion compensation predicted image generation unit 68 and the vector map storage memory 61 if vector information output from the variable length decoding unit 41 corresponds to a difference vector, or outputs a direct vector that is a motion vector output from the direct vector determination unit 66 to the motion compensation predicted image generation unit 68 and the vector map storage memory 61 if the vector information indicates that an optimum motion vector has been determined from the direct vector.

The motion compensation predicted image generation unit 68 carries out processing that generates a motion compensation predicted image by determining an average of pixel values of unit regions having, as starting points thereof, pixel locations indicated by one or more motion vectors output from the switch 67. Furthermore, the motion compensation predicted image generation unit 68 composes a motion compensation predicted image generation unit.

The following provides an explanation of operation.

When the variable length decoding unit 41 receives a bit stream transmitted from the image encoding device 1, it analyzes the syntax of that bit stream.

As a result, it outputs prediction residual signal encoded data corresponding to quantized data output from the quantization unit 15 of the image encoding device 1 to the inverse quantization unit 42, and outputs an encoding mode determined by the encoding mode determination unit 13 of the image encoding device 1 to the switches 46 and 51.

In addition, the variable length decoding unit 41 outputs an intra-prediction mode output from the intra-prediction unit 22 of the image encoding device 1 or a difference vector (vector information) output from the motion prediction unit 27 to the switch 46, and outputs the vector information output from the motion prediction unit 27 to the motion compensation unit 50.

When prediction residual signal encoded data has been received from the variable length decoding unit 41, the inverse quantization unit 42 inversely quantizes the prediction residual signal encoded data and outputs that inversely quantized data to the inverse conversion unit 43.
When inversely quantized data is received from the inverse quantization unit 42, the inverse conversion unit 43 inversely integer converts the inversely quantized data and outputs that inverse integer conversion data in the form of a prediction residual signal decoded value to the adder 44.

The switch 46 outputs an intra-prediction mode output from the variable length decoding unit 41 to the intra-prediction compensation unit 48 if the encoding mode output from the variable length decoding unit 41 is an encoding mode that carries out compression based on intra-prediction, or outputs vector information from the variable length decoding unit 41 to the motion compensation unit 50 if the encoding mode carries out compression based on motion prediction.

When an intra-prediction mode is received from the switch 46, the intra-prediction compensation unit 48 generates an intra-predicted image of that intra-prediction mode from image data of peripheral pixels (image data of intra-prediction images) stored in the intra-prediction memory 47, and outputs image data of that intra-predicted image to the switch 51. However, since the method for generating the intra-predicted image is disclosed in H.264/AVC, a detailed explanation thereof is omitted.

When vector information is received from the switch 46, the motion compensation unit 50 divides a plurality of pictures that compose image data into prescribed unit regions to thereby predict one or more prediction vectors or direct vectors of each unit region, and also generates a motion compensation predicted image from the image data of reference images stored in the frame memory 49.

The following provides a detailed explanation of the contents of processing of the motion compensation unit 50.

A previously calculated motion vector, namely a motion vector of a decoded unit region in each picture, is stored in the vector map storage memory 61 of the motion compensation unit 50.

When vector information is received from the variable length decoding unit 41, the switch 62 of the motion compensation unit 50 determines whether the vector information corresponds to a difference vector or the vector information is information indicating that an optimum motion vector has been determined from a direct vector.

The switch 62 initiates the prediction vector calculation unit 63 if the vector information corresponds to a difference vector, or initiates the direct vector calculation unit 65 if the vector information is information indicating that an optimum motion vector has been determined from a direct vector.

When an initiation command is received from the switch 62, the prediction vector calculation unit 63 of the motion compensation unit 50 calculates one or more prediction vectors based on prescribed rules by referring to a motion vector of a decoded unit region in each picture stored in the vector map storage memory 61. However, since the method for calculating the prediction vector is disclosed in H.264/AVC, a detailed explanation thereof is omitted.
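The "prescribed rules" are those of H.264/AVC, where in the common case the predictor is the componentwise median of the motion vectors of three neighbouring decoded regions. A simplified sketch follows; the standard adds special cases for unavailable neighbours and reference-index matching, which are omitted here.

```python
def median_prediction_vector(mv_left, mv_above, mv_above_right):
    # Componentwise median of the neighbouring motion vectors (A, B, C).
    def median3(a, b, c):
        return sorted((a, b, c))[1]
    return (median3(mv_left[0], mv_above[0], mv_above_right[0]),
            median3(mv_left[1], mv_above[1], mv_above_right[1]))
```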
When one or more prediction vectors are received from the prediction vector calculation unit 63, the adder 64 of the motion compensation unit 50 adds each prediction vector to a difference vector output from the variable length decoding unit 41 (vector information output from the variable length decoding unit 41 corresponds to a difference vector in the situations in which the prediction vector calculation unit 63 has been initiated), and outputs the addition result in the form of a motion vector to the switch 67.

When an initiation command is received from the switch 62, the direct vector calculation unit 65 of the motion compensation unit 50 predicts one or more motion vectors as direct vectors of a unit region targeted for decoding in a picture targeted for decoding, from motion vectors stored in the vector map storage memory 61, namely motion vectors of decoded unit regions present in proximity to the unit region targeted for decoding, and from motion vectors of unit regions at the same location as the unit region in decoded pictures positioned chronologically before and after the picture.

Furthermore, since the contents of processing of the direct vector calculation unit 65 are similar to the contents of processing of the direct vector calculation unit 33 of FIG. 3, a detailed explanation thereof is omitted (see FIGS. 4 to 7).

When one or more direct vectors are predicted by the direct vector calculation unit 65, the direct vector determination unit 66 of the motion compensation unit 50 outputs each direct vector to the switch 67 if a unit region having, as a starting point thereof, a pixel location indicated by that direct vector does not include a region outside the picture, but excludes the direct vector from vectors targeted for averaging in the case where the unit region having, as a starting point thereof, the pixel location indicated by that direct vector includes a region outside the picture.

However, in the case where all direct vectors predicted by the direct vector calculation unit 65 indicate unit regions that include a region outside the picture, those direct vectors are exceptionally output to the switch 67 without being excluded from vectors targeted for averaging by the direct vector determination unit 66.

Furthermore, the contents of processing of the direct vector determination unit 66 are similar to the contents of processing of the direct vector determination unit 34 of FIG. 3.

The switch 67 of the motion compensation unit 50 determines whether vector information output from the variable length decoding unit 41 corresponds to a difference vector, or that vector information is information indicating that an optimum motion vector has been determined from a direct vector.

The switch 67 outputs a motion vector output from the adder 64 to the motion compensation predicted image generation unit 68 and the vector map storage memory 61 if the vector information corresponds to a difference vector, or outputs a direct vector that is a motion vector output from the direct vector determination unit 66 to the motion compensation predicted image generation unit 68 and the vector map storage memory 61 if the vector information indicates that an optimum motion vector has been determined from the direct vector.
When only one motion vector is received from the switch 67, the motion compensation predicted image generation unit 68 of the motion compensation unit 50 generates a pixel value of the unit region having, as a starting point thereof, the pixel location indicated by that motion vector as a motion compensation predicted image.

In addition, when two or more motion vectors are received from the switch 67, the motion compensation predicted image generation unit 68 generates a motion compensation predicted image by determining an average of pixel values of the unit regions having, as starting points thereof, the pixel locations indicated by the two or more motion vectors.

Furthermore, the contents of processing of the motion compensation predicted image generation unit 68 are similar to the contents of processing of the motion compensation predicted image generation unit 35 of FIG. 3.

In this manner, as a result of the direct vector determination unit 66 of the motion compensation unit 50 excluding a direct vector that indicates a unit region that includes a region outside the picture from vectors targeted for averaging, the motion compensation predicted image generated by the motion compensation predicted image generation unit 68 becomes as shown in FIG. 10. Consequently, portions that cannot be encoded as B_Skip with H.264/AVC and thus require approximately 30 bits of code can be encoded as B_Skip in this first embodiment, thereby requiring only about 1 bit of code and providing the advantage of improved prediction efficiency.

The switch 51 outputs image data of an intra-predicted image generated by the intra-prediction compensation unit 48 to the adder 44 if the encoding mode output from the variable length decoding unit 41 is an encoding mode that carries out compression based on intra-prediction, or outputs image data of a motion compensation predicted image generated by the motion compensation unit 50 to the adder 44 if the encoding mode carries out compression based on motion prediction.

When a prediction residual signal decoded value is received from the inverse conversion unit 43 and image data of an intra-predicted image or motion compensation predicted image is received from the switch 51, the adder 44 adds that prediction residual signal decoded value and the image data of the intra-predicted image or motion compensation predicted image, and outputs the addition data to the loop filter 45.

In addition, the adder 44 stores that addition data in the intra-prediction memory 47 as image data of intra-prediction images.

When addition data is received from the adder 44, the loop filter 45 carries out filtering processing that removes noise components and the like in a prediction loop contained in that addition data, and outputs the addition data following filtering processing as image data of a decoded image (image).

In addition, the loop filter 45 stores the image data of the decoded image in the frame memory 49 as image data of reference images.
As is clear from the previous explanation, according to this first embodiment, since the image encoding device 1 is provided with the direct vector calculation unit 33, which predicts one or more motion vectors as direct vectors of the unit region targeted for encoding in a picture targeted for encoding, from motion vectors of encoded unit regions present in proximity to the unit region targeted for encoding, and from motion vectors of unit regions at the same location as the unit region in encoded pictures positioned chronologically before and after the picture, the direct vector determination unit 34, which excludes a direct vector from vectors targeted for averaging in the case where a unit region having, as a starting point thereof, a pixel location indicated by the direct vector predicted by the direct vector calculation unit 33 includes a region outside the picture, and the motion compensation predicted image generation unit 35, which generates a motion compensation predicted image by determining an average of pixel values of unit regions having, as starting points thereof, pixel locations indicated by one or more direct vectors that remain without being excluded from vectors targeted for averaging by the direct vector determination unit 34, and since the image encoding device 1 is configured so as to determine a difference image between a motion compensation predicted image generated by the motion compensation predicted image generation unit 35 and an image and encode that difference image, the effect is demonstrated of being able to prevent deterioration of image quality in macro blocks along the edge of a picture without leading to a decrease in compression ratio.

In addition, since the image decoding device 2 is provided with the direct vector calculation unit 65, which predicts one or more motion vectors as direct vectors of a unit region targeted for decoding in a picture targeted for decoding, from motion vectors of decoded unit regions present in proximity to the unit region targeted for decoding, and motion vectors of unit regions at the same location as the unit region in decoded pictures positioned chronologically before and after the picture, the direct vector determination unit 66, which excludes a direct vector from vectors targeted for averaging in the case where a unit region having, as a starting point thereof, a pixel location indicated by the direct vector predicted by the direct vector calculation unit 65 includes a region outside the picture, and the motion compensation predicted image generation unit 68, which generates a motion compensation predicted image by determining an average of pixel values of unit regions having, as starting points thereof, pixel locations indicated by one or more direct vectors that remain without being excluded from vectors targeted for averaging by the direct vector determination unit 66, and since the image decoding device 2 is configured so as to decode a prediction residual signal from compression-encoded data of an image, and add the prediction residual signal decoded value and the motion compensation predicted image generated by the motion compensation predicted image generation unit 68, the effect is demonstrated of being able to restore image data of images by decoding a bit stream output from the image encoding device 1 of FIG. 2.
Furthermore, although this first embodiment indicated the example of using H.264/AVC for the video encoding method, the first embodiment can be similarly applied to other encoding methods similar to H.264/AVC (such as MPEG-2, MPEG-4 Visual or SMPTE VC-1).

<Configuration of Image Encoding Device 1>

FIG. 13 is a block diagram showing the image encoding device 1 according to a second embodiment of this invention, and in this drawing, the same reference symbols are used to indicate those portions that are identical or equivalent to those of FIG. 2, and an explanation thereof is omitted.

In addition, FIG. 14 is a block diagram showing the interior of a motion compensation unit 71 in the image encoding device 1 of FIG. 13, and in this drawing as well, the same reference symbols are used to indicate those portions that are identical or equivalent to those of FIG. 3, and an explanation thereof is omitted.

In FIGS. 13 and 14, the motion compensation unit 71 carries out processing that divides a plurality of pictures that compose image data into prescribed unit regions to predict one or more prediction vectors or direct vectors of each unit region, and also generates a motion compensation predicted image from one or more optimum motion vectors determined by a motion prediction unit 72 and image data of reference images stored in the frame memory 25.

However, the motion compensation unit 71 differs from the motion compensation unit 26 of FIG. 2 in that all direct vectors predicted by the internal direct vector calculation unit 33 are output to the motion prediction unit 72 instead of only the one or more direct vectors remaining as vectors targeted for averaging without being excluded by the internal direct vector determination unit 34.

Although the motion prediction unit 72 determines an optimum motion vector by using a direct vector or motion vector in the same manner as the motion prediction unit 27 of FIG. 2, since it receives from the motion compensation unit 71 all direct vectors predicted by the direct vector calculation unit 33 instead of only the one or more direct vectors remaining as vectors targeted for averaging without being excluded by the direct vector determination unit 34, those direct vectors having a higher prediction efficiency near the edges of the picture are selected.

In addition, the motion prediction unit 72 outputs information indicating which direct vector has been selected to the switch 28 by including it in the vector information.

Next, an explanation is provided of operation.

The motion compensation unit 71 outputs one or more prediction vectors predicted by the internal prediction vector calculation unit 32 to the motion prediction unit 72, and outputs one or more direct vectors (to be referred to as "direct vector A") remaining as vectors targeted for averaging without being excluded by the internal direct vector determination unit 34 to the motion prediction unit 72.

In addition, the motion compensation unit 71 outputs all direct vectors (to be referred to as "direct vectors B") predicted by the internal direct vector calculation unit 33 to the motion prediction unit 72.
Although the motion prediction unit 72 determines an optimum motion vector in the same manner as the motion prediction unit 27 of FIG. 2 when a direct vector and prediction vector are received from the motion compensation unit 71, since the direct vectors B are also received from the motion compensation unit 71 in addition to the direct vector A, the direct vector A or the direct vectors B are selected after determining which of the direct vectors results in higher prediction efficiency near the edges of the picture.

Since prediction efficiency decreases in the case where a unit region having, as a starting point thereof, a pixel location indicated by a direct vector includes a region outside the picture, as previously explained in the first embodiment, use of the direct vector A rather than the direct vectors B generally yields higher prediction efficiency near the edges of the picture; however, in the case where, for example, the area of the region outside the picture included in a unit region is extremely small, use of the direct vectors B may yield higher prediction efficiency near the edges of the picture.

Furthermore, the method for selecting the direct vector yielding the highest prediction efficiency uses the technology typically referred to as R-D optimization, and processing is carried out for determining the optimum direct vector.

When an optimum motion vector has been determined, the motion prediction unit 72 outputs vector information relating to that optimum motion vector to the switch 28.

Namely, when determining an optimum motion vector, if the optimum motion vector is determined using a prediction vector predicted by the prediction vector calculation unit 32 of the motion compensation unit 71, the motion prediction unit 72 outputs a difference vector indicating a difference between that motion vector and the prediction vector to the switch 28 as vector information.

When determining an optimum motion vector, if the optimum motion vector is determined using the direct vector A output from the direct vector determination unit 34 of the motion compensation unit 71, the motion prediction unit 72 outputs information indicating that the optimum motion vector has been determined from a direct vector, and information indicating that the direct vector A output from the direct vector determination unit 34 has been selected, to the switch 28 as vector information.

When determining an optimum motion vector, if the optimum motion vector is determined using the direct vectors B output from the direct vector calculation unit 33 of the motion compensation unit 71, the motion prediction unit 72 outputs information indicating that the optimum motion vector has been determined from a direct vector, and information indicating that the direct vectors B output from the direct vector calculation unit 33 have been selected, to the switch 28 as vector information.

<Configuration of Image Decoding Device 2>

FIG. 15 is a block diagram showing the image decoding device 2 according to the second embodiment of this invention, and in this drawing, the same reference symbols are used to indicate those portions that are identical or equivalent to those of FIG. 11, and an explanation thereof is omitted.

In addition, FIG. 16 is a block diagram showing the interior of a motion compensation unit 80 in the image decoding device 2 of FIG. 15, and in this drawing as well, the same reference symbols are used to indicate those portions that are identical or equivalent to those of FIG. 12, and an explanation thereof is omitted.
In FIGS. 15 and 16, the motion compensation unit 80 carries out processing that divides a plurality of pictures that compose image data into prescribed unit regions to predict one or more prediction vectors or direct vectors of each unit region, and also generates a motion compensation predicted image from image data of reference images stored in the frame memory 49.

However, the motion compensation unit 80 differs from the motion compensation unit 50 of FIG. 11 in that a direct vector output from the internal direct vector determination unit 66 or the direct vector calculation unit 65 is selected in accordance with selection information of the direct vector A or the direct vectors B included in vector information output from the variable length decoding unit 41.

A switch 81 of the motion compensation unit 80 selects a direct vector output from the direct vector determination unit 66 and outputs that direct vector to the switch 67 if direct vector selection information included in vector information output from the variable length decoding unit 41 indicates that the direct vector A has been selected, or selects a direct vector output from the direct vector calculation unit 65 and outputs that direct vector to the switch 67 if the direct vector selection information indicates that the direct vectors B have been selected.

The following provides an explanation of operation.

The motion compensation unit 80 divides a plurality of pictures that compose image data into prescribed unit regions to predict one or more prediction vectors or direct vectors of each unit region, and also generates a motion compensation predicted image from image data of reference images stored in the frame memory 49 in the same manner as the motion compensation unit 50 of FIG. 11.

However, differing from the motion compensation unit 50 of FIG. 11, the motion compensation unit 80 selects a direct vector output from the internal direct vector determination unit 66 or the direct vector calculation unit 65 in accordance with selection information of the direct vector A or the direct vectors B included in vector information output from the variable length decoding unit 41.

Namely, when vector information is received from the variable length decoding unit 41, the switch 81 of the motion compensation unit 80 selects a direct vector output from the direct vector determination unit 66 and outputs that direct vector to the switch 67 if direct vector selection information included in that vector information indicates that the direct vector A has been selected, or selects a direct vector output from the direct vector calculation unit 65 and outputs that direct vector to the switch 67 if the direct vector selection information indicates that the direct vectors B have been selected.

As is clear from the previous explanation, according to this second embodiment, since a motion compensation predicted image is generated by selecting the direct vector A or the direct vectors B, the effect is demonstrated of enhancing the possibility of improving prediction efficiency near the edges of the picture.

Furthermore, it goes without saying that various types of encoding units (units of each block targeted for encoding, slice (collections of blocks targeted for encoding) units, picture units or sequence (collections of pictures) units) can be considered for the units that encode the above-mentioned vector information.
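The selection on both sides can be sketched as follows, assuming the signalled selection information is represented as a simple flag (the names and the flag encoding are illustrative assumptions):

```python
def encoder_direct_mode_choice(cost_a, cost_b):
    # Encoder side (sketch): compare the R-D cost of using the filtered set
    # ("A") with that of the full set ("B") and signal the cheaper choice
    # as part of the vector information.
    return "A" if cost_a <= cost_b else "B"

def decoder_select_direct_vectors(selection_flag, vectors_a, vectors_b):
    # Decoder side, switch 81 (sketch): vectors_a is the set filtered by the
    # direct vector determination unit 66 ("direct vector A"); vectors_b is
    # the full set from the direct vector calculation unit 65 ("direct
    # vectors B"). The decoded flag decides which set feeds the switch 67.
    return vectors_a if selection_flag == "A" else vectors_b
```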
As a result of encoding the vector information as one parameter of each of the encoding units described above into the bit stream, direct vector selection results intended by the image encoding device 1 can be conveyed to the image decoding device 2.

Although it was indicated in the previously described first and second embodiments that the direct vector determination unit 34 in the image encoding device 1 excludes a direct vector from vectors targeted for averaging in the case where a unit region having, as a starting point thereof, a pixel location indicated by that direct vector predicted by the direct vector calculation unit 33 includes a region outside the picture, in the case where a unit region having, as a starting point thereof, a pixel location indicated by a direct vector predicted by the direct vector calculation unit 33 includes a region outside the picture, the direct vector determination unit 34 may determine whether or not the unit region having, as a starting point thereof, a pixel location indicated by the direct vector includes a region outside a tolerance region adjacent to the picture, and if the unit region having, as a starting point thereof, a pixel location indicated by the direct vector does not include a region outside that tolerance region, the direct vector may not be excluded from vectors targeted for averaging, while if the unit region having, as a starting point thereof, a pixel location indicated by the direct vector includes a region outside that tolerance region, that direct vector may be excluded from vectors targeted for averaging.

FIG. 17 is an explanatory drawing indicating the contents of processing of the direct vector determination unit 34.

The following provides a detailed explanation of the contents of processing of the direct vector determination unit 34.

A tolerance region (a region adjacent to the picture) is preset in the direct vector determination unit 34 as shown in FIG. 17.

In the case where a unit region having, as a starting point thereof, a pixel location indicated by a direct vector predicted by the direct vector calculation unit 33 includes a region outside the picture, the direct vector determination unit 34 determines whether or not the unit region having, as a starting point thereof, the pixel location indicated by that direct vector includes a region outside the tolerance region.

If the unit region having, as a starting point thereof, the pixel location indicated by that direct vector does not include a region outside the tolerance region as shown in FIG. 17B (if the pixel location indicated by the direct vector is within the tolerance region), the direct vector determination unit 34 outputs that direct vector to the motion prediction unit 27 (or 72) without excluding the direct vector from vectors targeted for averaging.

If the unit region having, as a starting point thereof, the pixel location indicated by that direct vector includes a region outside the tolerance region as shown in FIG. 17C (if the pixel location indicated by the direct vector is outside the tolerance region), the direct vector determination unit 34 excludes that direct vector from vectors targeted for averaging.
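A minimal sketch of this tolerance-region test, assuming integer pixel coordinates and a tolerance of `tol` pixels on every side of the picture (all names are illustrative):

```python
def outside_tolerance_region(block_x, block_y, block_w, block_h,
                             vec_x, vec_y, pic_w, pic_h, tol):
    # The tolerance region is the picture expanded by `tol` pixels on each
    # side; only a referenced unit region that leaves even this expanded
    # area causes the direct vector to be excluded from averaging.
    ref_x = block_x + vec_x
    ref_y = block_y + vec_y
    return (ref_x < -tol or ref_y < -tol or
            ref_x + block_w > pic_w + tol or
            ref_y + block_h > pic_h + tol)
```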
Although it was indicated in the previously described first and second embodiments that the direct vector determination unit 66 in the image decoding device 2 excludes a direct vector from vectors targeted for averaging in the case where a unit region having, as a starting point thereof, a pixel location indicated by that direct vector predicted by the direct vector calculation unit 65 includes a region outside the picture, in the case where a unit region having, as a starting point thereof, a pixel location indicated by a direct vector predicted by the direct vector calculation unit 65 includes a region outside the picture, the direct vector determination unit 66 may determine whether or not the unit region having, as a starting point thereof, a pixel location indicated by the direct vector includes a region outside a tolerance region adjacent to the picture, and if the unit region having, as a starting point thereof, a pixel location indicated by the direct vector does not include a region outside that tolerance region, the direct vector may not be excluded from vectors targeted for averaging, while if the unit region having, as a starting point thereof, a pixel location indicated by the direct vector includes a region outside that tolerance region, that direct vector may be excluded from vectors targeted for averaging.

The following provides a detailed explanation of the contents of processing of the direct vector determination unit 66.

The same tolerance region as that of the direct vector determination unit 34 of the image encoding device 1 is preset in the direct vector determination unit 66.

In the case where a unit region having, as a starting point thereof, a pixel location indicated by a direct vector derived by the direct vector calculation unit 65 includes a region outside the picture, the direct vector determination unit 66 determines whether or not the unit region having, as a starting point thereof, the pixel location indicated by that direct vector includes a region outside the tolerance region.

If the unit region having, as a starting point thereof, the pixel location indicated by that direct vector does not include a region outside the tolerance region as shown in FIG. 17B (if the pixel location indicated by the direct vector is within the tolerance region), the direct vector determination unit 66 outputs that direct vector to the switch 67 (or 81) without excluding the direct vector from vectors targeted for averaging.

If the unit region having, as a starting point thereof, the pixel location indicated by that direct vector includes a region outside the tolerance region as shown in FIG. 17C (if the pixel location indicated by the direct vector is outside the tolerance region), the direct vector determination unit 66 excludes that direct vector from vectors targeted for averaging.

As is clear from the previous explanation, according to this third embodiment, since a configuration is employed in which a direct vector is not excluded from vectors targeted for averaging if a unit region having, as a starting point thereof, a pixel location indicated by that direct vector does not include a region outside a tolerance region, but that direct vector is excluded from vectors targeted for averaging if the unit region having, as a starting point thereof, the pixel location indicated by the direct vector includes a region outside the tolerance region, the effect is demonstrated of being able to enhance the possibility of improving prediction efficiency near the edges of the picture.
In this third embodiment, although the direct vector determination unit 34 of the image encoding device 1 and the direct vector determination unit 66 of the image decoding device 2 are indicated as being preset with the same tolerance region, information indicating the tolerance region set by the direct vector determination unit 34 of the image encoding device 1 may be encoded, and that encoded data may be transmitted to the image decoding device 2 by including it in a bit stream.

As a result, the direct vector determination unit 66 of the image decoding device 2 is able to use the same tolerance region as the tolerance region set in the direct vector determination unit 34 of the image encoding device 1.

Furthermore, it goes without saying that various types of encoding units (for each block targeted for encoding, slice (collections of blocks targeted for encoding) units, picture units or sequence (collection of pictures) units) can be considered for units that encode information indicating a tolerance region. As a result of encoding information indicating a tolerance region as one parameter of each of the encoding units described above and encoding in a bit stream, a tolerance region intended by the image encoding device 1 can be conveyed to the image decoding device 2.

Although it was indicated in the previously described first and second embodiments that the direct vector determination unit 34 in the image encoding device 1 excludes a direct vector from vectors targeted for averaging in the case where a unit region having, as a starting point thereof, a pixel location indicated by that direct vector derived by the direct vector calculation unit 33 includes a region outside the picture, the direct vector determination unit 34 may constitute a motion vector correction unit, and may output a direct vector to the motion prediction unit 27 (or 72) if a unit region having, as a starting point thereof, a pixel location indicated by the direct vector predicted by the direct vector calculation unit 33 does not include a region outside the picture, or may correct a unit region having, as a starting point thereof, a pixel location indicated by that direct vector to a region within the picture and output the direct vector following correction to the motion prediction unit 27 (or 72) if the unit region having, as a starting point thereof, a pixel location indicated by that direct vector includes a region outside the picture.

FIG. 18 is an explanatory drawing indicating the contents of processing of the direct vector determination unit 34. The following provides a detailed explanation of the contents of processing of the direct vector determination unit 34.

The direct vector determination unit 34 determines whether or not a unit region having, as a starting point thereof, a pixel location indicated by a direct vector predicted by the direct vector calculation unit 33 includes a region outside the picture.

The direct vector determination unit 34 outputs a direct vector to the motion prediction unit 27 (or 72) if a unit region having, as a starting point thereof, a pixel location indicated by the direct vector does not include a region outside the picture, in the same manner as the previously described first and second embodiments.
If a unit region having, as a starting point thereof, a pixel location indicated by a direct vector includes a region outside the picture (the case of the direct vector indicating a region outside the picture) as indicated in FIG. 18A, the direct vector determination unit 34 corrects the unit region having, as a starting point thereof, a pixel location indicated by that direct vector to a region within the picture as shown in FIGS. 18B and 18C, and outputs the direct vector after correction to the motion prediction unit 27 (or 72).

Furthermore, FIG. 18B indicates an example of independently correcting each horizontal and vertical component to be within the picture, while FIG. 18C indicates an example of correcting each horizontal and vertical component to be within the picture while maintaining their orientation.

Although it is indicated in the previously described first and second embodiments that the direct vector determination unit 66 in the image decoding device 2 excludes a direct vector from vectors targeted for averaging in the case where a unit region having, as a starting point thereof, a pixel location indicated by that direct vector predicted by the direct vector calculation unit 65 includes a region outside the picture, the direct vector determination unit 66 may constitute a motion vector correction unit, and may output a direct vector to the switch 67 (or 81) if a unit region having, as a starting point thereof, a pixel location indicated by the direct vector predicted by the direct vector calculation unit 65 does not include a region outside the picture, or may correct a unit region having, as a starting point thereof, a pixel location indicated by that direct vector to a region within the picture and output the direct vector following correction to the switch 67 (or 81) if the unit region having, as a starting point thereof, a pixel location indicated by that direct vector includes a region outside the picture.

The following provides a detailed explanation of the contents of processing of the direct vector determination unit 66.

The direct vector determination unit 66 determines whether or not a unit region having, as a starting point thereof, a pixel location indicated by a direct vector predicted by the direct vector calculation unit 65 includes a region outside the picture.

The direct vector determination unit 66 outputs a direct vector to the switch 67 (or 81) if a unit region having, as a starting point thereof, a pixel location indicated by the direct vector does not include a region outside the picture, in the same manner as the previously described first and second embodiments.

If a unit region having, as a starting point thereof, a pixel location indicated by a direct vector includes a region outside the picture (the case of the direct vector indicating a region outside the picture) as indicated in FIG. 18A, the direct vector determination unit 66 corrects the unit region having, as a starting point thereof, a pixel location indicated by that direct vector to a region within the picture as shown in FIGS. 18B and 18C, using the same correction method as the correction method of the direct vector determination unit 34 in the image encoding device 1, and outputs the direct vector after correction to the switch 67 (or 81).
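For illustration, here is a minimal sketch of the two correction methods described above, again in Python with hypothetical names rather than the patent's actual code. `max_dx` and `max_dy` stand in for the largest horizontal and vertical displacements that keep the unit region inside the picture.

```python
def clamp_independently(vx, vy, max_dx, max_dy):
    """FIG. 18B style: clamp each component separately; orientation may change."""
    cx = max(-max_dx, min(vx, max_dx))
    cy = max(-max_dy, min(vy, max_dy))
    return cx, cy

def clamp_preserving_orientation(vx, vy, max_dx, max_dy):
    """FIG. 18C style: scale the whole vector so its direction is maintained."""
    scale = 1.0
    if abs(vx) > max_dx:
        scale = min(scale, max_dx / abs(vx))
    if abs(vy) > max_dy:
        scale = min(scale, max_dy / abs(vy))
    return vx * scale, vy * scale

# Example: a vector overshooting the right picture edge.
print(clamp_independently(20, 5, max_dx=10, max_dy=10))           # (10, 5): direction changes
print(clamp_preserving_orientation(20, 5, max_dx=10, max_dy=10))  # (10.0, 2.5): direction kept
```

The first method is cheaper but can bend the motion direction; the second scales both components by a common factor so the corrected vector points the same way as the original.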
As is clear from the previous explanation, according to this fourth embodiment, since a configuration is employed such that a unit region having, as a starting point thereof, a pixel location indicated by a direct vector is corrected to a region within the picture if the unit region having, as a starting point thereof, a pixel location indicated by that direct vector includes a region outside the picture, the effect is demonstrated of being able to enhance the possibility of improving prediction efficiency near edges of the picture.

In this fourth embodiment, although the direct vector determination unit 34 of the image encoding device 1 and the direct vector determination unit 66 of the image decoding device 2 are indicated as correcting a direct vector by using the same correction method, information indicating the correction method used by the direct vector determination unit 34 of the image encoding device 1 may be encoded, and that encoded data may be transmitted to the image decoding device 2 by including it in a bit stream.

As a result, the direct vector determination unit 66 of the image decoding device 2 is able to use the same correction method as the correction method used by the direct vector determination unit 34 of the image encoding device 1.

Furthermore, it goes without saying that various types of encoding units (for each block targeted for encoding, slice (collections of blocks targeted for encoding) units, picture units or sequence (collection of pictures) units) can be considered for units that encode information indicating the vector correction method described above. As a result of encoding information indicating a vector correction method as one parameter of each of the encoding units described above and encoding in a bit stream, a vector correction method intended by the image encoding device 1 can be conveyed to the image decoding device 2.

Since the image encoding device and image decoding device according to this invention are able to prevent deterioration of image quality in macro blocks along edges of a picture without leading to a decrease in compression ratio, they are suitable for use as, for example, an image encoding device that compresses and encodes digital video signals in the form of image data and outputs image compression-encoded data, or an image decoding device that decodes image compression-encoded data output from an image encoding device and restores the data to digital video signals.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing a connection relationship between an image encoding device and an image decoding device according to a first embodiment of this invention;
FIG. 2 is a block diagram showing an image encoding device 1 in the first embodiment of this invention;
FIG. 3 is a block diagram showing the interior of a motion compensation unit 26 in the image encoding device 1 of FIG. 2;
FIG. 4 is an explanatory drawing indicating the contents of processing of a direct vector calculation unit 33 disclosed in H.264/AVC;
FIG. 5 is an explanatory drawing indicating the contents of processing of the direct vector calculation unit 33 disclosed in H.264/AVC;
FIG. 6 is an explanatory drawing indicating the contents of processing of the direct vector calculation unit 33 disclosed in H.264/AVC;
FIG. 7 is an explanatory drawing indicating the contents of processing of the direct vector calculation unit 33 disclosed in H.264/AVC;
FIG. 8 is an explanatory drawing indicating a case in which a leading end of a direct vector is indicating a region outside a picture;
FIG. 9 is an explanatory drawing indicating a technology referred to as "picture edge expansion" that extends pixels on the edges of a picture to outside the picture;
FIG. 10 is an explanatory drawing indicating a motion compensation predicted image generated by a motion compensation predicted image generation unit 35 by excluding a direct vector indicating a unit region that includes an outside picture region from vectors targeted for averaging;
FIG. 11 is a block diagram showing an image decoding device 2 according to the first embodiment of this invention;
FIG. 12 is a block diagram showing the interior of a motion compensation unit 50 in the image decoding device 2 of FIG. 11;
FIG. 13 is a block diagram showing the image encoding device 1 according to a second embodiment of this invention;
FIG. 14 is a block diagram showing the interior of a motion compensation unit 71 in the image encoding device 1 of FIG. 13;
FIG. 15 is a block diagram showing the image decoding device 2 according to the second embodiment of this invention;
FIG. 16 is a block diagram showing the interior of a motion compensation unit 80 in the image decoding device 2 of FIG. 15;
FIG. 17 is an explanatory drawing indicating the contents of processing of a direct vector determination unit 34; and
FIG. 18 is an explanatory drawing indicating the contents of processing of the direct vector determination unit 34.
Person and human are both terms used to refer to us. While they do indeed refer to the same thing, the usage actually varies based on context. Here's how person and human differ from each other.

A person is a human – a being with a life, soul, and the capability for conscious thought. In other words, a person is a sentient creature. Certain qualities make up the entirety of a person, and these qualities are what define them as such. These qualities vary with location, belief, and culture. Demographics, history, traditions, and many other things contribute to the variables that determine whether an individual is referred to as a person. "Person" is also used to describe a human being in a philosophical manner.

A human is also a person. The word itself, however, is generally used to refer to members of the species Homo sapiens, characterized by bipedalism, an erect posture, and complex brains that allow reasoning and the ability to form societies. "Human" can be said to be the scientific term for describing a human being.
https://whyunlike.com/difference-between-person-and-human/
In July of 2014, I landed at the London Heathrow Airport, rented a standard transmission car, yes standard, and attempted to drive myself in heavy London traffic out to the English countryside to meet friend and colleague, Susan Slocum. Sue and I were interviewing owners of farm shops, which are plentiful in the UK, as part of a multi-year research and outreach project on agritourism. Sue is an associate professor of Tourism and Events Management at George Mason University and one of my former graduate research assistants. Sue and I spent several days interviewing farm shop owners and tourism organizations. I bravely drove the left-hand stick shift car on the left side of the road, without incident but with many close calls, in heavy traffic and on single-track rural roads. Sue, a former UK resident, did the hard part: she contacted the farm shop owners or managers to schedule our interviews.

During our research efforts we discovered that farm shops provide many benefits to farmers and consumers alike. For farmers, market diversification and a year-round revenue stream are highlights, and for consumers, access to local products and direct communication with farmers is the take-away. However, we never imagined that replacing lost markets (restaurants, schools, etc.) for direct market farms during a pandemic would be one of them. But that is exactly what happened. Farm shops, in many varieties and colors, have expanded around the country.

A farm shop is a permanent or semi-permanent structure where farm products from a specific farm or multiple farms, both fresh and processed (such as jams, honey and cheese), are offered for direct sale to consumers. Shops are normally open to the public year-round and may be located on a farm or in nearby towns or cities. Many farm shops feel like elaborate farm stands and may even have the word stand in their name. In Utah, several farm shops were up and running prior to COVID-19, but since the pandemic we have seen others pop up. Why? Access to new direct-to-consumer markets and control. COVID-19 has led to uncertainty regarding market access and future regulations. As growers seek to replace lost markets, they can take control by opening farm shops or stands on their own or in cooperation with other growers. Also, as you are aware, consumer interest in local foods has increased during the pandemic, and farm shops cater well to consumers seeking to buy local in a more "familiar" retail environment (There's a Big Appetite for Farm-to-Consumer Shopping). Farm shops are open daily and their products are promoted to consumers via websites and social media, thus they are very convenient for shoppers (for more information, see social media marketing).

For those of you who may be considering opening a farm shop or working with a current shop, here are a few suggestions.
- Select a good location – The best location for a farm shop is one close to a busy road or intersection, as long as the speed limit isn't too high. Select a site near corners or other small businesses like bakeries. Easy access with plenty of parking space is important, and a building with lots of open space or an open-air section is ideal. The shop must be visible and easy to identify, but any signage should be in accordance with county or city regulations.
- Source products carefully – When possible, the grower should sell their own produce or products. Products provided by others may be priced high, resulting in lower profits.
If outside products need to be sourced, use local sources whenever possible. Local suppliers respond more rapidly and innovatively than their conventional counterparts, especially in situations where there is sudden high demand or other unforeseen circumstances. Also, delivery times are more predictable, and the probability of product recalls is significantly lower. Provide a variety of products to customers, focusing on value-added products such as cheese, jam, jelly, honey, juice, salsa, etc. This broadens product offerings to foods that are ready to eat, and convenience is key! Be aware, however, of codes and regulations regarding these products. Overall, remember that product quality is crucial. Frequent product testing, farm visits and safety inspections will not only result in high-quality products but will earn customer trust.
- Become educated and connected – When building a farm shop business, start small and build up over time. Education is key. Attend conferences, extension programs, DOA workshops, etc. Visit farmers' markets and farms, talk with growers, and invite them to visit. Sourcing items from farmers' markets is a good way to establish relationships with local food artisans, highlight local products and learn more about local preferences. Building relationships with local providers allows farm shop owners to provide input on what products or varieties should be grown or processed in the future. It will also help customers recognize the business as a supporter of the community and local economy. People want to be educated about the food they eat and will have more respect and trust for businesses that are transparent about where their products come from. Allow consumers and farmers to connect with one another. Use websites, social media, and face-to-face meetings to help customers reach out to farmers when they have questions or comments. Don't be afraid to promote! Let your customers know what's going on and why it's important.
- Diversify – Find ways to diversify and set the farm shop apart from other offerings in the community. Selling produce with interesting names and colors is a great way to start. Consider adding a restaurant, café, bakery, or butcher's shop to expand the array of available goods and services. Those elements provide additional incentive for customers and add an element of uniqueness to the farm shop. Be a pick-up point for CSA baskets. This will attract customers to the shop while showing them where they can get additional local foods. Join local labeling programs such as Utah's Own. This allows the community to recognize the shop as part of the local economy. Work with the local chamber of commerce and visitors' bureaus to expand the shop's promotional opportunities. Conduct tasting events, dinners, and tours as opportunities for the community to sample products, and start a club rewards program to encourage repeat business and gather customer feedback.
- Developing a food or agricultural tourism business at https://extension.usu.edu/apec/agribusiness-food/FoodAgritourism
- Extended season marketing opportunities (like farm shops) at https://extension.usu.edu/apec/agribusiness-food/extendedseasonopps
- Farm Shops Extension publication at https://diverseag.org/files-ou/FS6Farm_Shops-4.2format5-14.pdf

That's all for now. If you have a topic you would like to see covered in a future blog, feel free to contact me. Stay healthy!
Kynda Curtis, USU Extension Ag and Food Marketing Specialist [email protected]

Online workshops, courses, webinars, and podcasts:
- Financial Health For Tribal Producers Webinar Series: Financial Health
- Idaho AgBiz Webinars: https://www.uidaho.edu/cals/idaho-agbiz
- Cultivating Success Webinars: https://www.cultivatingsuccess.org/home

Resources:
- USU Extension COVID-19 Resources: https://extension.usu.edu/covid-19/
- Marketing in Motion Blog Posts: https://extension.usu.edu/apec/news/
- UDAF Utah's Own Program: https://www.utahsown.org
- UDAF COVID-19 Resources: https://ag.utah.gov/covid-19-news-and-updates/
- Taxes and Federal Programs: https://ruraltax.org

Disclaimer: This blog is for information purposes only. USU Extension does not endorse any specific product or service mentioned herein.
https://extension.usu.edu/apec/blog/blog-10farmshop
RCA 32V432T. Hi, the TV is 19 months old and has never skipped a beat. Last night I set the sleep timer for 60 mins, then my husband got up (he sleeps with it on), so without thinking I just went ahead and hit the off button on the remote, without adjusting the sleep timer. Tonight, it powers on, the standby light goes off, and sometimes you can just catch the audio coming through when it shuts down.

First thing to try is a reset: unplug the AC cord for a few minutes, hours, or overnight. Let me know if you need more help. T.

In addition to a Sleep Timer, many TVs have an Off Timer, too. Check for other automatic settings and timer settings. If you use the TV as a monitor for a computer, the sleep setting on the computer might affect the operation of the TV.
1. Press the Menu button on the monitor front panel to display the OSD Menu.
2. Scroll down and highlight Management.
3. Press the OK button to select Management.
4. Scroll down and highlight and select Sleep Timer > Set Current Time.
NOTE: You must set the current local time before you reset the time for Sleep Time or On Time. Note that the time is displayed in a 24-hour clock format. For example, 1:15 p.m. is displayed as 13 hours 15 minutes. A power failure or loss of power to the monitor will cause the timer to reset to 00:00. If this occurs, you will need to reset the sleep timer mode.
5. Press the OK button once to enter the adjustment mode for hours.
6. Press the - (Minus) or + (Plus) button to adjust the hour.
7. Press the OK button again to enter the time for minutes.
8. Press the - (Minus) or + (Plus) button to adjust the minutes.
9. Press the OK button to lock in the time chosen.
10. After setting the current time, the highlight automatically skips to Set Sleep Time. Repeat steps 6 through 9 to set Sleep Time.
11. If you do not want to set Sleep Time, press the OK button twice, then select Save and Return to exit the menu.
12. After setting Sleep Time, the highlight automatically skips to Set On Time. Repeat steps 6 through 9 to set On Time.
13. Set the Timer mode to On to activate the Sleep Timer settings.
14. When you are finished, select Save and Return to exit the menu. The monitor then remains off until the next On Time activates or a monitor button is pressed.

It may be a sleep function you could have accidentally set. Try browsing through your menu and find the sleep timer.

How do I set the sleep timer? We got the TV from a friend and it had no manual.

Press and release the TV key. Press and hold the SLEEP key for three seconds. The illuminated component name turns on. Using the number keys, enter the desired time in minutes (from 1-99 minutes). To set the timer to a number under 10 minutes, first press 0 and then the desired number (e.g., 05 for five minutes). The illuminated component name blinks with each key press. Once the second number is entered, the illuminated component name turns off.
Tip: Any key press other than the number keys is ignored. If you don't enter the sleep time within 10 seconds after pressing the SLEEP key, you must start over at step 1. The illuminated component name blinks four times and then turns off, indicating your attempt to program the Sleep Timer has been unsuccessful.
The Sleep Timer is now set. Leave the remote aimed at the TV. Once the Sleep Timer is set, you can continue to use the remote without affecting the Sleep Timer. However, because the timing mechanism for the Sleep Timer is built into the remote itself, the remote must be in TV Mode and pointed at the TV to activate the Sleep Timer.
Note: If the ON·OFF key is pressed, the Sleep Timer is canceled.
Check that the remote's SLEEP button is not on. If it is set on a timer like 30, 60, or 120 mins, turn it all off. Your TV might be set on a timer setup; check your TV menu too.

Try unplugging the TV from all inputs and power and let it sit overnight, then plug everything back in.

Your sleep timer is on. Hit menu, and poke through that and find the sleep timer settings. The sleep timer is something that turns the TV off after an amount of time in which you do not interact with it.

Go into your TV menu and go to SETUP. Check whether the sleep timer is on. This timer has settings like 30, 60, or 120 mins. Turn off the sleep timer and the TV will not auto shut off. Check your remote control; it might have a button that says SLEEP.

Setting the Sleep Timer
The sleep timer allows your LCD TV to automatically turn OFF after a given amount of time. You can set the sleep timer using your remote control or through the onscreen display (OSD).
1. Press the SLEEP button. The sleep timer display appears at the top of your screen.
2. Press the SLEEP button one or more times to select the time you want.
3. Press the EXIT button to hide the sleep timer display. Your sleep timer is now running in the background. The sleep timer will be hidden after 10 seconds.

Turning On the LCD TV: once the power cord is connected, your LCD TV is ready to turn on from the front panel or the remote control. The Status LED on the front turns green.
http://www.fixya.com/support/t178337-rca_32v432t
Meet artist Deepa Koshaley! This vibrant artist creates paintings that are colorful yet serene. Deepa showcases her unique artwork across various galleries in Dallas and beyond, including many large-scale works of art. Deepa's approach to painting is mindful and balanced, and her works highlight her perspective on the landscape of humanity. Read on to learn more about Deepa from our recent interview.

Jacquin: How did you get started creating abstract artwork?
Deepa: I started my artistic career as a traditional landscape painter. From an early age, my favorite thing to do was doodling/sketching, especially during my classes in the corner of notebooks. These doodles were abstracts. My first understanding of abstract creation was in my 1st year of architecture during the basic design class. My professor Vasu, who is a product designer in India, taught principles of basic design, which was the foundation for abstract art education.

Jacquin: How would you describe your style of painting?
Deepa: My abstract paintings are a close-up and meditative view of nature and humanity. My creative focus is non-representational artworks that evoke inner peace and outer self-expression.

Jacquin: What has been your favorite artistic creation thus far?
Deepa: "Red Bloom I" and "Human Landscape" (paintings shown below) – The essence of life is to bloom into our full potential. A flower does not think of competing with the flower next to it, it just blooms. "Red Bloom" is a metaphor for a deeper appreciation for the preciousness of life and to recognize our beauty. The painting "Human Landscape" metaphorically illustrates human evolution and the earth's transformation since the beginning of time. We are not the body, not the mind. Our souls are ancient, and we are pure consciousness ~ an essence of being.

Painting Title: Red Bloom I, Media: (High quality) Acrylic on Canvas, Finish: Very Smooth, Size: 48 (h) x 36 (w) x 1.5 (d) inches
Painting Title: Human Landscape, Media: (High quality) Acrylic on Canvas, Finish: Very Smooth, Size: 48 (h) x 36 (w) x 1.5 (d) inches

Jacquin: Who is your favorite artist right now and why?
Deepa: Joan Miró, Cy Twombly, Helen Frankenthaler, Agnes Martin. When I first came across their work, the distinction was instantaneous. I felt alive, and it was evident that I was witnessing something beyond just a beautiful painting. After reading their philosophy towards art and perspectives, it was clear why I felt a connection.

Jacquin: What would be your dream project to work on?
Deepa: I would love to build a larger-than-life-size sculpture related to my abstract work with the theme of the human landscape in the heart of the city.

Jacquin: How has being an artist brightened your life?
Deepa: I feel fortunate to be a catalyst for love, peace, and harmony through art and words. The process of unexpected results I get through washes of colors and marks helps me in letting go even in real life. I learn life lessons while creating my work, as they are meditative conversations I reflect on the canvas.
https://interiorsbyjacquin.com/artist-spotlight-deepa-koshaley/
A guide dog visited a Lowestoft bingo hall – to meet the people whose generosity paid for his training.

Back in August 2011, The Journal reported on the fund-raising efforts of Simon Waters, who raised more than £6,000 for Guide Dogs for the Blind after walking 10 miles along the town's seafront wearing a blindfold. His efforts – along with the proceeds of charity draws and donations by players at the Beacon Bingo Club in Lowestoft where he works – led to enough money being raised to pay for the first year of training for a guide dog, which was named William after a competition at the club in Battery Green Road.

On Saturday, there was a special treat for the fund-raisers as William made the journey to the club from his home in Essex along with representatives from the Guide Dogs for the Blind. He was the centre of attention as he made his way around the hall with his puppy walker Michele Green.

Addressing the bingo players, and thanking them all for their fantastic support, Mr Waters said: 'William was born last March and he has nearly finished his first year's training with Michele.' Mrs Green echoed his comments and thanked him and the bingo players for doing 'an amazing job'.

She said that with William approaching his first birthday next Thursday, he would soon be ready to play his part in the next generation of guide dogs. She told the fund-raisers: 'William is destined, at the moment, to become a stud dog, which means he will go on to father lots and lots of lovely puppies. He just has to have some medical tests now before this is confirmed, but he has been such a good boy in his training. As he is so outstanding, his breeding stock is just as important. So not only will your monies go to just one generation of guide dogs, hopefully it will be going to many generations to come.'

Speaking afterwards, Mr Waters, 42, of Myloden Road, Lowestoft, said: 'It is lovely to see the end product. We raised £6,030 back in 2011 and another £230 today through a collection and raffle – it's marvellous.
https://www.lowestoftjournal.co.uk/news/guide-dog-makes-thank-you-visit-to-lowestoft-bingo-hall-297952
Family fun: Library crafting programs drawing people together

At the Austin Public Library, families huddled around tables inside the Programming Room on Tuesday night for some festive crafting. Jingle bells, tinsel and pinecones littered the tabletops and hot glue guns squirted globs of adhesive onto the various sweaters brought by the families. From cable knit sweaters to long-sleeved cotton shirts, each piece of clothing received a touch of the Christmas spirit.

Bringing her granddaughters out for a night of fun, Cathy Hemann-Winsky of Austin busied herself with hot gluing a star ornament to the front of her sweatshirt. She giggled alongside Kadince Winsky, 11, Aubree Miles, 9, and Mea Miles, 6, who laughed at their grandmother's creation.

"I'm gonna put big eyes on mine!" Aubree said while twirling around with her pink sweater. It had pom poms and stickers running up and down the sleeves. "Grandma, can we take a picture?"

This was the type of event that Hemann-Winsky felt was lacking in Austin. Although there are kid-friendly events, there seemed to be a shortage of opportunities for pre-teens. "Either they're too old to be a part of the program, or they're too young to join the bigger kids," Hemann-Winsky admitted. "It's a lot of fun attending these events. This is a very kid-friendly library. It's a nice, short little thing we can do together. I love doing this fun stuff with my grandkids."

Although this is the first time that APL hosted an ugly sweater craft night, this isn't their first time bringing in the public for some crafting opportunities. Last time, they created wreaths for the holidays, and there was a Harry Potter themed night where the children got to be sorted into different houses and create potions. "That one was really popular," said Jessica Lind, youth librarian. "We try to make all the events accessible to all ages. We try to get them all together at least once a month."

When hosting a craft night, the library staff pulls up Pinterest on their laptops and iPads so that their groups can get some inspiration for their creations. There was even a time when the staff created pumpkins out of wine corks for Halloween, created wind chimes out of old CDs and made new things out of water bottles. During these crafting events, there's the goal of building and creating art out of things that are recyclable, said Courtney Wyant, adult services librarian. She noted that the overall purpose of hosting these craft nights was to bring the community together and do something that's free and accessible to everyone.

"We want people to socialize and meet each other," Wyant said. "It's definitely a stress reliever for some, and they get a sense of accomplishment from doing something. These kids don't get as much art in school and it's kind of forgotten about. We're trying to fill that need."

The number of participants varies from event to event. Lind shared that during the summer, they'd easily get around 2,500 visitors throughout the season. Some events, such as their gingerbread house building program last year, brought in around 86 people; other times it would be more intimate, with maybe around 10 people attending. "It really depends on what we do," she said. "We like to do things for everyone. It's really amazing to see how creative everyone can get. We host different programs every month. I love seeing the kids get excited and pumped for arts and crafts. They're really amazing and their excitement is contagious."

This feeling seems to be shared by the attendees at the event.
Mea and Aubree ran to get their photos taken in their sweaters. County Administrator Trish Harren also pulled on her ugly sweater made with tinsel and pom poms. She felt especially festive taking photos in front of the library's fireplace. When asked whether she'd wear her sweater to a Mower County Board meeting, Harren laughed.
https://m.austindailyherald.com/2019/11/family-fun-library-crafting-programs-drawing-people-together/
What makes a solar eclipse? Thank you!

2 Answers

Explanation: This happens when the Sun, Moon, and Earth are aligned, in that order. I hope that helps!

See details below...

Explanation: A solar eclipse occurs when the moon moves in a line directly between earth and the sun, casting a shadow on earth. This produces a solar eclipse. This situation occurs during new-moon phases. The moon is eclipsed when it moves within Earth's shadow, producing a lunar eclipse. This situation occurs during full-moon phases. (Source: http://www.fccj.us/gly1001/tests/10Ch21L.htm)

During a total solar eclipse, the moon casts a circular shadow that is never wider than 275 kilometers, about the length of South Carolina. Anyone observing in this region will see the moon slowly block the sun from view and the sky darken. When the eclipse is almost complete, the temperature sharply drops a few degrees. When the eclipse is complete, the dark moon is seen covering the complete solar disk, and only the sun's brilliant white outer atmosphere is visible. The solar disk is completely blocked for seven minutes at the most; this happens because the moon's shadow is so small. Then one edge of the solar disk reappears.

Total solar eclipses are visible only to people in the dark part of the moon's shadow known as the umbra. A partial eclipse is seen by those in the light portion of the shadow, known as the penumbra. A total solar eclipse is a rare event at any location. The next one that will be visible from the United States will take place on August 21, 2017. It will sweep southeast across the country from Oregon to South Carolina.
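A quick back-of-the-envelope check (my addition, using round standard figures rather than anything from the answers above) shows why the moon's shadow is so small: the apparent sizes of the Sun and Moon are almost identical. Using the small-angle formula θ ≈ D/d:

```latex
\theta_{\mathrm{Sun}} \approx \frac{D_{\mathrm{Sun}}}{d_{\mathrm{Sun}}}
 = \frac{1.39\times 10^{6}\ \mathrm{km}}{1.496\times 10^{8}\ \mathrm{km}}
 \approx 0.0093\ \mathrm{rad} \approx 0.53^{\circ},
\qquad
\theta_{\mathrm{Moon}} \approx \frac{D_{\mathrm{Moon}}}{d_{\mathrm{Moon}}}
 = \frac{3.47\times 10^{3}\ \mathrm{km}}{3.84\times 10^{5}\ \mathrm{km}}
 \approx 0.0090\ \mathrm{rad} \approx 0.52^{\circ}
```

Because the two disks match to within a few percent, the tip of the Moon's umbral cone only just reaches Earth's surface, which is why the umbra is at most a few hundred kilometers wide and totality lasts only minutes.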
https://socratic.org/questions/what-makes-a-solar-eclipse-thank-you#557138
Gold prices have posted strong gains in Wednesday's North American session. Currently, the spot price for an ounce of gold is $1348.04, up 1.41% on the day. On the release front, January consumer inflation beat expectations. CPI jumped 0.5%, above the estimate of 0.3%. Core CPI remained steady at 0.3%, edging above the forecast of 0.2%. Consumer spending reports in January were dismal. Retail Sales declined 0.3%, well off the forecast of +0.2%. Core Retail Sales was flat at 0.0%, short of the estimate of 0.5%.

A strong CPI release for January has sent the US dollar lower against the major currencies, and gold has jumped on the bandwagon. Concerns about high inflation were a catalyst for the market sell-off last week, and fears of a resumption in the downward spiral are weighing on the dollar. If investors react negatively and ditch the markets yet again, safe-haven assets like gold will likely be the big winners. Gold prices were down in the first half of February, but gold has recovered these losses after posting strong gains of 2.4% this week.

What about the Federal Reserve? Currently, the Fed is planning three hikes this year, but that could change to four or even five hikes if inflation continues to head upwards and the robust US economy maintains its strong expansion. The new head of the Federal Reserve, Jerome Powell, received a rude welcome from the stock markets as he started his new position last week. Powell sought to send a reassuring message on Tuesday, saying that the Fed is on alert to any risks to financial stability. However, it is clear that the Fed's hand is limited when it comes to stock market moves, and the volatility which we saw last week could resume at any time.

XAU/USD Fundamentals

Wednesday (February 14)
- 8:30 US CPI. Estimate 0.3%. Actual 0.5%
- 8:30 US Core CPI. Estimate 0.2%. Actual 0.3%
- 8:30 US Core Retail Sales. Estimate 0.5%. Actual 0.0%
- 8:30 US Retail Sales. Estimate +0.2%. Actual -0.3%
- 10:00 US Business Inventories. Estimate 0.3%. Actual 0.4%
- 10:30 US Crude Oil Inventories. Estimate 2.8M. Actual 1.8M

Thursday (February 15)
- 8:30 US PPI. Estimate 0.4%
- 8:30 US Empire State Manufacturing Index. Estimate 17.7
- 8:30 US Philly Fed Manufacturing Index. Estimate 21.5
- 8:30 US Unemployment Claims. Estimate 229K

*All release times are EST
*Key events are in bold

XAU/USD for Wednesday, February 14, 2018
XAU/USD February 14 at 12:25 EST
Open: 1329.36 High: 1349.44 Low: 1318.00 Close: 1348.04

XAU/USD Technical

| S3 | S2 | S1 | R1 | R2 | R3 |
| --- | --- | --- | --- | --- | --- |
| 1285 | 1307 | 1337 | 1375 | 1416 | 1433 |

- XAU/USD posted slight gains in the Asian session but gave up most of these gains in European trade. The pair has posted sharp gains in North American trade
- 1337 has switched to support following strong gains by gold on Wednesday
- 1375 is the next resistance line
- Current range: 1337 to 1375

Further levels in both directions:
- Below: 1337, 1307, 1285 and 1260
- Above: 1375, 1416 and 1433

OANDA's Open Positions Ratio

XAU/USD ratio is showing little movement in the Wednesday session. Currently, long and short positions are almost evenly split. This is indicative of a lack of trader bias as to what direction XAU/USD takes next.

This article is for general information purposes only. It is not investment advice or a solicitation to buy or sell securities.
Opinions are the author's, and not necessarily those of OANDA Corporation or any of its affiliates, subsidiaries, officers or directors. Leveraged trading is high risk and not suitable for all. You could lose all of your deposited funds.
https://www.marketpulse.com/20180214/gold-jumps-as-strong-cpi-and-weak-retail-sales-spook-markets/
3021 Carleton St is a multi-family home in San Diego, CA 92106. This 1,418 square foot multi-family home sits on a 5,000 square foot lot and features 8 bedrooms and 14 bathrooms. This property was built in 1925 and last sold for $450,000. Nearby schools include Cabrillo Elementary School, Dana Elementary School and High Tech High International. The closest grocery stores are Stars & Stripes Mart, Ralphs and Neighborhood Market. Nearby coffee shops include Technology Outfitters, Point Loma Coffee and Northside Shack. Nearby restaurants include Sushi Lounge Point Loma, La Perla #3 and Sushi Lounge. 3021 Carleton St is near Shoreline Park, Point Loma Community Park and Point Loma Nazarene University. There are minimal bike lanes and the terrain is flat as a pancake. 3021 Carleton St is bikeable; there is some bike infrastructure. This address can also be written as 3021 Carleton Street, San Diego, California 92106.
https://www.acropolisdev.com/copy-of-1928-howard-ave
With a Terrible Fate was honored to present a panel at PAX Australia 2016 entitled "Press X to Scream: Horror Storytelling in Video Games." In the months since our presentation, we've been publishing our work from the panel in argument form, for the benefit of those viewers who were unable to attend. Now that all of the PAX Aus content has been published, we've aggregated it all in one place so that you can experience our entire presentation in written form.

Nudgy Controls, Part II
-by Nathan Randall, Featured Author.

Introduction

In Part I of this series, I discussed some examples of types of games that benefit from the lack of what I've termed "nudges." A nudge is an instance in which some player input X that typically yields output Y instead yields output Z, where Y would potentially undermine narrative consistency and Z preserves it. For clarification on this term's formal definition, I would suggest reading the introduction to Part I before reading this article. And I would definitely suggest reading Part I before reading this article if you have yet to do so, as this article will assume knowledge of the ideas covered in Part I.

In Part II I will discuss games that have narratives that benefit from nudgy gameplay. There are two principal ways to think about how a game's narrative may incorporate nudges. First, it may incorporate nudges that help the player, allowing them to perform feats that are potentially outside of their skillset without the helpful nudge. I will term these sorts of nudges "player aids." Second, a game may incorporate nudges that cause the player to perform worse than they would on their own. I will term these sorts of nudges "player hindrances." Importantly, a player hindrance is not simply a lack of a nudge. It is an active change in output from what the player expects that makes the player perform worse. It is not like the examples of Banjo Kazooie or Dark Souls given in the previous article, in which the player likely fails frequently, but exclusively as a result of their own actions rather than the corrective measures of the game engine. A nudge can be either a player aid or a player hindrance. I'll start with a discussion of games with player aids and then move on to a discussion of games with player hindrances.

Games with Player Aids

Player aids exist to make certain potentially difficult aspects or portions of a game easier for the player to accomplish. They are most effective when a task that might be difficult for the average player is not difficult for the avatar the player is controlling. The player aid turns this task into something trivial to accomplish, maintaining the narrative consistency of a game by continually establishing the competency of the character. Many games have done this in recent years, notably the Batman Arkham games as well as the Assassin's Creed games, so many readers are likely familiar with the gameplay I'll be describing. I will go over two examples of player aids, and then discuss an example of something that potentially looks like a player aid, but is not.

The first example of a player aid hearkens back to the introduction of Part I, discussing the antics of bridge-crossing between Banjo Kazooie and Assassin's Creed. I'd like to take a moment to look at a related set of circumstances in Assassin's Creed: whenever the player is making Altair jump off of a building.
Usually, the city of an Assassin's Creed game is such that there is a convenient building to jump onto, or an even more convenient cart full of hay to dive into (and somehow stay completely uninjured, but we'll ignore that complaint for now). For the sake of example, let's imagine that the only safe landing space when jumping off a building is one cart full of hay on the ground. If the player runs directly toward the cart, Altair will reliably jump off of the building and land in the cart. However, if the player misses the mark slightly, Altair will jump off of the building and somehow steer his course, mid-fall, toward the cart, even though by the laws of physics in the real world he should have missed and landed with a nice splat on the hard ground. Each of these instances in which the player misses the path toward the cart of hay is a player aid: an enforcement on the part of the game mechanics of Altair's status as an expert assassin who could not have made such silly mistakes—otherwise, he would have been dead long ago.

One will note that the pattern I described in the previous paragraph holds for an uncooperative player as well as for a less-than-competent player. If the player intentionally attempts to miss the safe landing, the game's engine corrects the player's actions to be more narratively consistent. I have personally attempted to cause disasters in Assassin's Creed, and can note from experience that one must actively work to cause harm to Altair, as the game liberally aids an uncooperative player toward a safer output than the one she was attempting to incur. In this case, input X is forcibly shifted from output Y, the output in which Altair is hurt, to output Z, in which Altair is not injured, even though the player did not want this to occur.

Another example of a player aid is seen frequently across shooters on consoles: aim assist, which is any instance of a game engine helping the player to shoot at enemies, rather than shooting into thin air. While aim assist often exists simply for the purposes of making multiplayer shooting games balanced across skill levels, or just making a shooter game more approachable for beginners, aim assist (or the lack thereof) often serves an important purpose in narrative consistency as well.

To see how aim assist can act as a player aid, first note that it fits the mechanical model described in Part I. The player can try to move her targeting in any particular direction, and when an enemy target is not on screen, the engine consistently moves the targeting in the direction of the player's input. However, when an enemy target is onscreen, the game engine aids the player by making an output that differs from the direction of the player's input, so as to make the player aim at the enemy target. In this way, in some circumstances input X, which often yields output Y, yields output Z instead.

What we need now to see how aim assist can be a player aid is motivation for why aim assist may preserve narrative consistency. Rather than point out a particular game for which this is the case, I will construct a category of games in which aim assist preserves narrative consistency. Imagine any game in which the protagonist is a well trained, expert marksman. For any game in which this is the case, aim assist will preserve narrative consistency, because expert marksmen rarely, if ever, miss. Aim assist works to prevent, to a degree, an incompetent or uncooperative player from undermining the expert status of the marksman.
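To make the input-to-output shift concrete, here is a minimal sketch of aim assist in Python. It is my own illustration rather than any particular game's code, and the names (apply_aim_assist, strength, and so on) are hypothetical: when no target is on screen, input X maps to its usual output Y; when a target appears, the engine nudges the result toward output Z.

```python
def apply_aim_assist(reticle, stick_input, target=None, strength=0.5):
    """Return the new reticle position for one frame of player input."""
    # Output Y: the unassisted result of the player's input.
    unassisted = (reticle[0] + stick_input[0], reticle[1] + stick_input[1])
    if target is None:
        return unassisted  # no enemy on screen: input X yields output Y
    # Output Z: pull the unassisted result part of the way toward the target.
    dx, dy = target[0] - unassisted[0], target[1] - unassisted[1]
    return (unassisted[0] + strength * dx, unassisted[1] + strength * dy)

# Example: the same input yields different outputs depending on context.
print(apply_aim_assist((0, 0), (1, 0)))                 # (1, 0): output Y
print(apply_aim_assist((0, 0), (1, 0), target=(5, 5)))  # (3.0, 2.5): nudged output Z
```

A real implementation would presumably scale strength with distance and stick deflection, but the structural point is the same: the remapping activates only in contexts where output Y would undermine the fiction of the expert marksman.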
In contrast, if a game features a protagonist with little-to-no training with guns, it would not make sense narratively to include aim assist. Aim assist would actually make the protagonist too competent, and would thereby undermine narrative consistency.

To further understand what player aids are, it will help to see an example of something that one might initially think is a player aid, but actually is not. Many games with action-filled cutscenes, such as Resident Evil 4, Uncharted, and even Final Fantasy XIII-2, have sections that demand user input in the form of action commands. These are sections of gameplay in which the player acts by pressing a button in response to a visual input on-screen. In response to a single button press, a player may run up the arm of a goliath while dodging bullets, do a backflip over an Indiana-Jones-style boulder rolling down a hill, or deliver a finishing blow to an enemy. These sections are usually designed to allow for player involvement during sections of gameplay in which the actions being performed by the protagonist are too actiony and cinematic for normal gameplay.

Initially these seem like they may be player aids, in the sense that the game engine is making it almost atrociously easy for the player to perform incredible feats. However, cutscenes with action commands do not thereby contain player aids, because these sections always have one specific output for the player's input. If a player presses 'B' in response to some prompt, for instance, this button press is mapped to a specific output; there is no other potential output that might occur. Because of the one-to-one mapping of player input to game output, there is no nudge taking place. A nudge requires a shifting of output that is not occurring in this case. Simply making some complicated avatar action easier for a player to accomplish is not equivalent to a player aid. A player aid fundamentally changes how a player controls her avatar by shifting the output of some input to something that better fits the narrative than the usual output. To use an analogy, one could think of player aids as a proofreading system akin to autocorrect on a smartphone. A game with player aids corrects the player's output to what is more correct for the story, rather than simply making it easier to give the correct input to yield said output.

Games with Player Hindrances

A player hindrance exists to disrupt a player's actions, making simpler tasks more difficult to complete. A game may include a player hindrance to show that a character has difficulty with or is unable to do something, regardless of player ability. They are most effective when the player is controlling a character who is in some way less able than some standard (as defined by the game), regardless of player ability. There is a variety of potential reasons for the gap in ability, usually having to do with the current bodily status of the avatar—in particular, when a character is inebriated, in some way physically injured, or close to death. The difficulty in diagnosing a player hindrance, then, comes in correctly identifying what standard it is that the character is failing to live up to. I will go over two examples of a player hindrance, both from NieR: Automata, in which the standard being compared to is the normal functioning state of the avatar. Then I will go over one example from Resident Evil 4 that is more difficult to diagnose.
Finally, I’ll discuss one crucial example of a situation that initially appears to be a player hindrance, but actually is not. At several points in NieR: Automata (a game with multiple avatars), the player’s avatar, an android, is hacked, EMP’d, or injected with a computer virus. When these events occur, various capacities of the avatar get removed, from the ability to attack, to the ability to jump, to the ability to see shapes with edge detection. While there are several instances of this throughout the game, I will focus on one in particular: when 2B, one of the avatars in the game, is infected by a virus that is threatening to control her entirely, leaving her unable to operate normally, on the verge of death. Thus player hindrances are warranted in order to make clear that 2B is no longer able to control her own body sufficiently, regardless of the actions of the player. In particular, when attempting to walk in a straight line, 2B will suddenly stop in her tracks, and sometimes when attempting to stop, 2B instead just keeps running forward. In this way, the player’s usual input can yield one of two outputs, either stopping or continuing moving forward, in a way that is not predictable to the player. The simple task of moving from one spot on the map to another becomes significantly more difficult, regardless of player ability, and so we can say that this section of the game contains player hindrances so as to preserve the narrative of 2B losing control of her body. Again in NieR: Automata, the avatar is at times a robot with only very limited maneuverability, in contrast to the usual android avatar, who is very agile. The agency of the robot is much less than that of the android, evidenced by the robots’ slow movement speed, simple attack patterns, and a camera angle close to the robot that doesn’t allow for much peripheral vision. While the player is “hindered” in that she is less able to act through the avatar than before, these are not player hindrances: they are simply instances of the player being given fewer options, or simply fewer effective options, in accomplishing any particular task. They are akin to an avatar getting into a car: the control scheme and abilities of the avatar change, but that does not constitute a nudge in the gameplay. Changes in control scheme are not instances of player hindrance. One particular way in which the player will be hindered by the gameplay when playing as the robot is when attempting to carry a bucket of oil. Usually, the robot can walk over pipes on the ground without falling over, but this action causes the robot to fall over when carrying a bucket full of oil on its head. In this way the output for the player’s input has shifted, meeting the first requirement to call this gameplay a player hindrance. Initially, the shifting output is surprising for players, who do not expect carrying a bucket of oil to be sufficient reason for tripping and falling over a pipe. But, the gameplay reinforces the narrative conceit that many of the robots are weak and relatively incapable individually. In this way the shift in output is narratively impactful: it shows that carrying a bucket of oil is a sufficient hindrance for the robot that even skilled player inputs cannot lead to success at walking over a pipe. The robot’s status as a pathetic being is at least maintained, if not more forcefully asserted, by this moment. 
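As an illustration of the structure of a player hindrance, the virus sequence can be thought of as a probabilistic remapping of the player's movement input. This is my own sketch in Python, with hypothetical names and numbers, not code from NieR: Automata.

```python
import random

def infected_move(requested_speed, infection_level=0.35, rng=random.random):
    """Map the player's requested movement to the avatar's actual movement."""
    if rng() < infection_level:
        # The nudge: output Z instead of the expected output Y.
        return 0.0 if requested_speed > 0 else 1.0  # stop mid-stride, or lurch forward
    return requested_speed  # the expected output Y

# Example: identical inputs, diverging outputs.
random.seed(42)  # deterministic for illustration
print([infected_move(1.0) for _ in range(8)])  # occasional involuntary stops
```

However skillfully the player times her inputs, some fraction of them are remapped, which is exactly what distinguishes a hindrance from mere difficulty.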
In both of the examples given above, the avatar is not able to operate at their usual standard, in the case of 2B because of her near-death state, and in the case of the robot because of carrying a bucket on its head. But the "standard" that a character is not living up to does not actually have to be inherent to the character themselves. To see this, let's consider another example.

Those who have played Resident Evil 4 may remember that the protagonist Leon Kennedy's aim with a gun is often not great. When the player pulls out a firearm, even when giving no input, the location that Leon is aiming can move in any direction: up, down, left, right, and any diagonal mixture of these. So one can see that the first part of the definition of a nudge has been met: for a given input (even no input at all), any of many directional movements of the gun is possible.

There are three potential ways in which this gameplay could maintain narrative consistency. One might initially think that perhaps Leon is not trained in using a firearm, and thus it would not make sense for him to have rock-steady aim. But this theory does not seem correct, since Leon was trained first as a police officer and then as a special forces agent. So his aim should in theory be very good. One might then be tempted to think that the explanation for his terrible aim is the frightening situation that he is in, fighting for his life against parasitically controlled people and monstrosities wielding chainsaws. But again, this theory isn't coherent plotwise, as Leon must have been trained to manage his fear in combat situations as part of his training as a special forces agent.

Many players do not consider the third potential reason for Leon's terrible aim, which I will explain in the following paragraph; as a result, these people believe that Leon must be either a terrible shot or a coward. The lack of explanation for Leon's terrible aim has plagued the impression that people have of him since the game's release. Many people explain the existence of the nudge as being indicative of Leon's actual incompetence, even though his attitude and demeanor appear competent. I recognize this as a weakness for the game: it's easier to embrace the idea that Leon is incompetent than to recognize the larger theme that Leon's shaky hand speaks to.

In the Resident Evil series as a whole, there is an idea that, in order to improve humanity and win wars, one must create biological enhancements for people as well as biological weapons. Many of the game's villains describe normal humans as inept and/or weak. Leon's shaky hand speaks directly to this theme, and grounds the player in the body of a human person (albeit a very well trained human person), who is subject to imperfections and up against biologically enhanced enemies. The fact that Leon's aim is bad maintains the consistency of the idea that Leon is physically inferior in various ways to his enemies, and only stays alive through clever use of weapons, supplies, and his own smarts. The gameplay has less to do with Leon as a person, and speaks more to the world in which he is embedded. The standard that Leon does not live up to ends up being the standard of the ideal military combatant, which in the world of Resident Evil must be biologically mutated/enhanced.
One may worry that this analysis is problematic in that presumably every character in a story has uncountably many arbitrary standards to live up to, and since these standards don’t all align, the character must be failing to meet at least one of these standards. In this way it would appear that all gameplay should be instances of player hindrances. But this is clearly not the case, since intuition tells us that most gameplay is not a player hindrance. This is where narrative consistency comes into play. The narrative should define the specific standard out of the uncountably many out there that the character is not meeting, so as to justify the use of a player hindrance. In the case of Resident Evil 4, this standard is created through dialogue with a character named Lord Saddler in particular. At one point Saddler shoots down a helicopter arriving to rescue Leon and says, “Don’t tell me you’ve never swatted a bothersome fly! In essence, it’s the same thing… When you’ve acquired this power, you too will understand.” Through this line, Saddler communicates to Leon that humans are no better than insects, and that there is a power greater than humanity out there to subscribe to. Leon does not meet the standard of this greater power. Leon’s shaky hand keeps this narrative consistent to make it believable that a power greater than humans—greater than Leon—could conceivably exist out there. As evidenced by the example of Resident Evil 4, player hindrances can be tricky to diagnose, for it isn’t always clear whether there is a standard within a narrative that an avatar is failing to meet. Further, player hindrances are uncommon: outside of characters who are in some way gravely injured, intoxicated, afraid, or simply incompetent, it is difficult to imagine when a player hindrance might be used. This is especially true since players tend to find player hindrances frustrating, and so developers have a tendency not to design them, as evidenced by the number of players who bitterly complained about Leon’s aiming in Resident Evil 4 and by the subsequent removal of this feature from the studio’s future games. Now that we’ve considered some games that incorporate player hindrances, let’s nail down exactly what player hindrances are by considering a game that initially might appear to be one in which player hindrances are warranted, but actually is not. One may be tempted to think that Octodad, the example from Part I of this series, is a game that would benefit from player hindrances. As a reminder to the reader, Octodad is a game about an octopus masquerading as a normal human suburban father and somehow succeeding. The game has intentionally very difficult controls, so as to put the player in the shoes of the octopus. The player’s experience navigating the difficult controls mirrors that of an octopus trying with only minimal success to be a human father. However, there is a crucial reason that Octodad does not fit in the schema of games that benefit from player hindrances. The games with player hindrances discussed above all drive home that the avatar is unable to perform some particular action regardless of the input of the player. In the case of Octodad, however, a key part of the narrative is that somehow the octopus manages to successfully act in the role of the human father, even though there are numerous physical difficulties present in doing so.
Unlike the example of 2B given above, the octopus father actually does manage to accomplish his goals so long as the player succeeds, even with all of the obvious obstacles in his way. The intrigue comes from the hilarious attempt of the player to succeed at being a normal human father even with the intentionally difficult controls. As mentioned in Part I, to introduce nudges into this gameplay would take the player out of the shoes of the octopus. Like the octopus, the player must fail of their own merit, rather than being forced to fail by a player hindrance. If the player were forced to fail, the nature of the story would be very different.
A Non-Obvious Similarity Between Player Aids and Player Hindrances
The reader may notice an apparent discrepancy between player hindrances and player aids. It initially appears as though player hindrances are always relative to some standard, whereas player aids are more “absolute” in that they do not seem to be tied to any particular standard. This is actually not the case. Both player aids and player hindrances are relative to standards. But with player aids there is not much need to specify the standard in question, since it is relatively easy for most people to recognize an avatar with superhuman capabilities (notice the implicit standard of “human” in the word “superhuman”). In contrast, in order to understand a player hindrance, especially one similar to the Resident Evil 4 example, where the standard is something the character ought to meet, it tends to be necessary to more explicitly identify the standard. So while identifying a standard seems to be less pertinent in analyzing a player aid than a player hindrance, the difference does not arise out of the theoretical grounding of these terms, but rather just the process of analysis.
Conclusion
In Part I and Part II of this series, we’ve defined nudgy controls, considered games that importantly do not use nudges, and considered how some games use nudges in one of two forms, player aids and player hindrances. In Part III, we will explore how this paradigm of game controls allows us to better understand the challenging control scheme of The Last Guardian.
Nathan Randall is a featured author at With a Terrible Fate. Check out his bio to learn more.
Thank you to my good friend Luke Wellington for the suggestion of this term as it applies to helpful nudges, as well as for providing criticism of my first article, which led to its theoretical grounding.
As an aside, from a game design perspective, this particular choice is designed to be frustrating. The designers know that the player has no way within the game itself of knowing that the robot will trip in these contexts. When the player takes these actions to save time (as the environment is set up in a way that encourages these actions to make traversal faster), the player will spill the oil and waste time. This sort of design decision is frustrating for players, and many developers avoid it so as to keep their players from quitting the game. The designers of NieR: Automata likely designed this section with the goal of frustrating the player in mind, so as to put the player in the shoes of the robot.
Thanks to Brendan Gallagher for pointing out that this analysis is not canonical or based on the author’s intent. My analysis is agnostic to author intent, and with that disclaimer the argument presented should hold.
The Tragic Irony of Final Fantasy XIII-2
Since the beginning of With a Terrible Fate, I’ve made passing comments about how deeply the storytelling of the Final Fantasy XIII trilogy offended my sensibilities, both as a player and an analyst of video games. On the first day of my three months analyzing Majora’s Mask, I discussed the Zelda game’s value by showing how it succeeded where Lightning Returns failed; when I discussed my fears about Square Enix dividing the Final Fantasy VII remake into multiple games, I cited the weak episodic storytelling of the XIII saga as prima facie reasons to worry about Square’s ability to tell one story across multiple games. Yet despite constantly using the XIII trilogy as fodder for broader critiques, I have never yet devoted an article to tackling the problems of the series head-on. Well, with today at last marking the release of Final Fantasy XV, I found it a fitting occasion to turn my full attention to the Final Fantasy XIII trilogy, as something of a personal reflection on why I was so let down by it. I do view the trilogy as a fantastic failure in storytelling, but the undertone of this critique is the quiet hope that Square learned its lesson and remembered how to tell stories. This, I think, is the core issue to keep in mind as FFXV finally enters the universe of game criticism in the coming weeks: remember that FFXIII also “looked pretty” and had a decent enough battle system; its colossal failure was one of storytelling, and I believe that storytelling is the measure by which FFXV will stand as a masterpiece or fall as an epic waste of time and resources. Sadly, I could probably spend as long picking apart the FFXIII trilogy’s problems as I spent analyzing Majora’s Mask (but don’t challenge me on that–it wouldn’t be fun for anyone). So today, I’m just going to focus on Final Fantasy XIII-2. I’ve long thought that, of the three games in the trilogy, FFXIII-2 was the one with the most redeeming features and the greatest narrative potential. The problem is that FFXIII-2 is, in a surprising and sad sense, a very poignant story trapped inside of a very poorly composed story. The project of this article is to explain what I mean by that claim; in particular, I want to show you how the very structure of Final Fantasy XIII-2’s universe renders its narrative shortcomings tragically ironic, perhaps even in a way that can give disappointed players a new appreciation for a game that fails in an almost beautiful way. I’ll first argue that, sacrilegious though it may sound to say so, FFXIII-2 was poised to be the spiritual successor of the classic Chrono Trigger. After that, I’ll show how the overall framing of FFXIII-2’s story destroyed what initial potential the game had–in fact, I’ll argue that it suffers from failures similar to those of Assassin’s Creed III, but suffers from those failures to an even greater extent than ACIII does. Lastly, I’ll combine these two strands of analysis to show how the game becomes a tragically ironic narrative failure. In the end, we’ll walk away with some lessons in how stories can fail–and, hopefully, how stories can succeed.
Not a Hallway Anymore: Temporal Overworlds
One of the most common criticisms of the first entry in the FFXIII trilogy–named simply Final Fantasy XIII–was that its world and story were overly linear, meaning that the game consisted of a single path from the beginning to the end of its narrative with very little by way of exploration or divergence from that path.
One of JonTron’s most popular videos, criticizing precisely this aspect of the game, bore the fitting title “Final Hallway XIII” in reference to the game’s severe linearity. So, you might expect that the developers, in crafting a sequel to FFXIII, might compensate for this aspect of the original game by making the sequel substantially less linear, with a variety of different paths and narrative outcomes to explore. And indeed, less linearity is exactly what we see in FFXIII-2; in fact, the structure of the game’s world and narrative is radically non-linear. What I mean by ‘radically non-linear’ is that, where the worlds of most games tend to be spatially organized, the world of FFXIII-2, at its highest level, is actually structured in terms of time. The player’s main interface with the game is the Historia Crux, a metaphysical space that allows them to access various moments across time–some of which occur in alternate timelines. The Historia Crux is analogous to the ‘world map’, or ‘overworld’, of many other games: the global space that contains all of the various locations to which the player can travel over the course of a game’s narrative. Yet instead of being a broad swath of space, the Historia Crux is a broad swath of time: we could justly call it a temporal overworld in the sense that it fundamentally structures the game’s narrative and locations based on time rather than on space. One might even say that the story of FFXIII-2 is about linearity and non-linearity in narrative. The Historia Crux is made possible by a variety of paradoxes that corrupt time with impossible events following the end of FFXIII’s narrative, when the goddess Etro intervened to save the player’s party of characters, thereby distorting the flow of history. One way of viewing the goal of FFXIII-2, then, is to travel through time resolving these paradoxes, trying to restore order to the timeline. One might actually see this as a clever response on the part of Square to the linearity criticisms about FFXIII: by resolving paradoxes in FFXIII-2, the player is able to travel to a variety of potential timelines and witness several paradoxical outcomes to the game’s history–yet all of this is done in service of restoring order and linearity to the storyline, ultimately reaching the game’s singular, canonical ending. It’s easy to interpret this as a metaphor for the tension in games between the need to present multiple possibilities on the one hand, and the need to tell a coherent story on the other: for players’ choices to matter in game narrative, multiple outcomes to events must be possible, and yet this increasing variability in the game seems to cut against the grain of a well-articulated story with fixed, carefully arranged events. So far, so interesting. While I haven’t yet said much at all about the particular content of FFXIII-2’s story, the form of its world certainly seems like an interesting basis for telling a tale that plays on the special features and constraints of video games as a medium. And it’s worth noting at this juncture that this isn’t a radically new idea: in fact, it picks up on some of the central mechanics and themes of a much older game of Square’s: Chrono Trigger. Though it wasn’t structured around paradoxes, Chrono Trigger did gain fame for its time-travel narrative structure, complete with a wide variety of potential game outcomes depending on choices the player made, when the game’s ultimate enemy (Lavos) was defeated, and so on.
Released in 1995, the game was ahead of its time–no pun intended–in the way it built a robust game narrative out of multiple possibilities and timelines for the player to explore. This is the tradition that FFXIII-2 followed; you can even see echoes of the time-hopping interface of Chrono Trigger’s time machine, the Epoch, in the design of the Historia Crux. But FFXIII-2 goes beyond merely elaborating the structure of Chrono Trigger: in the details of its story–or rather, one of its storylines–it makes the game’s time-based narrative deeply poignant in a surprising way. The central antagonist of the game is Caius Ballad, a man who has been made immortal by being endowed with the heart of the goddess Etro–the Heart of Chaos. He is the designated guardian of Yeul, a Seeress with a double-edged gift: the young girl can see the future, but her lifespan shortens each time she does so, causing her to die young, only to be reincarnated thereafter. Thus the immortal Caius, knowledgeable of all time thanks to Yeul’s visions, has also had to watch countless Yeuls die in his arms, “carving their pain on his heart” every time. Caius’ mission in the game is to kill the goddess Etro, from whom time and history flow, in order to end time itself: he wants to do this only in order to end Yeul’s suffering by putting a stop to the cycle of her dying by degrees every time she sees the future. On the other hand, we have the protagonist Noel: one of the player’s two characters, who gets wrapped up in a quest to change the future and resolve the timeline. Growing up, he knew both Caius and one incarnation of Yeul; he refused to become Yeul’s guardian when he learned that he had to kill Caius in order to do so. As he travels throughout time, he clashes with Caius and meets numerous other incarnations of Yeul; thus he comes to understand both the fate of Yeul and the pain endured by Caius as Yeul’s companion and protector. In the game’s final battle, Noel confronts Caius and challenges his views about Yeul: though Caius believes Yeul to have been cursed by Etro to die and be reborn countless times, always living a short life, Noel tells Caius that he knows Yeul wanted to come back because she loved Caius and wanted to be with him, time and again. The closer you look at the story of Noel, Caius, and Yeul in relation to the overall architecture of FFXIII-2’s narrative and world, the more poignant the story becomes. The very act of the player and Noel progressing through the story and constantly changing the future causes Yeul to have more visions, thereby shortening her life and killing her more quickly; Caius, the game’s final villain, wants Noel to be strong enough to kill him so that, by Caius dying, Etro will die too (since his heart is her heart) and Yeul will be free from seeing history. And as Noel continues in his journey, he comes to understand both Caius and Yeul, all the while unknowingly unwinding the coil of fate to the point where he is strong enough to kill Caius, and Caius forces him to do so. And on top of all this, perhaps most impressively, this narrative perfectly mirrors the act of playing the game: as the player explores and exhausts all the game’s narrative possibilities, she becomes more invested in and knowledgeable about the characters, all the while progressing the story to the point where the game reaches its conclusion, effectively ending the timeline of the game’s world and terminating the player’s interaction with the various timelines.
This is a story shockingly rich with layered conceptions of time, sympathy, pathos, and the tension between possibility and fate. I started out this article by claiming that FFXIII-2 was a game with tragically ironic narrative shortcomings, but thus far I seem to have been describing an incisive, acutely self-aware game with a moving narrative. So where’s the problem? Well, you might have noticed that I said above that Noel is one of the player’s two characters–and it’s the other one of these characters that makes trouble for the game.
Tragedy and Time
In a nutshell, the problem for Final Fantasy XIII-2 is that the story I just related to you above is relegated to the status of a sub-plot: Noel and his cohort are effectively supporting characters in service of the player’s other controllable character, Serah Farron. The game is principally conveyed through her perspective, and her goal–the primary impetus for the game’s overall narrative–is to effectively undo the world and story of Noel, Yeul, and Caius. Serah is the sister of Lightning, who was a major character in FFXIII and the primary protagonist (and only player character) of Lightning Returns, the last entry in the trilogy. She is engaged to Snow, another key character from the first game who gets downgraded to little more than “Serah’s fiancé” in FFXIII-2 and Lightning Returns. The overarching narrative of FFXIII-2 is that, as the time paradoxes began (following the events of FFXIII), Lightning was effectively erased from history, trapped in Valhalla, the realm where the goddess Etro dwells beyond time. Serah is the only one who remembers Lightning’s presence after the events of XIII, due to the paradoxes; Lightning, from Valhalla, sends Noel to join Serah on a journey to fix time, along with Mog, a Moogle who guides Noel and Serah through the world and time. Personally, Serah doesn’t strike me as a very interesting character–she seems to, for most of the game, have a generally bad time in the style of Sandra Bullock in Gravity, and to be generally two-dimensional besides this–but it’s not especially insightful to critique a character by saying she isn’t one’s personal cup of tea. I think the more interesting problem with Serah is actually much deeper and harder to forgive than anything like her likability: the problem is that Serah’s epistemic perspective is directed outside of the game’s universe. The entire thrust of Serah’s storyline is that she remembers her sister when no one else does, and wants to restore time to the way she remembers it; in other words, she remembers the events of Final Fantasy XIII, and is trying to reestablish them in a world that is radically different. (Note, as an aside, that this is one of the reasons why it’s so challenging to make sense of the series’ overall consistency: the very premise of time paradoxes in FFXIII-2 effectively undoes many narratively central elements of FFXIII, and similar anti-plot devices bridge the gap between FFXIII-2 and Lightning Returns.) So the primary objective of the game’s narrative, as presented through the lens of its focal character, Serah, is to undo the world of the game by changing history to reinstate the world of the previous game. So Serah’s narrative isn’t simply a “distraction” from Noel, Caius, and Yeul’s narrative: it actively disqualifies it as relevant, since that narrative constitutes part of the world that Serah is aiming to undo.
Indeed, even when Serah is identified as a Seeress who, like Yeul, can see the future at the cost of her life, this fact, which could potentially unify the two narratives, nevertheless seems to be something that Serah’s narrative tries to overpower and disqualify: she decides to continue trying to change the future despite the fact that it may cost her life. Thus when Serah does die at the end of the game as a cost of her visions, the death doesn’t beautifully tie her story and fate together with Noel’s–rather, it just puts a final emphasis on the bizarre fact that the game you just played forced you to focus on a character who never wanted to be in the world of the game. This problem is deep and inescapable because the narrative of FFXIII-2 virtually always focuses on events through Serah’s perspective. This is important to note because there are multiple ways in which games can intermingle good and bad narratives, and these ways bring about different effects in the overall narrative. It’s useful in this regard to contrast FFXIII-2 with the case of Assassin’s Creed III. Again, regulars of the site will know I’ve been harshly critical of ACIII in the past, mostly in virtue of what I see as a baseless use of an alien-like First Civilization dominating and confusing a narrative about Templars fighting with Assassins; I first detailed this in an article comparing the “aliens” of Assassin’s Creed to the “aliens” of Majora’s Mask. Roughly, my gripe against the game is that the imposition of the First Civilization discounts the value of any agency the player appeared to have within the world of the game, thereby undercutting the entire point of having played the game; this is especially clear when Desmond is killed with little narrative justification or explanation at the end of ACIII. But it’s crucial in understanding ACIII to note that there are two layers to the narrative: we have Desmond working as an assassin in present time, and we also have him accessing and living out the memories of his ancestors in the past via the Animus. When engaged in the Animus, the broader storyline of Desmond, the First Civilization, etc., largely fades away: instead, we are left with a compelling narrative about a Native American ancestor, Ratonhnhaké:ton, taking part in the American Revolution, becoming an assassin, and undertaking a deeply personal quest for justice. The key thing to notice about the above ACIII example is that the layered aspect of the narrative, with the Animus interface serving as a barrier between Desmond’s story and Connor’s story, allows us to effectively consider each narrative independently of the other, while still being able to consider them compositely if we so choose. Despite my qualms about the overall game and series, I quite enjoyed Ratonhnhaké:ton’s story in Assassin’s Creed, and the overall narrative structure allowed me to enjoy it without the overarching Desmond narrative severely impeding it. But this isn’t the case in FFXIII-2, because there is no Animus-like interface between Serah’s and Noel’s narratives: Serah is the player’s primary conduit to the entirety of the game’s world–the world she wants to undo. Even in the momentous final confrontation between Noel and Caius that I described above, we find Serah collapsed a few yards from them on the beach of Valhalla, being sad and generally having a bad time.
We’re trapped in the perspective of someone who doesn’t belong or want to participate in the world in which we as players are participating, and that is the crux of FFXIII-2’s failure.
Conclusion: A Tale of Tragic Irony
If you like irony, then there’s a silver lining for you in all this: even though the overall architecture of FFXIII-2 spoiled what could have been a moving and cerebral story, it does leave us with some tragic, dramatic irony in the way that Serah’s narrative interacts with the narrative of Noel, Caius, and Yeul. Noel, Caius, and Yeul are deeply enmeshed in a universe rife with paradoxical possibilities and timelines, trying to understand the best way to shape their world and each other as they grapple with the complex perspective and sympathies that come with witnessing life, death, and pain across countless generations and potential timelines; yet all of their struggles to understand and make meaning ultimately depend on the whim of a player whose actions are being filtered through the lens of a girl who has no intrinsic stake in the events or native inhabitants of the world in which she finds herself. This almost recalls classic Greek tragedy in how laughably ironic it is: as characters wrestle with their humanity and universe, their fate rests in the hands of someone whose priorities are entirely elsewhere–literally in a different game. If there’s any larger takeaway here, I think it’s this: the worlds and metaphysics of video games are integral to their stories, and the characters of games oftentimes relate to the game’s world in different ways. If the characters have different stakes in the world, then the relations between those stakes, along with the weight given to each of those stakes, must be mindfully architected, or else the whole narrative could be thrown out of balance. And, although we might think it obvious, FFXIII-2 shows us how crucial it is that the principal avatar in a game is actually invested in the world of that game. After all, what incentive does a player have to act as an avatar that does not wish to participate in the game’s world? But, with that, a new chapter is beginning. Here’s hoping that Square learned from its mistakes, and that Final Fantasy XV has a story worth telling. The only way to know for sure is to dive into its world and find out. Or, you could head back here in a few weeks and see what I think of it. Or both. Both is good.
Unclear Control: An Intersection of Player Experience and Game Mechanics
-by Nathan Randall, Featured Author.
I spend a lot of time introducing non-gamers to video games I like. A majority of the time the non-gamer’s reaction is mixed. Amidst moments of excitement and comments about the beauty of the graphics, there are inevitable complaints about lack of clarity in the in-game systems and control scheme of the game. Some of the complaints are completely reasonable, and I agree with them entirely; understanding the implications of the Final Fantasy VIII junction system, or how to jump in Dark Souls without helpful explanation from a friend, seems to me to be a miracle. I quickly forgive these complaints because they are complaints about overly complicated systems. But there is another set of complaints that are much more foundational. These are complaints about the basic systems of a game, which often aren’t that complicated, but are steeped in convention. These complaints have to do with the fact that non-gamers by definition have less experience with the conventions of gaming than gamers do.
For example, some of these complaints might be:
- How am I supposed to move with one control stick and look with the other?
- How was I supposed to know that I’m supposed to go to the right?/Where do I go?
- How do I pause?
But the following example is, I think, particularly interesting, simply because it’s a problem that every game has to deal with in some way. This is the moment when the person I’m showing a game to has just finished watching the introductory cutscene (or lack thereof), and their avatar is just standing there, gazing off into the distance, until I say, “Hey, you can move now, you know?” Usually at this point, the person in charge of the avatar jiggles the joystick and, surprised, says, “Oh whoa, you’re right.” Even experienced gamers often get caught off guard at that moment, since they are by nature naive about any given game when they are starting it for the first time. Normally, conversation about that moment of unclear control ends on that note, and never comes up again (unless the player makes the mistake another time). One should ask, why doesn’t the game just inform the player that they’re in control using a text box? And that is one potential solution to this problem. But it’s easy to trivialize the narrative power of a moment like the one described above. The power of a moment that plays with player expectation so effectively should not be overlooked. The mechanics in a game that impact us most are the ones that play with our expectations. Rather than leave the unclear control at the start of a game as a nuisance in the gaming experience, why not use it to tell a game’s story better? There are a few games that have caught on to this narrative power. For example, in Batman: Arkham Knight, when Batman tells Alfred that he’s going to even the odds, and a text prompt appears that says “L1- Even The Odds”, the player shudders with anticipation of what powerful new ability is about to be introduced (or is perhaps aware that it will undoubtedly be the Batmobile). In a similar way, some games have played with the accidental mechanical phenomenon of unclear control in order to create experiences that range from satirical comedy to heartbreaking loss. So what exactly is it that defines “unclear control”? To understand, I’ll propose a framework that will capture the phenomenon of unclear control so we can analyze it. The framework consists of two mechanical elements of games. We’ll then look at the phenomenology associated with the second mechanical element. I’ve created this framework with the intent of explaining unclear control, but I believe it could explain other design decisions as well. The first mechanical element of the framework is the player’s control state. The control state is the set of ways in which the player can impact the game at any one time t (for this and following explanations I will label a moment in time as variable “t”). Control state is fluid, so it can change over time, based on in-game mechanisms and controls. At time t the control state could be one thing, but it could change to something else at time t + 1 second. But at any time t the player has a definable control state. The second mechanical element is the actual game state. The actual game state is the collection of all aspects of the current run of the game, including the graphical systems, the music that’s playing, as well as many more internal calculations that vary depending on the game.
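As a minimal sketch of how this framework might be modeled, here is an illustration in Python. It also anticipates the “apparent game state” idea defined just below, since that is where the framework pays off. All class and function names here are my own and purely illustrative; they don’t come from any game’s actual code.

```python
from dataclasses import dataclass, field

# Hypothetical model of the framework: a control state is the set of inputs
# that can affect the game at time t.
@dataclass(frozen=True)
class ControlState:
    available_inputs: frozenset  # e.g. {"move", "advance_dialogue"}

# An actual game state bundles everything about the current run, including
# exactly one control state plus internals the player can't perceive.
@dataclass
class ActualGameState:
    control: ControlState
    audiovisuals: str  # what is presented to the player's senses
    internals: dict = field(default_factory=dict)  # hidden calculations

# The apparent game state is just the perceptible projection of an actual
# game state; distinct actual states can project to the same apparent state.
def apparent_state(state: ActualGameState) -> str:
    return state.audiovisuals

# Unclear control: two actual states share one apparent state but carry
# different control states, so the player can't tell how much control she has.
cutscene = ActualGameState(ControlState(frozenset()), "avatar standing idle")
gameplay = ActualGameState(ControlState(frozenset({"move"})), "avatar standing idle")
print(apparent_state(cutscene) == apparent_state(gameplay))  # True
print(cutscene.control == gameplay.control)                  # False
```

On this model, diagnosing unclear control amounts to asking whether a single apparent state could have arisen from more than one control state, which is exactly the ambiguity described in the following paragraphs.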
Each actual game state contains exactly one control state, and there is only one actual game state at time t. But many aspects of the actual game state do not present themselves to the player’s senses, which leads to the creation of the apparent game state. Many times in a game, the player cannot distinguish between several different game states. Thus, these game states are apparently the same. These actual game states will all be grouped together into one apparent game state. Thinking about it from the other direction, one apparent game state can arise from a variety of actual game states, only one of which is active at time t. A key aspect of this system is that a particular apparent game state can arise out of several different possible control states because it can arise out of several possible actual game states. The control state in any one particular apparent game state can be ambiguous. I’ll note the key relationships between these ideas below:
- Each actual game state contains exactly one control state.
- The apparent game state can arise out of any of a set of possible actual game states.
- The apparent game state can arise out of several different possible control states.
With this framework we can now define unclear control. Although the classic example of unclear control is the moment at the end of a cutscene when it is unclear if the game engine is once again taking player input to control the avatar, the mechanical phenomenon is actually more general than that (I describe unclear control as a mechanical phenomenon because it is a phenomenon borne out of the mechanical systems in a game). Unclear control occurs whenever the player is experiencing an apparent game state that is compatible with several different control states. In this way the player has no way to tell how much control they have until they try to give an input. Unclear control is born out of an inherently frustrating aspect of video games: the question of how to communicate to the player that they are in control. And historically, games have had differing ways of dealing with this problem, including tutorials and text-prompted hints. However, many game companies did not realize that this was a problem that they had to deal with in any particular way, and so, when the initial dialogue ends at the start of such a game, the avatar is left just standing there until the player figures out that they are in control. Thus, in its first appearances, unclear control is a frustrating, absent-minded, and accidentally created mechanical phenomenon. But some game creators recognized that unclear control could be used to create narrative power, and so kept creating systems that utilized unclear control, even though it is frustrating for players initially, so that they could tell stories in a more intriguing way in the later portions of the game. On that note, let’s turn to a few examples. Final Fantasy VI makes use of unclear control frequently throughout the game. Dialogue sections (parts of the game in which the only player input possible is to click a button to make the next dialogue box appear) often have no visible or auditory transition back into player control once the dialogue box disappears, so when they end, the player’s avatar is left standing there until the player decides to move. But sometimes there are dialogue sequences in which the player’s character just stands in place, without making a sound, with no dialogue box present.
Thus, the two most common ways to know whether the dialogue section has concluded are to try to move your avatar, or just to wait for so long that the dialogue section could not reasonably still be going. I doubt that many players actually do the latter, so I will assume that in general the way of checking to see if a dialogue section is over is to press a movement button once the dialogue box has disappeared. This brings us to an example of how unclear control can be used to forward a game’s narrative. At one point in Final Fantasy VI, Locke, one of the game’s protagonists, gets put on a mission with a previous romantic flame named Celes, whom he thought had been killed earlier in the game. They bump into each other accidentally late at night outside of the inn at which they are staying. Locke attempts to apologize to her for an earlier transgression, but she won’t respond. After a moment she runs away from Locke, off-screen. Locke is left silent, staring off in the direction that she ran. At this point I impulsively tried to run after her, thinking that the dialogue section had concluded and my next goal was to find her and tell her that I didn’t hate her (chasing after characters is a recurring objective in the game). My goal at that point implicitly became to chase after Celes. But it turned out that the game hadn’t yet handed control back to me. The game was actually in a different control state. Instead of giving control back to me, the game slowly faded into black. But my experience should not be considered unique, because the gameplay itself is what gave me the goal to chase after her. Through unclear control, the game gives the player a goal—to chase after Celes—that is not actually achievable based on the metaphysics of the game world. I could not act on my desire to pursue Celes, just as Locke couldn’t. The unclear control in this example potentially creates a palpable feeling in the player of the difference between what is wanted and what is done. Regardless of the amount of player emotional investment, the apparent game state creates the illusion that the in-game goal is to chase after Celes, because the dialogue section appears to have concluded and the player is given an indication of where to go to follow her. Thus in some sense the player feels they can and should chase after her. But the volition cannot turn into action. When it comes down to it, Locke doesn’t chase after her, no matter how much he might want to. And then comes the feeling of failure—the feeling of not acting on your desires and also not helping a friend. Through unclear control, the game can express to the player a feeling of knowing what you want but being unable to incite yourself to action. I challenge any medium other than games to express this feeling as eloquently as Final Fantasy VI does. My next example, from Undertale (by the magnificent Toby Fox), is not nearly as emotionally charged: the goal is satire on JRPGs (Japanese role-playing games). So in order to understand it, I’ll need to describe the trope that is being made fun of. A great example of the trope in action comes from Final Fantasy IX. Zidane, the character that the player controls through a majority of the game, walks into a room mostly filled with water, with a bridge through it. Once the player walks into the room, the player loses control of Zidane, and then a dialogue section ensues. There is a moment of pause before a serpent slides out from a hole in the wall and falls into the water.
There is another pause before the serpent attacks and a battle starts. Thus the formula is born. The player is in control, walking along, and a certain location will cause the control to be taken from the player. A monster appears and does something. Then, after a pregnant pause, the monster attacks and a battle begins. One should note that in order to use this particular series of events, generally the game must be discontinuous between the battle world and overworld, featuring a transition of some sort between the two worlds (most older JRPGs work this way). An important detail of this particular trope is that it teaches the player something about their control state during the events leading up to the battle. When the avatar stops, the player knows that they are no longer in the control state that allows them to move their avatar. But after the monster appears, a naive player may try moving again, to see if they have regained control. These games have now standardized that after a monster appears and control is taken from the player, control will not be given back to them until after the ensuing battle. This is not a necessary truth, just a standardized one. Undertale features a moment very similar to the one in Final Fantasy IX described above. One need only take a look at the two pictures below to see the similarity in the circumstances. In both cases the player is walking into a room filled with water, and there is a bridge across it. In a similar fashion, the characters both stop on the bridge only to be interrupted by a monster. Now, Undertale is incredibly ambiguous between two of its control states in particular: walking around the world, and dialogue. The apparent game state for the two control states is the same whenever the dialogue box is absent, especially during transitions between the control states. Dialogue sections almost always start with only an abrupt change in control state (taking control away from the player), and they almost always end by returning control to the player. Often very little indication is given that a transition has been made. So when the player is stopped on the bridge, the player immediately knows that they’ve entered a dialogue section. The monster, who we find out is named OnionSan, shows up and talks for a little while, immediately activating the conditioning any regular JRPG player has experienced. After OnionSan is done talking, all of these non-naive players are ready and waiting through the pregnant pause for the battle to start. But, little do they know, the game has actually changed the control state for the player: they are back in control. When finally they do decide to try to move, they are rewarded with watching the avatar awkwardly shuffle across the screen. With the use of an unclear control state, Undertale has fashioned a moment that is awkward both in dialogue and in the actions of the player. And since the moment repeats a few times before the player makes it to the next room (without ever fighting OnionSan), the awkwardness is effectively prolonged, leading to wonderful participatory satire. Creating ambiguity in the amount of control the player has at any one moment can be an effective means on many occasions of tying humor or story into the very mechanics of a game—a key part of the player experience. Final Fantasy VI used unclear control to give insight into Locke’s state of mind through the implicit creation of in-game goals—to experience firsthand how multiple options appeared possible, but only one choice was made.
Undertale used the unclear control to satirically challenge a common trope in the JRPG genre. And I’m sure that with more searching, other brilliant examples of narratively powerful unclear control could easily pop up. But what’s most important, I think, is that unclear control makes use of what is often a frustrating or embarrassing experience for a player (not being sure whether or not they have control of the avatar) and turns it into a tool to expand the player experience. What other frustrating aspects of games can we hijack in a similar fashion? Games don’t have to be frustrating, even for new players. If an element of the design of a game is frustrating, it should be removed (if it can be). And if it is not removed then it should be used as part of the storytelling experience. Rather than stick like glue to our common mechanical conventions, game designers should make use of their mechanics to expand their story, or maybe at least tell a joke. Let’s make use of how the mechanics of our games make players feel to enhance the experience. Let’s shoot for the standards set by Final Fantasy VI and Undertale, and use all the tools we have available to us to tell our stories.
Nathan Randall is a featured author at With a Terrible Fate. Check out his bio to learn more.
“Game mechanics are constructs of rules and methods designed for interaction with the game state, thus providing gameplay” (https://en.wikipedia.org/wiki/Game_mechanics).
If we define a phenomenon as the object of a person’s perception, unclear control would be a mechanical phenomenon because it’s something that a person notices that is based on the mechanics of the game. I describe it as accidental because, to the best of my knowledge, no one intended to create unclear control in the design of their games in its first appearances.
It may also be interesting to consider the situation in which players have no control. Is WHETHER a player has control relevantly different from HOW MUCH control a player has in any particular way? Are there special characteristics for the “null set” within this model? I’m not entirely sure what the answers to these questions are. But if we find the answers, they may help fill out the model in a more complete way. I’m eager to hear any thoughts/examples. If I find an intriguing idea I’ll likely write about it in the future.
Teaser Metaphysics: Storytelling in Xenoblade X.
A professor of mine once presented a lecture as “an expression of doubt and a plea for help.” He wanted very much to believe that a particular argument we were discussing was true, and yet he saw too many problems with the argument to believe in it. Thus, he was expressing doubt in the argument, while also asking his students to help him find a way to make that argument work better. I want to frame this commentary on Xenoblade Chronicles X in the same way that my professor framed that lecture: an expression of doubt in the game, and a plea for readers to help me see something in it. Regulars of With a Terrible Fate know that I am a vocal proponent of the philosophical richness of Xenoblade Chronicles; I eagerly dove into Xenoblade Chronicles X (I’m just going to call it “Xeno X” from here on out) expecting that same sort of philosophical richness. I was tremendously disappointed, and quite frankly felt robbed–that’s how much I was let down when I compared Xeno X with its most immediate predecessor.
Although this piece is an explanation of why I felt so let down, I don’t want to feel robbed by the game; so, please, if there is something I am missing or that I have overlooked, I am eager for someone to let me know. With preliminaries out of the way, this article, as I said, is in principle a very negative review of Xeno X. More specifically, I argue that Xeno X promises to confront deep, interesting, metaphysical questions especially salient in video games, but ultimately only confronts broad, generic philosophical questions that can be addressed virtually anywhere. I first discuss the promised philosophical themes: the ways in which the game hints at certain philosophical puzzles, encourages (and indeed requires) the player to pursue missions that seem likely to shed light on those puzzles, but never actually follows through on these ideas. Next, I discuss the philosophical themes that are present in the game, and argue that, although certainly interesting in other contexts, the overall architecture of the game precludes these themes from being salient. Finally, I consider the fact that Xeno X is obviously set up for a sequel–I argue that, far from being an excuse for the game’s unfulfilled promises, this particular sequel dynamic is symptomatic of a severe problem in popular storytelling today. (As always, spoilers abound–for this game, and for Xenoblade Chronicles.)
I. Teaser Metaphysics
The best way I’ve found to describe the universe of Xeno X on its most fundamental level is as a “teaser metaphysics.” I mean to say that every deep metaphysical concern that’s apparent in the game’s universe is of obvious importance throughout the game, and yet we never actually discover the substance of those concerns. Elma says multiple times during the game that “there’s something about this planet.” In my estimation, this is a perfect tagline for the game: it’s always clear that something strange and interesting is happening on the alien planet of Mira where mankind has relocated post-alien-annihilation-of-Earth, but it’s never clear precisely what that “something” is. I’m going to offer a list of the three (and only three) moments I felt were interesting in this way, which the game never followed up on; then, I’ll discuss why I think the game’s architecture forced the focus onto these moments in a self-destructive way.
- What are we talking about? (Ch. 5)
When the player’s character, together with Elma, Lin, and the irredeemable Tatsu, discovers a group of imperiled Ma-non in Chapter 5–alien races abound on the world of Mira–Elma makes an observation about how strange it is that she and the other humans can perfectly understand all of the aliens they’ve encountered thus far. Elma: “Tatsu, the Ganglion, and now these Ma-non… Don’t you find it a little odd that we can understand these alien languages?” Lin: “Huh…good point.” […] Elma: “Tatsu, did you study our language?” Tatsu: “Friends’ language?” Elma: “What language are we speaking right now?” Tatsu: “What language? Nopon, of course! Friends’ Nopon very good, by the way.” Elma: “See? Xenoforms have different anatomy, physiology–different vocal setups in general. It seems likely they would struggle with our pronunciations. And yet, here we are, conversing.” Lin: “But if they can’t even produce the sounds… this shouldn’t be possible.” Elma: “No, it shouldn’t be. Unless our words aren’t being perceived as sounds at all. Maybe our intent is getting across some other way… But how? Could it be something about this planet?” Lin: “Heh.
Someone sounds pretty intrigued, huh.” Elma: “Well, what if it IS some kind of new phenomenon? Aren’t you curious to learn more?” Lin: “All right, now you’re starting to sound just like L.” Tatsu: “Okay, already! Friends talk less, help Ma-non more!” And, with that, the scene devolves into one of the story’s many jokes about Lin cooking and eating Tatsu. Just as we’re broaching metaphysically salient territory, the game drags us back into tired jokes about eating its most frustrating character. Why is this dialogue so interesting? Well, besides the obviously interesting idea that different species are somehow able to perfectly understand one another as though they were all speaking the same language, I initially thought this dialogue was suggesting that the game was philosophically aware that it was a game. What I mean by that is this: I’ve argued several times that one of the most philosophically interesting things about Xenoblade Chronicles is that you actually can’t make sense of its story unless you understand the player to be a character within the game’s narrative. In this way, the philosophical content of the game depends on its status as a video game, which I think makes it uniquely interesting. So I initially thought that, like Xenoblade Chronicles had done previously, Xeno X had created interesting philosophical content based on its status as a video game: perhaps everyone could understand one another because their intents were being represented directly to the player. This would make sense since the entire game is literally conveyed to the player, and the player is at various times able to hear Elma’s thoughts (for example). It would also be a way of explaining Elma’s cryptic comment here that speaker intent is being expressed without relying on the phonetics of language: perhaps the idea might be that the entire world, in virtue of being a video game, is simply encoded information that is then represented to the player in a comprehensible manner. The above analysis is speculative because, so far as I can tell, the game never follows up on this discussion. This is teaser metaphysics at its finest: as though mocking the player directly, Lin responds to Elma’s curiosity by saying, “Heh. Someone sounds pretty intrigued, huh.” But perhaps I’m being unfair–perhaps other philosophically salient material in the game provides us with the analytic resources to make sense of this language puzzle. Unfortunately, I don’t think that’s the case: everywhere I turn, the game just provides more teaser metaphysics.
- The unstoppable success of an avatar. (Ch. 8)
This case is a little less straightforward than the language puzzle we just discussed, but I hope to convince you that it’s just as much a case of teaser metaphysics. In Chapter 8 of the game, in which alien forces attack the human city of New L.A., two aliens–Ryyz and Dagahn–confront Elma, Lin, and the player’s character within the city. As Ryyz approaches, Lin trembles in fear. Ryyz: “You’re right to be afraid, little girl. [To Dagahn:] Let’s kill her first.” Elma: “Lin, stay calm. Don’t let them into your head. We’ve faced worse than this before–and we’ve won, every single time. Don’t forget that.” Lin: “I know…” I want to suggest that, because Xenoblade X is a video game, Elma’s words of encouragement to Lin are much more interesting than they appear at first.
Here’s an obvious fact about most video games: if the player of the game makes a mistake, the character(s) she controls can end up dying, and then the player has to repeat the narrative from a certain, earlier point, until she succeeds in progressing without dying. Certainly not all games work this way, but the majority does, and Xeno X is in that majority. Moreover, the exchange I quoted above comes two-thirds of the way through the main storyline of Xeno X–so, while it’s certainly possible that an adept player could have reached this point without her party ever dying, it’s very likely that her party has died at least once, requiring her to “try again” in the very standard way that video games expect of their players. But now we have an interesting puzzle: there’s a sense in which what Elma says to Lin is just not true, because, if the player has failed at some earlier point in the narrative, then the party hasn’t won “every single time.” There’s also a sense in which Elma is right: the player, after all, has to succeed once in every story mission in order to make it to the current point in the narrative, regardless of how many times she might have failed along the way. So, this seemingly throwaway line actually suggests that something very interesting is going on in the world of this game: somehow, the game only “counts” the player’s successes as meaningful, disallowing the player’s failures as constitutive of the game’s narrative. This could be an interesting commentary on the discrepancy between a player’s experiences on the one hand, including both failures and successes, and the experiences of the game’s characters on the other hand. Indeed, the mere fact that Elma says something so unusual and applicable to the nature of video games suggests that some sort of special relationship between the player and the game’s world is at work. But again, I must speculate because the game never follows up on this idea. There is hope that it might be explored–after all, the fact that all the humans on Mira live in replaceable, robotic, “mimeosome” bodies points to this same theme of the game’s world having video-game-esque metaphysical dynamics–but the idea is never fully articulated. Nor does the game offer us the resources to meaningfully theorize about this dimension of its world. I held out hope until the very end of the game, and a single line led me to believe that these metaphysical dimensions of the world might be explored after all; but, as we shall see, that line ultimately turned out to be another red herring.
- The one being who wasn’t on the computer. (Ch. 12)
After the final confrontation in the Lifehold Core against Luxaar and Lao, Elma pauses to reveal something unexpected to the rest of the party. Elma: “The truth is, exactly one mim in New LA…actually is being controlled remotely from a real body held in stasis here.” Lin: “Wait, someone isn’t stored in the database with the rest of us?” Elma: “That’s right. This was a special case.” Whereas everyone else who fled Earth and arrived on Mira had their consciousness stored digitally in a computer database, controlling mims (i.e., robotic mimeosome bodies) from that database, there is one mim controlled by a real person. At this juncture, I was prepared to be very impressed with the game. It seemed to me as if the game were about to answer all of my questions.
What better way to explicate the special metaphysics of a video-game world than by having a character within the game point out that the player’s character is being controlled by a “real” person–i.e. by the player? If Elma had said that a real person was remotely controlling the player’s character, various otherwise inexplicable or underwhelming aspects of the game might have started to make much more sense. For instance: the character-creation aspect of the game, I submit, feels very contrived and forced. The player initially appears to have a wide variety of choice in being able to customize nearly every aspect of her character–appearance, voice, catchphrases, etc. But it quickly becomes clear that this aspect of choice is superficial: the player’s character never has an actual voice in cutscenes, and has a limited number of oft-repeated catchphrases when engaged in combat. The only way the player’s character can have input in cutscenes is by the player choosing, at various junctures, between several lines of text for her character to “say” (though, again, these lines aren’t vocalized). And this choice element is superficial: virtually no text choices the player makes can seriously influence the plot of the game. The game’s narrative is linear, and, as a result, the player will be “pushed” towards a single outcome of events regardless of the “choice” she makes. When my party discovered Tatsu, I tried to use every dialogue choice available to me to leave him behind and not let him join the party (as I mentioned, he seems, consistently, to be more of a nuisance than he’s worth–and not in the trope of a character you “love to hate”); no matter which options I chose, he joined the party all the same. So the choices the game appears to offer the player don’t really matter, whereas at obvious choice-points in the narrative, the player has no power. For instance: after Lao betrays the party and the party defeats him, Elma wants to kill Lao as punishment, and Lin tries to stop her. This is an obvious choice point where, if choice really matters, the player should be able to choose a side for her character to take: side with Elma, or side with Lin. But this doesn’t happen: the player’s character automatically sides with Lin, forcing Elma to stand down. And of course, this must be the case–since Lao ultimately reappears in the final battle of the game, and the narrative is linear, it couldn’t be an option for Lao to die here. But this makes the game smack of fake choices: the player, presented with an illusion of choice, ultimately lacks any sort of real input over a character that everyone notices “doesn’t say much.” However, if Elma had said that a real person was remotely controlling the player’s character, I would have forgiven this design choice. The notion of a custom-designed character works extremely well if it’s true within the conceit of the narrative that the character was created as a proxy for the real player. We might then also have more supporting evidence for the theory I suggested about how language works within the game: perhaps the player’s character never needs to literally speak because his intentions are conveyed representationally through the medium of the game, along with everyone else’s. And perhaps this could even help explain the mechanics of success and failure that I described in the last section: perhaps the player’s knowledge of her failures, imputed to her character, is part of the narrative explanation of how the party was able to progress so far successfully.
To say as much would be to marry the form of the narrative as a video game with the content of the narrative in a novel, metaphysically and epistemically interesting way. But of course, Elma doesn’t say that a real person was remotely controlling the player’s character. Instead, she reveals that she is actually an alien, whose real body has been stored in the Lifehold Core, controlling the mim who has followed the player’s character throughout the whole game. While certainly a plot twist, it offers no help in making sense of the game’s teaser metaphysics, nor of the ontological status of the player’s character. Thus the game leaves us with many questions, the promise of many answers, yet no actual answers.

II. Backgrounded Philosophical Issues

The reader might think me unfair to Xeno X. After all, broadly speaking, I’m comparing it to Xenoblade Chronicles, and maybe it’s simply not trying to be the same kind of game as Xenoblade Chronicles. Well, the reader may be partly right: Xeno X does try to explore a number of issues that aren’t deeply addressed in Xenoblade Chronicles, and it’s a different game in many other ways as well (Skells, mission-based storylines, etc.). But I contend that, even taking this into account, Xeno X fails as a cohesive narrative because its game design suggests to the player that the kind of metaphysical issues I described in Part I will be central to the game; and because the game is designed in this way, it’s hard to deeply explore any of the other philosophical issues the game raises. Some of the putative philosophical issues in the game include: enslavement (the Ganglion race, representing the game’s main antagonists, has enslaved the Prone race), xenophobia (the various alien species are called “xenoforms” and much of the game focuses on dealing with inter-species difference), and the value of one’s body (humans are initially told that their real bodies were preserved in the Lifehold Core and that they are controlling their mims remotely from there; ultimately it is revealed that their real bodies were left on Earth and destroyed, and all that remains are digital representations of their consciousness, contained in a Lifehold database). All of these themes are certainly interesting on their own terms, and great stories have considered all of them in the past. So the problem with Xeno X isn’t that it lacks any interesting themes: the problem is that it directs the player’s attention away from these themes and towards its teaser metaphysics, leading to ultimate disappointment in the game’s philosophical salience.

The story in Xeno X is broken up into missions, each with certain “progress requirements” that the player must meet before she can begin the mission. Many of these requirements are “survey” requirements: you have to go out into Mira and survey a certain amount of land in a particular region before you can take on the mission in question. This means that you can’t go through the entire story of Xeno X continuously, because the game effectively requires you to stop in between missions and explore the world. Although I do think that games shouldn’t require players to explore the game’s world extensively in order to complete the story (meaningful exploration in games ought to be left to the discretion of the player, or else it ceases to really be exploration and instead becomes a chore), that isn’t the problem I’m pointing out in Xeno X.
The problem is far deeper than that: the developers are effectively using the game’s world to tell a story that forces the player’s attention toward the game’s teaser metaphysics. It’s no secret that a video game can use its very world as part of its narrative, in order to tell unique and interesting stories. Xenoblade Chronicles, again, is an excellent example of this: the entire world of the game is two monoliths, which, without getting into details, represent both the central conflict of the game and the themes on which the game is centered. The more I’ve looked, the more it seems to me that many of the most philosophically interesting games use their worlds as storytelling elements in this sort of way. Xeno X, on the other hand, is a clear example of how using a game’s world as part of its storytelling can handicap the game’s central themes and messages.

From as early on as Chapter 5, when the dialogue about the language puzzle happens, it’s clear that Mira works differently than the player and various characters were originally led to believe. Humans and Ganglion alike mysteriously ended up there with little-to-no explanation; everyone can understand one another without sharing the same language; and so on. As Elma suggests in Chapter 5, and again at the end of the game, the overriding theme of this strangeness is that “there’s something about this planet” that explains all of these bizarre phenomena. And there’s a very easy inference we can draw about a game that claims “there’s something about this planet” and then requires the player to explore that planet in order to progress through its story: by exploring the planet, the player will discover the mysterious aspect of the planet that explains its special dynamics. That is how the game’s very world, in conjunction with the requirement that the player explore that world, forces the player to focus on the game’s teaser metaphysics. And when it becomes evident at the end of the game that all the many hours of exploration did not shed any light on the true nature of the game’s world, the player, I contend, feels and ought to feel cheated: the game has effectively reneged on its promise to explain itself and its world. In the absence of any such explanation, the required exploration feels contrived within the context of the game’s narrative; indeed, the best explanation I’ve found for all the required exploration built into the game’s story is that the developers wanted to ensure that they could show off the entirety of their world to players. But the developer saying “look at this world I built” should not be an explanation for the most foundational elements of a game’s narrative dynamics. The result is that the game focuses on the philosophical issues on which it never follows through, while the philosophical issues that it does explore are left in the background. Indeed, discussions of race, enslavement, and the status of the body all felt distracting to me because I was always waiting for the true nature of the world to be revealed–and it never was.

III. The Problem with a Promised Sequel

Maybe I’m being unfair to Xeno X because, judging by its ending, the game is quite obviously set up for a sequel. Elma discovers at the end that the database supposedly holding everyone’s consciousness is and has been in ruins (this is when she says again that “it’s something about this planet”); and an apparently regenerated Lao, after being mutated and destroyed by the party, washes up on a beach.
The game leaves so many questions unanswered, you might argue, because it intends to resolve them in a sequel (or DLC, or what have you). So perhaps we should excuse the game’s apparent incompleteness and focus on what it does, as opposed to what it promises that its sequel will do. I think that this sort of reasoning is a mistake. Speaking candidly, it seems to be increasingly common nowadays for stories to be predicated upon sequels. The ending of Final Fantasy XIII-2 was nothing more than a cliffhanger leading into Lightning Returns; books-remade-as-movies are split from a single book into multiple movies (e.g., Harry Potter, The Hunger Games). This strikes me as a disingenuous way of getting consumers to spend more money just to get the second half of a story in which they’ve already invested. Worse, though, this kind of storytelling that builds a sequel into the first story simply doesn’t work, especially in video games–and there are deep theoretical reasons why it doesn’t work. I argued precisely this in my work about why Final Fantasy VII shouldn’t be remade as multiple games. I’m going to quote, rather lengthily, the relevant argument, since it also applies to the case of Xeno X. The argument starts with two basic claims about how video game narrative works.

“Claim 1: The player of a video game is able to substantially, causally influence the events in that game’s universe, in virtue of her actions through the proxy of her avatar(s).

Claim 2: The causal influence of a player on a video game’s universe is essential to the narrative of that game.

(Note: when I say ‘video game’, I’m not talking about all video games, strictly speaking. I’m primarily concerned with analyzing story-based, single-player games.)

Intuitive though these claims may be, they are substantive claims nonetheless. I don’t expect to offer conclusive proofs of them as “principles of game narrative” within the scope of this paper, but I do hope to convince readers that they are two very plausible assumptions to make about a very broad set of video games. […] Claim 1 just says that the player of a video game is able to shape its world in a significant way. At first glance, this claim might seem obvious—“This is a trivial fact,” one might say, “because the player literally controls someone in the game’s world (the avatar), and the avatar’s actions, derived from the player’s control, clearly influence the events of a game’s universe.” But this response is too quick for two reasons. First, it’s not readily apparent that people in a universe really do have causal power over the universe—it could just be that the universe as a whole evolves over time, with its various parts only appearing to interact in a series of causes and effects. That’s very different from a universe in which people can genuinely modify the events of the universe through their own actions. Second, even if we grant that game avatars do have causal power within their universe, it’s not obvious that this power is derived from the player. Even though the player is controlling the avatar, you might think that, within the context of the game’s narrative, the avatar’s actions can only be properly understood as choices that the avatar chose to make.
It would be unwarranted, unnecessary, and bizarre to make sense of the plot of a Mario game by saying something like “Bowser kidnapped Peach, and so then the player took control of Mario in order to make Mario save Peach.” Rather, we just say, “Bowser kidnapped Peach, and so then Mario saved Peach.” Claim 1 suggests that we really have to analyze the story of a game partly in terms of the player’s causal influence, which seems like an odd thing to do. But a closer examination suggests that Claim 1 survives these two criticisms intact. We can get around the first criticism by considering replays of a single video game: when we play through the same video game more than once and have the avatar make different choices, the events of the game evolve differently. This doesn’t require that the game have choice-determined endings, or anything like that: the mere fact that we can move an avatar either left, or right, or not at all, in the same moment of the game’s narrative during different playthroughs of the game, suggests that avatars really are agents within their universes—their actions aren’t wholly determined by the universe external to them. What about the worry that the avatar’s causal power is enough, without invoking any implausible causal power on the part of the player? Though this point may be more controversial, I think we have fairly clear-cut cases (and less clear-cut cases) suggesting that we do have to analyze the stories of games partly in terms of player agency if we are to adequately explain and understand those stories. In many games, the player will be provided with information that her avatar could not reasonably know—perhaps something is revealed through a cutscene where the avatar is absent. This knowledge may well lead the player to make decisions in the game and direct her avatar in ways that could not be adequately explained by appealing to what the avatar believed and desired—instead, we need to appeal to what the player believed about the world of the game, and how she acted on those beliefs through the avatar. We see this phenomenon even more clearly in replays of games: a player may well make different choices during her second playthrough of a game based on certain facts that were only revealed to her (and her avatar) very late in the narrative of her first playthrough—and so it would be even less plausible to account for these choices purely using the avatar’s mental life. We need a concept of the player acting as a causal agent through the avatar. So I think that Claim 1 remains plausible. The player, acting through her avatar, can causally influence the events of a game’s universe. This influence is substantial in the sense that the player’s actions, by influencing the game’s universe, influence the whole causal chain of the universe thereafter—the actions aren’t somehow “negated” by some counterbalancing force. I think that we typically think of causal influence in this way (i.e. a single action has ripple effects through time and space), and so this is a fairly intuitive view of game narratives. What about Claim 2? This claim says that the causal impact a player has on the world of a game is an essential part of that game’s narrative—without that same impact, the game wouldn’t have the same narrative. So it isn’t just enough for a player to be able to make a choice in a game’s universe that has nothing to do with the story: in some sense, the game’s story must be inextricable from the player’s choices. But this seems to be patently true.
Witness first: in many games […] the events of a game’s narrative will not transpire at all unless the player chooses to engage the game and exercise her causal force. More to the point, the player’s avatar often constitutes the point-of-view through which the narrative is conveyed, and the avatar’s actions are crucial determinants of the events of that narrative. As a result, the narratives of games do seem deeply dependent on player choice. Even in cases where game narratives seem to suggest that the game’s universe is ultimately indifferent to the actions of the player—e.g., Bloodborne—the narrative functions on this level as a denial of the impact that the player’s and avatar’s actions had. This narrative function is still irreducibly a claim about the player’s causal impact, and so it does not threaten Claim 2. The claim, when considered, seems both intuitive and sound. If we accept these two claims—and I think that we should—then we are faced with an interesting consequence. The consequent claim is this: if a player’s causal impact extends over the entirety of a game’s universe, and that causal impact is essential to the narrative of a game, then it seems that the entirety of a game’s universe, insofar as a player causally impacts it, is essential to that game’s narrative. Another way to put our newfound consequence is this: it’s not enough for a game’s narrative to essentially involve the choices of the player in a local, finite sense. Rather, game narratives of this sort involve the impact of a player’s choices on the game’s whole universe, however narrow or broad that universe may be specified. I think that this, too, tracks with our intuitions about how game narratives often work: oftentimes, a primary element of a game’s story is demonstrating how the player’s choices have impacted the game’s world. Nor is this a feature only of heavily “choice-based” games: perfectly linear games nonetheless reflect the impact that a player’s actions have on the game’s world, even though the player didn’t have much of a choice as to how to act. (Think of Shadow of the Colossus: linear though it may be, it’s hard to deny that the game’s narrative is heavily focused on the ways in which the player’s actions have permanently altered the game’s world.)”

If the argument I presented is right–and I think it is–then, just based on the storytelling dynamics of video games, you can’t present a video game narrative that “points beyond itself” to reference events in a future sequel. The totality of the game’s world is causally related to the actions of the player: if the nature of the player’s influence is rendered mysterious in the game’s narrative, promised to be explained in a sequel, then that game simply doesn’t work. Its narrative, metaphysics, world structure, and so forth, end up depending on a world alien to both the game itself and the purview of the player: and thus the game is rendered deeply, thoroughly incomplete. This, I submit, is precisely what we see in Xeno X.

As I said at the outset, I would very much like to be wrong about this argument: I had very high expectations for Xeno X, and was saddened to finish it with such disappointment. The world that Monolith Soft built is expansive and intricate, but that alone doesn’t make for a compelling story. Indeed, in this case, by pointing to the game’s teaser metaphysics and unfulfilled narrative commitments, I think the world actually damages the story. At this point, I truly don’t know whether I would invest in the inevitable sequel.
To my knowledge, she says it twice: once during the brief scene where the party discusses the bizarre language dynamics of Mira, and again when she discovers the annihilated Lifehold computer in the game’s post-credit scene.

Bayonetta: Female Sexuality and Agency in Video Games

–by Laila Carter, Featured Author.

Equipped with her four guns and always waging war against the heavenly army, the Umbra Witch Bayonetta has become one of the most recognizable female characters in gaming. Some people have (understandable) qualms with Bayonetta as a character: they claim that her over-sexualization – making someone excessively sexual, whether in looks or actions – only attracts people to look at her body for pleasure, and that viewers do not respect her as a woman of agency. However, judging by the many reactions people had when she was announced as the newest character in the last Super Smash Brothers game, I do not think that this is true. People respect Bayonetta and her abilities despite her over-the-top sexuality, or, as I argue, because of it. She is one of the few women in video games who is overtly sexy yet owns her sexiness, incorporating it into her personality. She is not simply some side-girl with no purpose other than to show off her huge breasts. She is the main star of her game and kicks major butt with witch power and sexual grace, showing off a butt-shot here and there simply because she feels like it. Bayonetta has agency over her own over-sexuality: she has the ability to create and change the way she presents herself, and she does so by owning her image and enjoying every minute of it.

Let me be clear about the goal of this article: I am not discussing whether or not Bayonetta is a feminist icon in gaming. That discussion is an ongoing one that will probably never be fully settled, but it has no place here. I am instead discussing how Bayonetta uses her sexuality in a different way than most women in video games.

Bayonetta, The Male Gaze, and Agency

When watching film or animation, certain topics tend to appear when analyzing how and why a scene is shot. The most relevant film term here is “gaze”; its definition is to “look at steadily and intently, in fixed attention.” In film studies, “the male gaze” specifically refers to when the camera positions itself so as to objectify the woman (or women) on screen. The audience does not view the woman as a person, but rather as an object, thanks to camera angles and movement, character attire, or scene setting (for a simple example, a woman lying in bed in a provocative manner). You can use these terms when talking about any visual medium, like comics, art, television, and video games. The types of art that use the “male gaze” depend on spectators’ scopophilia: deriving pleasure from looking at a woman out of sexual interest. Scopophilia is what feminist film critics argue heavily against, because the “male gaze” reduces women on screen to objects rather than characters. By “object,” I mean a thing that one can own and handle as one’s own, and by “character” I mean a fictional entity representing an intelligent and sentient being that has its own independent existence. Critics and gamers have argued against Bayonetta’s entire character because of the “male gaze” the game’s cinematics produce; they claim that she invites spectators to look at her for her over-sexual body and not for her actual character.
While I agree that the “male gaze” is a problem in film and animation, I do not think it can fully apply to Bayonetta’s character. To demonstrate, I will compare Bayonetta to the comic heroine Power Girl of the DC Universe, and to another controversial video game heroine, Tracer from Overwatch. Through Power Girl and Tracer, I will show the inconsistency between their character design – the way they look – and their character development – the way they act, feel, and understand the world as a whole. The inconsistency between design and development is a common way to distort female characters and attract the “male gaze,” having viewers focus on appearance rather than the overall character; and yet, this flaw of design and development does not exist within Bayonetta’s character.

The comic book heroine Power Girl is a tough, short-tempered superhero who has all the superpowers of Superman and a very low tolerance for nonsense. Her outfit, though, suggests otherwise: it is a leotard, but with a huge hole at the chest, which reveals Power Girl’s unnaturally huge breasts. While Bayonetta does possess unnatural body proportions, mainly in her freakishly long legs, her sexual features – breasts and backside – are fairly normal. Power Girl’s obviously enlarged and showcased breasts attract the “male gaze,” inviting viewers to read her comic for sexual pleasure rather than for her actual story. Her sexualized character design contradicts her character development, ignoring her no-nonsense personality and making it apparent that her outfit and body were not of her own design. The only explanation for these features is that the creators wanted her to look that sexualized; nothing in her own personality and behavior suggests that she would ever wear such an outfit (especially with breasts as big as those – one jump and they are flying right out).

Another good example of character inconsistency comes from a recent controversial pose of a female character. In Blizzard’s new team-based shooter Overwatch, the most iconic character, Tracer, had a new victory pose that some people did not like. In the shot, she had an “over-the-shoulder” look, meaning her back was to the camera while her head looked over her shoulder. With her back to the camera, she shows off her orange behind, fully outlined in tight spandex. Tracer is a fun-loving, silly, and friendly character, but the pose had nothing “to do with the character [Blizzard] is creating.” The argument does not call out all female heroes in the game (a sniper, for instance, purposely “flaunts her sexuality” to distract her enemies, so it makes more sense for her to show off her behind), but it does not approve of Tracer’s pose because it showed that “at any moment [the creators] are willing to reduce [female characters] to sex symbols.” The pose contradicted her personality and was very jarring in comparison to her character development. The article sparked a huge discussion, to the point where Blizzard removed the pose altogether.

Bayonetta’s black, detailed body-suit, on the other hand, establishes her as a sexy character. She is a flirtatious and dramatic dominatrix, not afraid of showing off her sexy body to anyone who is willing to watch. Her skin-tight outfit, in both games, pronounces her behind, but not so much her breasts. It creates a strange balance of sexualization, not making her too top-heavy but still allowing her to flaunt her body.
It would not make sense if she wore a modest outfit, just as it does not make sense that tough and cranky Power Girl wears an overtly suggestive one. Her design works well and builds upon her character development, making her a more consistent character overall, one that does not feel like the creators gave her a sexy outfit for the sake of sexiness. The most important aspect of Bayonetta’s character design is that her sexuality does not seem out of place. Bayonetta takes full control of her sexiness and unashamedly shows it off. She is a dominatrix, sexy yet intimidating and powerful. She poses erotically as she performs killing blows on her enemies. She summons demons fully naked, making the most ridiculous and sexy stances in the game. Everything about Bayonetta reflects over-the-top female sexuality that startles, shocks, and impresses its viewers. Her hair-woven outfit and appearance in general match the abundant sexiness of her speech and actions. Unlike many other female character designs that have no business being sexual, Bayonetta’s design encompasses her sexuality in all aspects of her person: her outfit, her personality, her behavior, and her gameplay (more on that later). She has agency – the ability to create and change – over her sexuality and revels in it, using it as a means to portray who she is as a person. If you took away her sexiness, Bayonetta would cease to be Bayonetta.

In both of Bayonetta’s games, she exhibits her over-sexualization in two media: cutscenes and gameplay. The two produce different iterations of Bayonetta: the prolonged cutscenes are more blatantly sexual than the gameplay, but the latter produces many instances of Bayonetta flaunting her body, triggered by the player’s choice of attack. I will discuss both separately in order to further argue my case that Bayonetta has the ability to create and change her own over-sexuality.

Bayonetta in Cutscenes

When you are first introduced to Bayonetta, chances are that you think she is just another over-sexualized female character in gaming. You load Bayonetta 2 and start the story by watching the first cinematic cutscene of the game. You see Bayonetta in a fancy shopping outfit strolling down the sidewalk, when a fighter jet barrels towards her. She stops it, leaps on top of another one in midair, and faces the horde of angelic monsters that confront her. They attack, she dodges; but in the process, the angels’ weapons tear away her outfit, presenting her in the middle of the sky fully naked (luckily, shading prevents the game from being pornography). She then summons her hair to wrap around her nude body, creating her outfit (yes, it is made out of her hair) as she poses dramatically. She then proceeds to destroy the angels in a series of sexy and over-the-top attacks before the game drops you into gameplay.

Bayonetta’s cutscenes are, to put it mildly, absurd. If players manage to survive the opening cutscene, then they realize Bayonetta’s over-sexualization definitely earns the word “over.” Bayonetta performs ridiculous stunts: flying through hell on a giant demonic horse, avoiding weapons by spreading her legs, or participating in a sexy posing contest with an enemy angel. She may perform her actions in sexual ways, but everything happens so fast and so outrageously that it leaves one in utter surprise rather than in sexual pleasure. Bayonetta will summon a demon and slap an angel’s behind in the same scene, and the player can barely process all the images and what they imply.
The over-sexualization of the opening scene is mainly for shock value: the combination of the presentation and the subject material makes it hard for the viewer to take everything seriously. Bayonetta’s sexuality is less for visual pleasure than for making people stop and question what they just saw, to rethink the entire situation that Bayonetta is in. This is especially true if you play the game for the first time and have never seen the cutscenes. Bayonetta’s over-sexualization is so absurd and over-the-top that it becomes comical – it is nonsensical shock-value entertainment. Even when players watch the cutscenes and Bayonetta’s poses for the third or fourth time, nothing gets old; it’s still fascinating how Bayonetta creates an extravagant show out of her own sexuality.

Bayonetta in Game Design

People are sometimes rightfully frustrated with female characters in video games because of their narrative placement: that is, when a woman appears in the narrative and what she does to impact the story. Many women appear in DLC, or in no gameplay at all–they are there to help, but are never fully playable. They are in the game to be rescued, to help the main protagonist while never accomplishing anything by themselves, or for the infamous factor of sex appeal. This kind of representation of women becomes more frustrating when the designers decide to sexualize female characters who are crucial to the narrative. For example, Kaine from Nier is not playable at all and shows up to assist the protagonist Nier most of the time. She is important to the story, but her apparent lack of agency over her own story (she gets possessed by a monster at one point, and it’s up to the player whether she lives or dies) can be very disheartening for people who want her to have more control in her own narrative. In addition, her skimpy outfit barely covers her body, revealing most of her behind, and greatly contradicts her cold and angry personality (much like Power Girl). Her character placement is frustrating because of her lack of agency over her own story and her contradictory design, which invites the “male gaze” to mostly “gaze” at her in cutscenes. Kaine’s sexualized (and unnecessary) character design and placement make it seem like she is in the story mainly for the player’s pleasure, and not for consistent character development.

Bayonetta, on the other hand, is the main and most prominent character of her game (it is named after her). Her character placement is center stage, and the player does most of the action through her character. She is playable for 98% of both games, and, more importantly, she is the active character of the game. Active characters change the environment and story according to their own will. In the first Bayonetta, she decides to head to the ancient city of Vigrid to figure out her past and find her lost memories. Without spoiling anything, in the end she reclaims who she is as a person and fights both for what’s right and for the safety of the world. In Bayonetta 2, she decides to venture into Inferno itself, ignoring the improbability of survival, in order to save her near-dead best friend Jeanne. She rekindles relationships with many characters and saves the world in the process, again. In the first Bayonetta, the plot revolves around her self-discovery and asserting her right to live, and in the second, the plot follows her selfless adventure to save her one true friend. She is not a side character present in order to assist the protagonist, nor is she unessential to the plot.
The narrative would not exist without her taking charge, without her deciding her own fate, and without her overcoming all obstacles with the strength and willpower of her one-woman army. Not only does she direct the game’s story (as a well-designed character should) by making her own decisions and changing the course of the narrative, but Bayonetta has also become one of the most powerful figures in video games. This is important because, as I have stated before, many women who are sexualized are portrayed as weak compared to other characters (protagonists especially) in the story. Bayonetta, on the other hand, is ridiculously strong and is arguably the strongest character in the game.

In terms of gameplay, Bayonetta has one of the most fluid and powerful combo systems, containing a large variety of options that never make the gameplay dull. She acquires different weapons that can pair with other weapons to form even more combos. These weapons range from sharp and deadly swords to a giant hammer, from ice skates to whips, and from a living scythe to a bulky grenade launcher. Every weapon has a unique demon that Bayonetta can summon, either if the player uses the right attack combination or if the player initiates Umbran Climax, a mode in Bayonetta 2 in which Bayonetta’s attacks increase in magical strength. In this mode, Bayonetta manifests larger versions of her normal attacks and can summon her large personal demons more easily. Everything on screen explodes in purple magic with Bayonetta glowing, and the player gets a rushing sense of exhilaration. They can feel her magical power whenever they destroy a fleet of angels with her giant, demonic punches, and they can feel the true strength of an Umbra witch when they annihilate a boss as big as a battleship. The player feels powerful through Bayonetta, feels that they, through her, can conquer any obstacle standing in the way. Cutscenes may show off some of Bayonetta’s fighting power in sexy and comical ways, but players get a real understanding of her ridiculous and amazing strength through gameplay. Her combos and demonic summons demonstrate the full force of an Umbra Witch, a being who is not to be trifled with.

To top it off, Bayonetta incorporates her sexiness into all of her gameplay. Some attack combos have Bayonetta perform acrobatic stunts, which she finishes with dramatic and flirtatious poses. For example, when Bayonetta attacks with her “breakdance” move, she spins around on the ground, shooting bullets in a whirlwind that does great damage to nearby enemies. She finishes this attack by lying on the ground with her behind in the air, arching her back and winking directly at the camera (breaking the 4th wall). Torture attacks are special summons that produce great damage or instantly kill enemies. When she summons them, Bayonetta usually performs another sexy pose; for example, she can summon a tombstone to flatten enemies, and when the heavy stone lands, she squats with her knees spread and makes a face, all as if she is posing for the camera (the flattened enemy is behind her). The funniest are the punish attacks, where she will sit on top of a fallen enemy and slap them to death, usually on the butt. It is highly sexual and creates the picture of Bayonetta as a dominatrix; yet the player prompts Bayonetta to use her punish attacks because they are incredibly efficient in dealing with enemies, not just because they are sexy.
The most sexually revealing of Bayonetta’s attacks are her demonic summons, yet they are the most spectacular parts of the game. Bayonetta summons her large Inferno demons at the end of mini-boss fights and boss fights, after certain attack combos, or during Umbran Climax in Bayonetta 2. She can call forth beasts such as Gomorrah, a dragon; Diomedes, a unicorn whose horn is a giant sword; and the infamous Madame Butterfly, her personal female demon, whose limbs Bayonetta summons the most for fighting. The witch even uses her to fight against an equally strong opponent angel, resulting in a grand aerial battle between Bayonetta and a Lumen Sage in the foreground (the fight the player controls) and between the giants Madame Butterfly and the angel Temperantia in the background. Demonic beasts encompass the entire screen, finishing off other large enemies with ease. In order to summon such monsters, Bayonetta uses her hair; her hair, though, is what makes up her clothing, so in order to summon demons, Bayonetta has to be naked. It is a little startling when a player first summons Madame Butterfly’s fist and Bayonetta appears nearly naked on the screen. It is not complete nudity: gray shading covers her breasts, stomach, vagina, and behind, but she still does not wear any clothes. She will appear like this in regular combat, whereas in cutscenes she will be naked, but with her hair blocking anything inappropriate. When she summons demons for a grand finisher, her nakedness is more suggestive, as the gray shading is no longer present and only weaves of hair cover her private areas. It is over-sexual to the extreme: the over-the-top, ridiculous, and absurd nature of Bayonetta’s near-nudity adds even more to the shock value of the game, making players ask whether what they saw on screen really happened. Playing as Bayonetta gives the player a whirlwind of initial confusion and shock, but it never detracts from the thrill of overpowering enemies by summoning a giant canine to tear them to shreds.

Bayonetta’s attacks are graceful and powerful, exhibiting the female body throughout the gameplay. Her moves mean business, and that’s what is so great about Bayonetta. She is over-sexualized, but she defeats her enemies with overwhelming strength. She fights legions of both Paradiso and Inferno, angels and demons, minions and giant bosses, and is still able to pull a dramatic pose at the end of a fight. Her prominent display of her feminine body is empowering; in art media, the female body is usually presented as a sign of weakness – something undesirable for the self to become – or as a sexual interest – something desirable for the self to possess. Bayonetta demonstrates through her gameplay that having a female body does not make a person any less powerful: that one can have sexy breasts or a sexy behind and still defeat any enemy that comes one’s way. She proves that the female body is not a sign of weakness but of strength, because she accepts the body she was given and is proud of it.

Conclusion

Bayonetta’s agency over her over-sexualization makes her a wonderful female character. Many female characters have no agency at all, their visual design mismatching their personality and behavior, thereby creating bad character design. With many fictional female characters – whether in movies, TV shows, animation, comics, or video games – female sexuality is present for the spectators, and not for the woman herself. She is sexy for the appeal of the audience, but not for her own tastes and pleasure.
Bayonetta, however, fully enjoys her over-sexualization and professes it to the world, which is apparent in both cutscenes and gameplay. Who would perform sexy poses in the midst of battle if they did not love their own body? She has full agency over her entire character – she owns her outfit, her sexiness, her personality, her narrative actions (meaning the decisions she makes within the story), and her goals, and nothing stops her from believing in herself, sexiness and all. Nor does her sexiness make up her entire character: she is courageous, witty, commanding, headstrong, and compassionate toward her friends and family. Bayonetta is not a character whose game exists only to exhibit her undying sexiness: she is there to teach her enemies a lesson and display real emotions at the same time. Looking sexy while doing it is just a good bonus. Bayonetta exemplifies that it is okay for a woman to be sexy if the woman wants to be sexy; you can have characters with sexy breasts, a sexy butt, and a sexy personality, and that’s fine as long as the characters are okay with it. This applies to both fictional characters and real people, male and female. Yes, it would be outstanding to have a female character who is just as powerful, prominent, and successful as Bayonetta without the intense over-sexualization; but I, a straight woman, do enjoy Bayonetta’s abundant sexiness because, for once, she also enjoys her own sexiness and celebrates it for her own sake.

Laila Carter is a featured author at With a Terrible Fate. Check out her bio to learn more.

http://www.pastemagazine.com/articles/2014/10/femme-doms-of-videogames-bayonetta-doesnt-care-if.html

Feminist Film Studies, to be precise.

https://gomakemeasandwich.wordpress.com/2011/06/03/bayonetta-and-the-male-gaze/

The original post and the huge discussion it caused: http://us.battle.net/forums/en/overwatch/topic/20743015583. Another video explaining the pose: https://www.youtube.com/watch?v=hf5SdrJoOdc

The only reason I had any problem with the pose is that Tracer had no butt to show off – it’s non-existent and looks weird to me.

Kaine is a hermaphrodite, but most people use she/her pronouns to describe her.

I understand the argument for why she reveals most of her skin – that she must expose the most skin to sunlight in order to control the monster possessing her – but it’s still shady. It is also in great contrast to her cold, calm, and shy personality.

The other 2% of the time, you play as Jeanne, her best friend, and Loki, an important side character.

For more on active and passive characters: http://readingwithavengeance.com/post/77195680492/on-writing-active-vs-passive-characters

As a video game avatar, Bayonetta cannot completely control her actions: her fighting and traveling are in the hands of the player. But in terms of her crucial decisions and how she responds to certain events, Bayonetta has control.

Beyond the Moral Binary: Decision-Making in Video Games

–by Richard Nguyen, Featured Author.

Video game designers engineer worlds receptive to player input. Players are empowered with the agency to make decisions that can change the course of the game’s narrative and the characters within it. This decision-making is a core, interactive tenet of video games. In emulating the experience of choice and deliberation, there are various elements that designers must consider. Key among them is morality, or the principles humans hold to distinguish between “right” and “wrong” behavior, and how it influences player choice.
The mechanics of moral decision-making across video games have been diverse, and only sometimes effective. In the time I have spent playing narrative games with morality as a central component and game mechanic, I have found that the games with the most minimal and least intrusive systems better emulate not only moral decision-making, but also the emotional consequences that follow. Presenting morality as its own discrete game mechanic is counterproductive, because it diminishes the emotional impact and self-evaluation of moral decision-making.

To begin, I will be applying a rudimentary framework of morality to fuel this discussion, because the focus is not on morality proper, but on how it influences player choice. Video games that use the moral binary framework present the player with three possible moral courses of action: good, bad, and, sometimes, neutral. For our purposes, we will assume that the majority of players are good-natured, and believe in what society deems and teaches them to be “right” or “good.” At the very least, players understand what should be done. This includes, but is not limited to, altruism and cooperation. Good moral decisions often require self-sacrifice to achieve a greater good: your avatar will sacrifice money for the emotional satisfaction of having donated to a virtual beggar. “Wrong” or “bad” behaviors, then, violate moral laws. Such behaviors include, but are not limited to, murder, lying, cheating, and stealing. Video games present morally “wrong” or “evil” choices as temptation, the desire to make the easier, selfish choice. Of course, life is not so simple as “right” and “wrong” or “good” and “bad.” To clarify, I will be using “good” and “right” to refer to the same concept, and will be using them interchangeably. The same applies to “bad” and “wrong.” The “neutral” alternative describes behaviors with no moral value, which is often presented as inaction in gaming scenarios. A subtler variant of the “neutral” choice is the “morally gray” choice, occupying a middle area between “good” and “bad” in which the moral value of an action is unclear. For instance, a typically “wrong” behavior, such as stealing, may be inflected with the “right” intention, such as stealing medicine in order to save your dying sister. In this situation, it is difficult to judge the action as fully “good” or “bad.”

I outline this moral theory under the assumption that players’ moral beliefs will extend to the decisions they make as the avatar in the game world. Of course, players often experiment with moral decision-making in games by “role-playing” the good or bad person, but such an approach already requires players to acknowledge their pre-existing moral beliefs. At that point, players become detached enough from the avatar, knowing that the avatar’s actions do little to reflect their own moral selves, that they care drastically less about the consequences of such actions. I will instead be examining the cases in which players seek to make decisions in games as if their avatars were a full extension of their moral selves. In other words, players make decisions as if their own moral selves were truly operating in this world. Such players would therefore care more about how accurately their decisions reflect their moral beliefs. Otherwise, there are little to no personal stakes involved in decisions when you know they say nothing about you.
Designers often abide by the convention that morally right decisions are selfless and performed for the greater good, while morally wrong decisions are selfish and performed for personal gain. Players who make the morally right decision often engage in the more difficult and complicated narrative pathway. For instance, choosing to ignore a mission directive in order to save an endangered life may lead to punishment, and requires the player to work harder to make up for lost time or resources. In spite of the extra layer of difficulty, these morally right decisions are more emotionally rewarding because they preserve the player’s conscience. Again, we assume that the majority of players inherently abide by what society deems to be right and wrong. Players who make the morally wrong decisions engage in the more expedient pathway that facilitates direct personal gain. For instance, choosing to ignore endangered civilian lives in order to fulfill the mission directive leads to no direct punishments. Instead, the consequences of this morally “wrong” decision come through the emotions of guilt and disappointment due to its violation of the player’s conscience. This is not to say that players are discouraged from making morally “wrong” decisions in video games. Rather, having players choose either a “good” or a “bad” decision places responsibility in their own hands, rather than the writer’s. Allowing players to explore the emotional consequences of both ends of the moral spectrum forces them to reevaluate their own beliefs. In the case of the moral binary in video games, such reevaluation turns into the reaffirmation of societal norms. Designers use this moral theory in decision-making to reinforce the conventional meaning of “right” and “wrong.”

The two primary elements of morality in a video game context are intention and behavior. The player’s intentions are enacted through the avatar’s in-game behavior. In other words, the decisions made in a video game are determined by player intention. The behavior can be objectively categorized into “right” and “wrong” according to the game’s narrative. However, the behavior carries with it the player’s intention, which cannot definitively be measured or categorized by the game itself. The player’s subjective experience is then the key factor in determining how well the video game emulates moral decision-making. What the avatar feels is independent of what the player feels as a result of a moral decision.

With the binary morality system, designers make a direct appeal to the player and his or her moral beliefs. The psychological phenomenon of “cognitive dissonance,” where one’s conflicting and inconsistent behaviors and beliefs cause discomfort, drives the consequences of moral decision-making. This internal, emotional conflict compels a person to change one of those beliefs/behaviors in order to reduce the discomfort. When good-natured players make a morally “wrong” decision in a video game, their beliefs will be inconsistent with their behavior. Even if the player acted unwittingly, or does not believe that they made a morally “wrong” decision, the game’s systems will still punish and treat them as if they did. For example, a person playing Grand Theft Auto 5 may fire a gun in public and not believe that it is wrong or against the law. The game’s systems, in the form of police, will nevertheless respond negatively. The player is left to reconcile his or her moral beliefs with those of the video game.
There are three likely responses when a good-natured person (as we assume the majority of us are) makes a morally wrong decision: (1) change your beliefs to be more consistent with your behavior, (2) live with and accept the discomfort and inconsistency, or (3) sublimate, and find a reason or rationale to justify your inconsistency. The idea is that cognitive dissonance creates the emotion of discomfort. The first two options are the truer dissonance scenarios, because they are done in response to such discomfort. Option (3), on the other hand, precludes discomfort, because the sublimation will have already taken place due to a third-party influence. Thus, players are not made aware of the inconsistency and continue, unaffected by their moral decision. From my experience, the most effective moral systems have compelled me to respond with Options (1) and (2), which most align with realistic moral decision-making and the phenomenon of cognitive dissonance. When a game provokes the visceral discomfort of making a decision you realize was inconsistent with your beliefs, you will ostensibly be more compelled to respond. When video games inspire Option (3), sublimation, the player transfers responsibility to a third party and is therefore relieved of any personal, emotional consequence. Sublimation allows players to rationalize or provide an external explanation for their behavior. Therefore, responsibility for that moral decision is displaced, which mitigates any true feelings of cognitive dissonance. This is not to say that Option (3) never occurs in realistic moral decision-making. I am arguing that the modern video game most often, and counter-intuitively, facilitates this transfer of responsibility, even when its goal is to appeal to or challenge a player’s moral beliefs through cognitive dissonance.

Now that I have clarified both my moral framework and the role cognitive dissonance plays in moral decision-making, I will analyze how these work in popular video games that use the moral binary framework. I will examine its role and evolution in several narrative-driven open-world and role-playing games. We will start with the simplest, most direct binary systems and work our way towards games that eschew the binary for more minimalist approaches.

In the Infamous series, the player must decide whether his avatar (Cole), a super-being with electricity-based powers, will be a “hero” (good) or a “villain” (bad). In order to secure the most successful playthrough, in which the player unlocks the strongest abilities and completes the narrative, players must commit to one moral path and constantly commit the deeds that earn them either good or bad karma points. Each path provides unique abilities inaccessible in the other, incentivizing commitment to one moral path rather than neutrality. As a result, players have access to only two viable playthroughs of the same story. The hero playthrough facilitates a precise and focused combat play style while keeping your electricity blue, and the villain playthrough facilitates a chaotic and destructive combat play style while turning your electricity red. In order to earn karma points, the player must constantly engage in activities consistent with the respective path, as demarcated by the video game itself. Good karma points are earned by helping citizens and choosing the good prompt instead of the bad during pivotal story events.
Bad karma points are earned by destroying the city, murdering citizens, and choosing the bad prompt instead of the good during pivotal story events. There are no neutral or morally gray options. A player’s karma meter is plastered on the heads-up display (“HUD”) to remind the player that their actions are omnisciently tracked and scored, essentially turning morality into its own mini-game. In spite of its blatant tracking and systematic reminders, Infamous’s binary morality system is comically shallow and ineffective in producing realistic emotional consequences. The game reduces moral decision-making to a binary, because it can only be completed upon fulfillment of either the hero or the villain pathway. The narrative makes its morality clear in that heroes are “good” and villains are “bad.” For the ordinary player, the only choice is then to consider whether they want to be consistent with their own good-natured beliefs and choose the hero path, or to deviate from the norm and explore moral violations as a villain. Aside from the joys of blowing everything up, choosing the villain’s path should then inspire some amount of discomfort, which should consequently lead to either (1) a change in player attitude to coincide with the behavior, (2) an acceptance of the discomfort, or (3) sublimation. The game’s blatant morality system in all cases inspires sublimation, and therefore fails to provoke any genuine cognitive dissonance within the player, for several reasons.

First of all, Infamous’s blatant tracking turns morality into a purposeful meta-game to be conquered. The goal of reaching the highest karma levels is therefore extrinsically motivated by in-game rewards such as unlockable abilities, rather than intrinsically motivated by the game’s narrative. The sheer volume of moral decisions the player makes as Cole is driven not by how the player would act, but by what moral pathway the player committed to at the very beginning. This allows for little moral experimentation on a case-by-case basis, as the player’s goal is to globally make either good or bad decisions.

Second, the game’s design ties skill progression to achieving full hero or villain status. This makes it difficult to completely finish the game if the player does not commit to a moral pathway. Thus, the game designers are obligated to provide players with the opportunity to “farm” karma points, in case they have poorly leveraged the karma system, to advance in power. Scattering redundant and bountiful opportunities to advance in karma level throughout the city diminishes the emotional impact of each moral decision. For example, there will be countless civilians on the street whom you can choose either to revive (good) or to bio-leech for energy (bad). This becomes mundane because (1) you have already made the same decision countless times before and (2) you do not have a choice, because your decision has already been made based on your playthrough.

Infamous presents morality as a game mechanic with clear, delineated consequences. Both pathways end in earning more powerful abilities. Because the game effectively asks the player to choose a side at the beginning of the playthrough, no further thought or questioning is required, and the player no longer feels any responsibility for their actions. Once players lose a sense of responsibility for their and their avatar’s actions, it is easier for them to dissociate themselves from moral acts that the avatar has performed.
The game itself tracks and quantifies the player’s moral choices and produces a predictable response every time. Any cognitive dissonance is displaced by the way the game virtually forces the player to commit to a single moral pathway in order to succeed. In games like Infamous, we submit to the game’s predetermined, simplistic morality, and are given no chance to evaluate our decisions based on our own moral beliefs. Granted, no one has ever expected Infamous’s binary morality system to be the paragon of moral decision-making in video games, or expected it to change anyone’s moral code. Yet it is clear that binary morality systems have become the rule, not the exception, in exploring morality in video games. For example, high-profile and critically acclaimed narrative games such as BioShock, the Mass Effect trilogy, and even the Fallout series all abide by similar moral mechanics.

In BioShock, the ending changes based on the player’s decisions about how to deal with its Little Sisters. The binary morality is as follows: save the sister (good) or harvest her (bad). Harvesting a sister will kill her in order to drain her life force and reap more economic benefits. The moral dimension of this decision lies in determining the fate of this narrative entity, in choosing whether or not to kill the sister. The good and right choice is to save the sister and restore her life, which provides less Adam (in-game currency) immediately but is rewarded with gifts of gratitude later on. One of the game’s central figures, Tenenbaum, explicitly denotes this to be the narratively good moral choice, especially since the most optimistic and humanist ending can only be achieved upon saving all of the sisters in Rapture. It is only in this ending that the Sisters help the avatar escape from Rapture. The cutscene, saccharine and hopeful, is accompanied by Tenenbaum’s affirmation of the player’s “good” morality. The morally bad and “wrong” choice is to harvest the sisters and essentially take their lives to receive more Adam immediately, but with no long-term reward. The bad ending (accompanied by Tenenbaum’s extremely bitter and dismissive monologue if the player harvests all the sisters) depicts the avatar’s brutal and power-hungry takeover of Rapture’s remains, and the splicers’ savage invasion of the world above the surface. The narrative makes evident, through Tenenbaum’s insistence upon humanity and these dichotomous endings, that there is a clear moral binary between good and bad.

Yet, by tying the moral decisions concerning the fate of these sisters to directly economic, rather than purely emotional, consequences, the game pollutes any potential moments of cognitive dissonance resulting from the morally “wrong” decision. What is initially posited as a measure of the player’s moral values is transformed into an exercise in economic impulsivity: whether or not players can delay immediate gratification for longer-term rewards. This is not to say that moral decisions can never be tied to economic consequences. Choosing between stealing and donating money holds unpredictable consequences and punishments, and one can get away with morally bad economic decisions while feeling internal guilt. For BioShock, however, the endings clearly attempt to evoke emotional consequences, particularly through Tenenbaum’s shaming of the player in the bad endings, with no further reference to economic rewards.
The experience of cognitive dissonance would be one where the morally "bad" player either (1) changes their beliefs to be more consistent with their actions (believing that they were inherently justified in or truly wanted to harvest the Sisters) or (2) accepts their actions as bad and lives with the shame of having murdered little children. Thus, it seems as though the added economic layer of Adam rewards in moral decision-making was introduced more out of convenience, as a way to give the player Adam rather than to inspire a moral quandary. By the end of the game, players may place responsibility on economic motivations, rather than personal or internal motivations, as the driving force behind their decisions. Moral responsibility is displaced by the justifications of either achieving a certain ending cutscene or maximizing economic gain. As a result, the player experiences no dissonance, because their "bad" actions are believed to be consistent not with their moral beliefs, but rather with this other economic motivation. While BioShock does a better job of posing a more complicated moral situation than the simple choice of "being a hero" versus "being a dick," it instead settles for the economic quandary of choosing between "being a rich hero" and "being an impoverished dick." While I adore the Mass Effect trilogy, I would be foolish to believe that people did not already decide to pursue a full "paragon" (good) or "renegade" (bad) playthrough within the first ten minutes. Paragon choices most often involve compassion, non-violence, and patience, whereas renegade choices are aggressive, violent, and intimidating. Narratively, paragon decisions are framed as heroic and met with an NPC's openness and friendliness. Renegade decisions, on the other hand, are framed as apathetic and ruthless, and met with an NPC's fear and disapproval. The game's feedback loop then reinforces the idea that paragon is conventionally good and renegade is conventionally bad. The entire morality mechanic in this game revolves around the choices made in conversation. In fact, the game's dynamic conversation wheel facilitates moral decision-making without the player even having to look at the dialogue options: the upper right and left segments of the wheel are paragon choices, and the lower right and left segments are renegade choices. The right middle section is reserved for neutral options, but is not a viable option for those looking to maximize their moral decision-making output. While being neutral is, in and of itself, a moral decision, the game grants little to no narrative benefit for doing so, and players are positioned to progress to either full paragon or full renegade status. Players can practically achieve full paragon or renegade status without even reading or thinking about the dialogue options they choose. At this point, players have broken the moral binary system, because player actions no longer directly reflect their beliefs, eliminating the possibility of cognitive dissonance and genuine moral quandaries. Mass Effect nearly transforms moral decision-making into an automatic, thoughtless process. Instead of playing according to what you deem the appropriate moral choice in different contexts, your morality is globally predetermined by the type of playthrough you wish to achieve. There are incentives and narrative rewards for committing to either paragon or renegade, and nothing is gained by choosing neutral dialogue options.
For instance, Commander Shepard begins as a neutral personality to fit the player, and is strongly characterized by the moral decisions the player makes at the dialogue wheel. There is even a meter that tracks how good or bad your Shepard is on a moral spectrum. You start in the neutral gray zone in the middle, and "progression" is achieved whenever your tracker moves towards paragon's blue side or renegade's red side. Players can then justify morally wrong acts as playing by the game's moral rules rather than their own. By turning morality into a game in and of itself, you undercut any emotional consequences these decisions may have on the player. The Fallout series has done well in both perpetuating and addressing the problematic moral binary in video games. In Fallout 3, your behaviors are omnisciently tracked and marked under a karma score, labeling both the player's and the avatar's actions as good or evil. Good choices include granting charity to survivors in the wasteland, while evil choices include stealing, even when no one is looking and even if the object were but a mere paper clip. This is another unrealistic moral scenario, in which every time you steal a paper clip you receive a notification and an unpleasant screech denoting that you have lost karma. It is almost as though I avoid making evil choices not to avoid guilt or to save my karma score, but primarily to avoid that unpleasant screech. Here is yet another case in which the game's progression system rewards committing to one moral side, and every decision you make is under scrutiny and met with predictable consequences. Upon learning that the only penalty for stealing is a bit of on-screen text and a screech, why not just steal everything when no one is looking? Any guilt you might feel is regularly diminished by the reminder that this morality system is but a meta-game that can be exploited to increase your karma level by repeatedly donating caps to any schmuck in the wasteland. Fallout: New Vegas takes measures to address this issue by incentivizing players to maintain a morally neutral playthrough via dedicated and rewarding perks for neutrality. However, there still lies an issue in the blatant "gaminess" of its morality systems, where players feel as though their moral decisions are motivated extrinsically rather than intrinsically. In this case, players feel the need to satisfy the game's expectation to commit to one of two (or, for New Vegas, three) moral pathways because of the various benefits and perks that come with such a playthrough. Not only that, but the Fallout games also fail to attach narrative consequences to a player's morality. For the sake of preserving this open-world game's consistency across playthroughs, the narrative is largely unaffected by the player's moral decisions. NPCs respond equally to "bad" and "good" avatars. The game's primary response to moral decisions is merely mechanical: the omniscient tracking meter and the consequent on-screen notification whenever the player makes a moral decision. The drastic disconnect between the player's moral decisions and the game world's frigid indifference to those actions inspires little questioning or thought. Players, knowing that their actions have minimal consequence, place moral responsibility upon the game's system rather than upon themselves and their own moral beliefs.
By the end, the experience has boiled down to accommodating the game's own defined sense of morality instead of exploring your own beliefs. However, not all hope is lost! Some games come closer to emulating the experience of moral decision-making. Telltale's The Walking Dead series remarkably captures the insecurity, spontaneity, and unpredictability that often come with moral decision-making. Throughout the game's interactive cutscenes, there are often timed decisions players must make between four options. The player never knows which decisions are tracked, nor what consequences they might have, whether short-term or long-term. The only indicator players receive is a line of text that reads "[insert character name here] will remember that." Even in that statement, the impact is ambiguous, and the player is left to discern whether they made a good or bad decision according to their own morality, rather than that of the game's narrative. Mechanically, The Walking Dead presents no explicit menu or HUD tracker for the player's morality level, provides little to no feedback on these decisions' narrative and gameplay impacts, and inflicts unpredictable consequences. By contrast, the games mentioned above explicitly posit their own binary moral systems: firm rules that the player must play by. In addition, those games predictably provide information and definitive feedback on moral decisions, lessening their emotional impact in the long run. Players, once made cognizant of the extrinsic forces that may be guiding their decisions, feel relieved of any moral responsibility for choices made in these narratives, because player action is driven by, and can be explained by, a factor other than their internal beliefs. In The Walking Dead, a minimalist morality system with no clear categorization or consequence keeps responsibility in the player's hands. Such systems may still track player choices and make them instrumental to the progression of the story, but they do little to display or indicate to the player the value of their decisions and how those decisions will impact the narrative, which feels more realistic. Choices are more satisfying when the player understands or feels that they have been intrinsically motivated, the result of their own agency unpolluted by other incentives. The Witcher 3 also succeeds in unpredictably imbuing morality into the seemingly mundane scenarios that occur in its world. Aside from major quest lines that pose variable, complicated moral decisions, the decisions the player makes through Geralt's ordinary day of work reach a sobering, disarming level of emotional realism. Geralt constantly runs across merchants, beggars, looters, and all sorts of unsavory characters throughout the game world. More often than not, the player must decide whether to intervene and how to resolve the conflicts they walk into. Consider one example: a townsman asks me to find his missing child in the woods. Here, I have the opportunity to haggle for more pay beyond my standard fares, even though it is evident that he holds very little of value in his hut of sticks and mud. I eventually discover the son's bones, left behind by wolves. Upon my return, I am presented with two more difficult decisions. I can choose to lie about his son's fate or to tell him the truth, a subjective moral quandary I will not pursue here.
Either way, he refuses to pay me because I have produced no evidence, though realistically he is likely disheartened by his loss and has no money anyway. At this point, I can choose to "Witcher" mind-trick him into paying me, take the money by force, or leave him to his grief. Even if I choose to be "evil" and force him to pay, I receive so little money that it is insignificant, and my "evil" deed is not sufficiently justified by the economic gains. The difference between such economic decisions in The Witcher 3 and in BioShock is that, while both are tied to "bad" morality, BioShock's immediate rewards and short-term gain rationalize the decision. Here, the economic rewards are so blatantly insignificant that the only rationale behind such a deed most likely stems from the player's indifference to this NPC's plight. The Witcher 3 is therefore more likely to provoke cognitive dissonance, because morally "bad" decisions cannot be rationalized or justified by any other incentives. I will admit that I opted to mind-trick him for his money, as a spur-of-the-moment decision. I took his handful of coins and left him to grieve for his son. What is remarkable is that nothing guided me to make such a morally questionable decision. Money mattered little to me, so it must have been a matter of pride: desiring some acknowledgment for the completion of work. I would like to think of myself as a good person, and I always aspire to be one in video games. Yet no substantial financial, mechanical, or other extrinsic factor possessed me to exploit the man. The worst part is, I got away with it, and I have to live with this decision throughout the rest of my playthrough, not to mention the chance that I may see that man again. At that moment, I felt like a bad person, and chose to live with the discomfort. This side quest alone presents at least three moral choices that work. They work because The Witcher 3 holds no formal morality system, which means none of your actions is omnisciently tracked or denoted on the HUD. More importantly, the consequences and punishments are unpredictable and change depending on context. My interactions with the desperate townsman above may be repeated in different scenarios and stories with different effects. I found these numerous little scenarios to be the most effective because the game appeared to be indifferent to my choices. The Witcher 3's world of vice and monsters holds no definitive criteria defining good and evil actions, and therefore does little to mechanically mark them, such as through on-screen notifications. This places all responsibility upon the player to (1) determine what is right and wrong based on their own beliefs and (2) deal with the consequences (e.g., guilt) of their own accord. Beyond crimes committed in the city, the game realistically grants you the freedom to be both the hero and the dick without formal judgment beyond your own self-evaluation and the unpredictable reactions of narrative agents. This is not to say that the game holds no morality at all, but that it does not commit to an objective, explicitly defined moral binary. The moral universe is determined not by the game itself, but by the agents within it, such as NPCs, who interact with the player and present their own diverse moral beliefs. Self-contained moments in other video games also succeed in provoking realistic moral quandaries.
For instance, Red Dead Redemption: Undead Nightmare has a side quest in which you hunt for a monster that is terrorizing country folk. You find that it is a peaceful sasquatch, the last of its kind. You must choose between killing it to satisfy the bounty and, in a sense, end its loneliness, or leaving it to live and die in solitude. Here, there is no clear good or bad, even if the choice is still binary. The choice therefore also has no clear or predictable consequences. You will have to live with this permanent, immutable choice for the rest of the game, as the game itself will be indifferent to your decision. Games like The Walking Dead and The Witcher 3 capture an essential component of moral decision-making: internal conflict. One's cognitive dissonance is most active when these moral decisions have no extrinsic explanation or justification. Rather, the quandary is found within, an internal conflict propelled by self-evaluation. Discrete morality systems, such as the prominent binary system, may actually detract from the emotional impact of moral decision-making because they so readily and easily provide players with an extrinsic justification for their behaviors. By turning morality into an explicit meta-game, designers may unintentionally displace the player's responsibility for their own actions and hinder the effects of cognitive dissonance in moral decision-making. Minimalist game design for moral decision-making better matches the moral experiences of ordinary life. Should I steal a cookie from the cookie jar? No one will know. The lines between good and bad are realistically blurred, because there exists no omniscient authority (unless you count your conscience) to tally all the karmic decisions you have made in a day. At the end of the day, moral experiences in video games should not be determined by karma meters and reward systems. Richard Nguyen is a featured author at With a Terrible Fate.
https://withaterriblefate.com/tag/gamedesign/
I haven't tried it yet, but I was excited to find an egg-free chocolate chip cookie recipe.

Chocolate Chip Egg Free Cookies
Serves/Makes: 2.5 dozen | Ready in: < 30 minutes | Difficulty: 3 (1 = easiest, 5 = hardest) | Categories: Chip Cookies

Ingredients:
1 cup butter or butter-flavored Crisco
1 teaspoon vanilla extract
1 cup powdered sugar
1 1/2 cups flour
1/2 teaspoon baking soda
1 cup quick-cooking oats
1 cup semisweet chocolate chips

Directions:
Preheat oven to 350 degrees. Cream together butter, vanilla, and powdered sugar in a large bowl. In a separate bowl, combine flour, baking soda, and oats; stir into creamed mixture. (Mixture may be slightly dry - do not add liquid.) Mix in chocolate chips; stir to combine. Drop rounded tablespoons of dough, 2 inches apart, onto ungreased cookie sheet. Bake 8-10 minutes, or until golden brown. Cool cookies on baking sheet; carefully remove and store in an airtight container.

Source: Royal Pontaluna Bed and Breakfast, Spring Lake/Grand Haven, Michigan.
https://www.peanutallergy.com/boards/egg-freenut-free-chocolate-chip-cookies
Patrick "Pat" Malloy Andrews, age 69, of Harrison, passed away Tuesday, September 4, 2018, at his home. He was born July 20, 1949, in Harrison, the son of Norman and Lillian (Malloy) Andrews, who preceded him in death. Pat had been a member and served on several committees at First Christian Church in Harrison, where he was a deacon and elder. He was also a youth Sunday school teacher for many years and attended several working mission trips with the youth to help clean up and rebuild less fortunate communities. He was a member of CrossRoads Community Church, where he helped lead the security team and was passionate about keeping the children and others safe while they attended church. Pat enjoyed being a Stephen minister and was also a Stephen leader in the Stephen's Ministry group at CrossRoads, which helps those who are facing crises in life. He was also part of the Hospital Ministry group, in which he visited members of the congregation and their families. He went beyond what was expected by driving members to out-of-town hospital and doctor visits; tales of him driving people on multiple trips to Little Rock or the VA Hospital in Fayetteville are not uncommon. Pat started his funeral service career at Christeson Funeral Home, which was co-owned by his parents and Dr. William Christeson. He went straight from high school to the Dallas Institute of Mortuary Science, where he received his credentials as a funeral director/embalmer. Aside from being a lifeguard in high school, Pat never worked anywhere but the funeral home. In 1976, Pat and his brother, Mike, bought Dr. Christeson's share of the funeral home prior to their father Norman's passing. In 1977, when Norman passed away, Pat took over management of the funeral home at the age of 28, and for the next 32 years he served the people of Harrison and surrounding Boone County. He married Debbie Edmonson December 17, 1982, and she served by Pat's side for 26 of his 32 years in the funeral industry. When Pat was 60, Christeson Funeral Home was sold to the Roller Funeral Homes of Arkansas, and for the next five years he continued as manager, retiring at the age of 65 after serving area families for 47 years. He retired from the funeral industry in 2014. In 2017, Pat was inducted into the Arkansas Funeral Directors Hall of Fame. Pat was a PTA president for several years; he visited area schools all over Boone and surrounding counties in the 1980s and 1990s, speaking to students about suicide prevention and the consequences of drinking and driving; and he helped with the Cub and Boy Scouts. He also helped with the mock drunk-driving crash scenes that were staged each year at the area schools, usually before graduation time. Pat was a member of the Harrison Roundup Club in the 1980s and 1990s; was on the K-Life board; and was instrumental in the startup of Hospice of the Hills in 1992, serving on its board of directors and as president. Pat was active in the community before he fell ill in December 2017, and there is no telling how many memberships he held and how many benevolent services he provided to the community over the years that were not recalled by his friends and children. One thing that will always stand out is the assistance he provided when it came to funerals for babies and toddlers: he never charged a family for those services, feeling that the situation was heartbreaking enough for the parents. Christeson Funeral Home was the only funeral home in the area with this practice, and it continues to this day.
Before retiring, Pat already had a plan to help others and was working on his next chapter in life. Upon retirement, he became licensed and certified to teach gun safety courses and founded Be Safely Armed. He became an NRA instructor and taught concealed carry. His heart and focus were always on gun safety. He had previously held world titles in competitive pistol shooting. Pat was a very passionate and accomplished man, but the one thing you cannot attach to a committee or award is the number of people he comforted and took care of during their worst of times. He gave back to the community when families were at the lowest points in their lives, and that is one reason so many people loved Pat: the kindness and generosity he showed during those dark times. He cried with families, he prayed with families, but most importantly he never forgot about them. Pat is survived by his loving wife of 35-plus years, Debbie Andrews, of the home; his son and daughter-in-law, Brandon and Janet Andrews; his daughters and son-in-law, Amber and Hugh Thomas and Kasey Andrews (Layman Dunaway); his brother and sister-in-law, Michael and Kay Andrews; his sister-in-law and brother-in-law, Denise and Landis Dutton; his grandchildren, Drew Curtis, Claire Curtis, Abigail Andrews, Anna Andrews, Lincoln Sims, Sarah Thomas and Joseph Thomas; a host of other family; and many dear friends. He loved his children and grandchildren deeply. Visitation is 5:00 to 7:00 PM, Thursday, September 6, 2018, at CrossRoads Community Church, Harrison. Funeral service is 2:00 PM, Friday, September 7, 2018, at CrossRoads Community Church, with Bro. Johnny Walters and Bro. Paul Braschler officiating. Interment is in Maplewood Cemetery. Pallbearers are Drew Curtis, Ronnie Carpenter, Ricky Morris, Matt Odom, Hugh Thomas and Layman Dunaway. Honorary pallbearers are Lynn and Sue Jenkins, The Roller Family, Dr. Rebecca Simon, CrossRoads Church, Friends of the NRA and the Harrison High School Class of 1967.
http://www.arfda.com/Resources/NewsAnnouncements/tabid/95/articleType/ArticleView/articleId/149/Pat-Andrews.aspx
Recent advances in computer technology and wireless communications have enabled the emergence of stream-based sensor networks. In such sensor networks, real-time data are generated by a large number of distributed sources. Queries are made that may require sophisticated processing and filtering of the data. A query is represented by a query graph. In order to reduce data transmission and to better utilize resources, it is desirable to place operators of the query graph inside the network, and thus to perform in-network processing. Moreover, given that various queries occur with different frequencies and that only a subset of sensor data may actually be queried, caching intermediate data objects inside the network can help improve query efficiency. In this paper, we consider the problem of placing both operators and intermediate data objects inside the network for a set of queries so as to minimize the total cost of storage, computation, and data transmission. We propose distributed algorithms that achieve optimal solutions for tree-structured query graph topologies and general network topologies. The algorithms converge in Lmax(HQ + 1) iterations, where Lmax is on the order of the diameter of the sensor network and HQ represents the depth of the query graph, defined as the maximum number of operations needed for raw data to become final data. For a regular grid network and a complete binary tree query graph, the complexity is O(√N log2 M), where N is the number of nodes in the sensor network and M is the number of data objects in a query graph. The most attractive features of these algorithms are that they require only information exchanges between neighbors, can be executed asynchronously, are adaptive to cost change and topology change, and are resilient to node or link failures.
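Though the abstract does not spell out the algorithm, the objective it describes (minimizing combined storage, computation, and transmission cost for in-network operator placement) can be illustrated with a deliberately tiny sketch. Everything below (the 3-node line topology, the sizes and costs, the single operator, the brute-force search) is an invented toy, not the paper's distributed method:

```python
# Toy illustration of the placement objective (not the paper's algorithm):
# choose a node for one query-graph operator so as to minimize
# storage + computation + transmission cost. All values are hypothetical.

def hops(a, b):
    # Hop distance on a 3-node line network 0 - 1 - 2; transmission cost
    # is hops travelled per unit of data.
    return abs(a - b)

sources = {"s1": 0, "s2": 2}   # raw streams produced at nodes 0 and 2
sink = 1                       # the query result is consumed at node 1
data_size = {"s1": 4.0, "s2": 4.0, "out": 1.0}   # units of data
comp_cost = {0: 2.0, 1: 1.0, 2: 2.0}             # per-node compute cost
storage_cost = {0: 0.5, 1: 0.5, 2: 0.5}          # caching the result

def total_cost(op_node):
    # Ship both raw streams to the operator, then ship the (smaller)
    # result to the sink; add compute and caching costs at the operator.
    transmit = sum(hops(src, op_node) * data_size[s]
                   for s, src in sources.items())
    transmit += hops(op_node, sink) * data_size["out"]
    return transmit + comp_cost[op_node] + storage_cost[op_node]

best = min(comp_cost, key=total_cost)   # brute force over the 3 nodes
print(best, total_cost(best))           # -> 1 9.5 (operator at node 1)
```

Placing the operator at the central node wins here because both 4-unit raw streams shrink to a 1-unit result before any further hops; the paper's contribution is reaching such minima with neighbor-only, asynchronous message exchanges rather than this kind of centralized enumeration.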
https://asu.pure.elsevier.com/en/publications/distributed-operator-placement-and-data-caching-in-large-scale-se
Watch a short video explaining the rationale for this lesson and why I have chosen to apply transformations to parallel lines cut by a transversal.

Clarifying and Sharing Learning Goals
Always begin by clarifying for the students what it is they will be learning from the activity today. Click here to watch a short video on what it means to clarify and share learning intentions and criteria for success with students. The learning goals for today are to continue to focus on graphing linear equations with the purpose of using those lines to see angle relationships. Tell students that they will be graphing two different intersections, which will create eight angles for this activity. The goals are the same as in the angles activity prior to this one: move angles through transformations to map relationships between certain pairs of angles. Let students know that there will be a lot of vocabulary throughout the activity and that it is important to know these angle pair names. An organizer to help with the vocabulary will be used on the last day of the activity.

Working in Cooperative Groups to Graph
Allow students time to graph the three given equations and number the angles as the directions indicate. Students should work within their partnership or small group to correctly graph and label. While students are graphing, move about the room formatively assessing progress and providing feedback that will move your students' learning forward. To better understand how I group students into cooperative teams and how I provide feedback to students, click on the links below to watch a short video on each strategy. After allowing time for groups to correctly graph and label, ask one group to place their papers under the document camera and share how they created the linear graphs. Choose this group while you are providing feedback, and let them know you would like them to present their correct work. I sometimes allow more than one presentation if multiple graphing strategies were used and each is a viable method of graphing. I call this time of students presenting their work a "mini wrap-up" because I do not spend long periods of time closing a lesson at the end of the class period. We use small lesson closers after a small chunk of material has been completed. Click below to watch a short video on how I use the mini wrap-up strategy. Once all students have correct graphs, it is time to begin answering the angle relationship questions. These questions are challenging and ask students to think about and prove angle relationships. This is a key time to provide feedback to students, both yourself and through partners within small groups. Students need to be able to discuss ideas and questions. I often had to remind students to use their tracing paper in order to compare angles and find their relationship first (congruent or supplementary) and then begin to think through the transformations that would move and map angles together. Click below to watch a short video on how students provide feedback to one another within small groups: Students as resources for one another.

Where to End the Lesson
The goal for the first day is to reach at least question 4 and discuss the answers to these questions in a mini wrap-up before the end of class. Of course students will work at their own pace and that is encouraged. You just want to get everyone through at least question 4 by the end of the first class period.
Some of the key ideas to focus on throughout the lesson today include: graphing linear equations (fluency), understanding algebraically why two lines are parallel (same slope, different y-intercepts), and the definition of a transversal. The math standard of focus in this lesson is 8.F.A.3: Interpret the equation y = mx + b as defining a linear function, whose graph is a straight line. Through the use of partners to provide feedback and work to graph linear equations with efficiency and fluency, the following math practice standards will also be used: MP3 and MP7.
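As a quick illustration of that algebraic point (my own example, not from the lesson materials), two lines are parallel exactly when their slopes match and their y-intercepts differ, and any line with a different slope serves as a transversal:

```latex
% Illustrative example (not from the lesson): parallel lines and a transversal.
% Parallel: same slope m, different y-intercepts b.
y = 2x + 1 \quad\text{and}\quad y = 2x - 3
  \qquad (m_1 = m_2 = 2,\ b_1 = 1 \neq b_2 = -3)
% Transversal: any line whose slope differs from 2, e.g.
y = -\tfrac{1}{2}x + 4
  \qquad (\text{slope } -\tfrac{1}{2} \neq 2\text{, so it crosses both lines})
```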
Twenty-one children with Down syndrome (DS) and 20 without disability, ages 3 to 11 years, completed the experiment in which they were asked to grasp and lift cardboard cubes of different sizes (2.2 to 16.2 cm in width). Three conditions were used: (a) increasing the size from the smallest to the largest cube, (b) decreasing the size from the largest to the smallest, and (c) a random order of sizes. Children with DS were found to have smaller hand sizes in comparison to age-matched children without DS. In addition, the shift from one-handed to two-handed grasping appeared at a smaller cube size for children with DS than for children without DS. However, when the dimensionless ratio between object size and hand size was considered, the differences between groups disappeared, indicating that the differences in grasping patterns between children with and without DS can be attributed to differences in body size.
Geert J.P. Savelsbergh, John van der Kamp and Walter E. Davis

Hilde Van Waelvelde, Willy De Weerdt, Paul De Cock, Bouwien C.M. Smits-Engelsman and Wim Peersman
The aim of this study was to compare the quality of ball catching performance of children with DCD to the performance of younger typically developing children. The outcome measures used were a modified ball catching item of the Test of Gross Motor Development and the number of grasping errors in a ball catching test. In the study, children with DCD were matched with younger typically developing children according to gender and the number of caught balls in the ball catching test. Children with DCD made significantly more grasping errors and scored significantly lower on the modified TGMD item. Children with DCD were not only delayed in ball catching but they also seemed to use different movement strategies compared to younger typically developing children.
https://journals.humankinetics.com/search?pageSize=10&q=%22grasping%22&sort=relevance&t=PhysEdCoach
Person-Affecting Views and Saturating Counterpart Relations
Christopher J. G. Meacham
Forthcoming in Philosophical Studies

Abstract
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.

1 Introduction
Interesting ethical questions arise when we consider decisions that bear on the makeup of the overall population, present and future. Traditional moral theories tend to yield highly counterintuitive results when applied to these kinds of cases, and finding alternatives to the traditional theories that avoid these counterintuitive results is surprisingly difficult. The resulting state of affairs is nicely described by Derek Parfit, who ends his exhaustive examination of these issues with the following summary of his investigations: "We need a new theory of beneficence. This must solve the Non-Identity Problem, avoid the Repugnant and Absurd Conclusions, and solve the Mere-Addition Paradox. I failed to find a theory that can meet these four requirements."1 Although these remarks concern his own inquiries, they could reasonably be said to represent the prevailing opinion regarding this literature as a whole.2 In response to these problems, some have suggested that we look toward "person-affecting" views of morality for a solution: views whose evaluations are sensitive to the identities of the subjects in the different possible outcomes.3 In order to provide a satisfying person-affecting treatment of these issues, two things need to be done. First, one needs to address the many criticisms of person-affecting views that have been offered in the literature. These kinds of theories have been argued to be either inconsistent, highly counterintuitive, or unhelpful with respect to the original problems.4 More generally, critics have maintained that person-affecting views are unable to satisfy all of Parfit's requirements. And a satisfying person-affecting response to the issues Parfit raises must either rebut these criticisms, or provide an account that avoids them. Second, one needs to determine how to identify subjects in different possibilities. This is because the prescriptions of person-affecting views depend crucially on how we cross-identify subjects in different possibilities. These two tasks are not independent. Certain approaches to the second task will require us to re-evaluate whether the standard criticisms of person-affecting views still arise. In light of this, it's natural to wonder whether there is a way of tackling both tasks at the same time.

1 Parfit (1984), p. 443.
2 Of course, many have denied that all of these requirements need to be met, and have gone on to endorse theories which satisfy some subset of these requirements. See Ryberg, Tännsjö and Arrhenius (2009) for a comprehensive discussion of the many different responses that have been offered to Parfit's dilemma.
I.e., one might adopt an account of how to identify subjects in different outcomes that allows one to circumvent the criticisms that have been raised against person-affecting views. This is precisely what I propose to do. I will sketch a person-affecting view, the Harm Minimizing View. Then I will sketch a way of pairing subjects in different possibilities using what I call saturating counterpart relations. I will suggest that we should use these kinds of counterpart relations when making person-affecting judgments. We can then combine this person-affecting view with this way of pairing subjects. The resulting combination yields a person-affecting approach that satisfies Parfit's four requirements, and which avoids many of the criticisms that have been raised against person-affecting views. The rest of this paper will proceed as follows. In the next section, I will briefly lay out some preliminary assumptions. In the third section, I will lay out the Harm Minimizing View. In the fourth section, I will turn to examine the Non-Identity Problem. While doing this, I'll describe and motivate the adoption of saturating counterpart relations. In the fifth section I will examine the Repugnant Conclusion. In the sixth section I will examine the Absurd Conclusion. In the seventh section I will examine the Mere-Addition Paradox. While doing so, I'll describe and assess a powerful decision-theoretic objection to the approach I advocate. I'll also show why the various "impossibility theorems" (theorems which show that no theory can satisfy all of some desirable set of features) do not tell against this approach.5 In the eighth section I assess some other potential objections. In the ninth section, I conclude with some brief remarks.

3 For example, see Narveson (1967) and Roberts (1998).
4 For example, see Parfit (1984), Broome (1992), Arrhenius (2003) and Holtug (2004).
5 For example, see Ng (1989), Blackorby and Donaldson (1991), and Arrhenius (2000).

2 Preliminaries
I'll assume that we have established some account of who the moral patients are; i.e., of who matters morally. When I speak of individuals, subjects, etc. in the sections that follow, I will be implicitly restricting myself to such beings. I'll assume that there is some sense in which moral patients can be "well-off" that is morally relevant. And I'll assume that there is some way of providing an overall lifelong assessment of how well-off these moral patients are; I will call this the patient's well-being. I'll assume that the level of a subject's well-being can be given a numerical representation. And I'll assume that there is some level of well-being below which a life is not worth living. In what follows I'll employ a numerical representation for well-being which is additive, and whose zero-point is set so that positive values represent lives worth living and negative values represent lives not worth living. I'll assume we can make sense of what an agent's options are at a time. For simplicity, I'll also assume that every option available to an agent leads to a definite outcome, and that the agent knows this. So I'll ignore any role that chance and uncertainty might play. I'll skirt issues involving infinities by restricting my attention to finitary cases. In particular, I'll assume that (i) agents are faced with only finitely many options at any given time, (ii) there are only finitely many subjects in any given possibility, and (iii) the well-being of these subjects is finite.
Finally, I'll assume that something like Lewis' counterpart theory is correct.6 On counterpart theory, the truth values of de re modal claims are cashed out in terms of counterpart relations between possible individuals ('a is a counterpart of b'). For example, let "Bob" be the name of some possible individual. Then "Bob could have been a plumber" is true iff some counterpart of Bob is a plumber.7 Likewise, "Bob is essentially human" is true iff every counterpart of Bob is human. On Lewis' theory, counterpart relations are similarity relations. A possible individual is a counterpart of another iff the intrinsic and extrinsic qualitative properties of the former resemble those of the latter in the relevant respects. The kinds of properties that are relevant, and the stringency of the resemblance that's required, is something that can vary from context to context. Note that counterpart relations are generally not symmetric: b may be a counterpart of a, even though a is not a counterpart of b. Likewise, counterpart relations are generally not transitive: b may be a counterpart of a and c may be a counterpart of b without c being a counterpart of a.8

6 See Lewis (1986). Although I'll be assuming that Lewis' theory is correct in broad outlines, I will not be assuming that he is right regarding all of the particulars; cf. section 4.
7 Or, more precisely, "Bob could have been a plumber" is true iff there is some world W, and some counterpart of Bob in W, which is a plumber (see Lewis (1986), p. 9-10). Similar remarks apply to the example that follows.
8 To see the former, note that b may be the individual at b's world that most closely resembles a, but there may be other individuals at a's world that more closely resemble b than a does. To see the latter, note that b may be similar enough to a to be its counterpart, and c may be similar enough to b to be its counterpart, but the resemblance gap between c and a may be too wide for c to be a's counterpart.

3 The Harm Minimizing View
In this section, I will describe a person-affecting view, which I'll call the Harm Minimizing View. This view is similar to a number of other person-affecting views that have been described in the literature, such as those of Roberts (1998) and Arrhenius (2003).9 For exegetical purposes, I'll present the view in two stages. First I'll sketch the view for cases in which all of the outcomes contain the same individuals. Then I'll extend the account to cases in which outcomes have different individuals.

3.1 Same Population Cases
To begin, let's restrict our attention to cases where all of the potential outcomes contain the same individuals. At first pass, we might characterize the person-affecting intuition as follows: in order for an option to be better or worse than another, it has to be better or worse for someone.10 So suppose an agent is choosing between two outcomes, W1 and W2. When we compare the W1-option to the W2-option, we should consider, for each subject, how much better or worse-off she is in W1 than in W2. To turn this into a concrete proposal, we need to determine which subjects are better-off in which outcomes, and to turn this into a judgment about what the best options are. Let's look at a way to do this. Consider all of the outcomes that an agent a could bring about at a given time t. In some of these outcomes, a given subject s will have a higher well-being; in others, a lower well-being.
Let's call the highest well-being that s receives in any of these outcomes s's peak well-being (with respect to a at t). If s's well-being in some outcome W1 is below her peak well-being, then there's a sense in which bringing about W1 harms s. We can use this notion of harm to assess an agent's options. Let the harm done by the W1-option (with respect to a at t) be equal to the sum, for each of the subjects in W1, of the amount by which that subject's well-being is below her peak. Then we can evaluate an agent's options as follows:

The Harm Minimizing View (HMV): An option is morally permissible (for a at t) iff no other option does less harm; i.e., iff the option minimizes harm.

Example: Weighing Losses. Consider an agent who has a choice between two outcomes, W1 and W2. In W1 there will be two individuals, a and b, each with a well-being of +10. In W2 there will be two individuals, a and b (where giving two individuals the same name indicates that each is a counterpart of the other). But in W2, a will have a well-being of +15, while b will have a well-being of 0. Visually, we can represent this case as follows:

       W1            W2
    a      b      a      b
   +10    +10    +15     0

According to HMV, what should the agent do? In this case, a's peak well-being is +15, while b's peak well-being is +10. In W1, a's well-being is 5 units below her peak, while b's well-being is 0 units below her peak. So the harm done by the W1-option is: 5 + 0 = 5. In W2, a's well-being is 0 units below her peak, while b's well-being is 10 units below her peak. So the harm done by the W2-option is: 0 + 10 = 10. Since the W1-option does 5 units of harm and the W2-option does 10, HMV prescribes the W1-option.

9 In the case of Roberts (1998), the similarity is less apparent. But one can think of the Harm Minimizing View as a quantitative version of Roberts' view. And the prescriptions of the two views are almost identical (though Roberts' view is silent in some cases in which the Harm Minimizing View is not).
10 "At first pass" because satisfying this description is neither necessary nor sufficient for being a person-affecting view. For discussion regarding different ways of spelling out the person-affecting intuition, see Arrhenius (2003), Roberts (2003b) and Holtug (2004).

3.2 Different Population Cases
Now let's look at cases in which different outcomes contain different individuals. Consider a choice between two outcomes, W1 and W2, where some subject s comes to exist in W1 but not W2. How should s's existence bear on our assessments of these options? There are two natural ways to proceed. First, one might maintain that s's existence should have no bearing on the harm of the W1-option, regardless of what s's well-being happens to be. Since s only exists in W1, s's peak well-being will be whatever s's well-being in W1 is. Thus s will have her peak well-being, and won't add to the harm done by the W1-option. What about the harm done by the W2-option? That's an assessment of how much the subjects in W2 are below their peak, so it will only take into account subjects who are in W2. So s's existence in W1 won't have any effect on the harm done by the W2-option. Second, one might maintain that s's existence can have a bearing on the harm done by the W1-option.
In particular, suppose that s's well-being in W1 is so low that s's life isn't worth living. Then there's a sense in which s can claim to have been harmed if W1 comes about. After all, the agent could have picked the W2-option, and s would not have existed. But the agent picked the W1-option instead, and now s is forced to live a life not worth living. (Again, s's existence will have no bearing on the harm done by the W2-option, since this value only considers subjects who exist in W2.) Some have argued that approaches like the second are incoherent.11 But detailed and compelling responses to these arguments have been offered in the literature.12 And the second approach fits better with person-affecting approaches. So I will adopt the second approach here. We can implement the second approach by modifying the characterization of a subject's peak well-being given earlier. Let s's peak well-being (for a at t) be the highest well-being that s receives in any of the available outcomes, where for these purposes, s is treated as having a well-being of 0 in outcomes where she doesn't exist. This modification will yield the verdicts we want.

Example: The Question of Creation. Consider an agent who has a choice between two outcomes: creating no one, or creating both a happy person and an unhappy person:13

       W1            W2
                  a      b
                 +5     -5

According to HMV, what should the agent do? For the purposes of determining peak well-beings, a and b are treated as having a well-being of 0 in W1. So a's peak well-being is +5, and b's peak well-being is 0. In W1, there are no individuals, and thus there is no one whose well-being is below their peak. Thus no harm is brought about by the W1-option. In W2 a's well-being is 0 units below her peak, while b's well-being is 5 units below her peak. So the harm done by the W2-option is: 0 + 5 = 5. Since the W1-option does 0 units of harm and the W2-option does 5, HMV prescribes the W1-option.

Many people have asymmetric intuitions regarding the moral significance of creating future people.14 On the one hand, it seems like there's no moral pressure to create more people who would have worthwhile lives. On the other hand, it seems like there is moral pressure to not create people who would have lives not worth living. HMV's method of assessing options captures this asymmetry.15 Consider a choice between two outcomes, W1 and W2. Subjects with a positive well-being who only exist at W1 won't make the W1-option any more attractive. They'll have their peak well-being at W1, and so they won't affect the harm done by the W1-option. So the fact that bringing W1 about will create happy people doesn't give us a reason to bring it about. But subjects with a negative well-being who only exist at W1 will make the W1-option less attractive. Their well-being at W1 will be below their peak (0), and thus they will increase the harm done by the W1-option. So the fact that bringing W1 about will create unhappy people does give us a reason to not bring it about.

11 For example, see Broome (1999) and Arrhenius (2003).
12 For example, see Parsons (2002), Roberts (2003a) and Holtug (2004).
13 If one holds the view that all moral agents are moral patients, then this case is, strictly speaking, impossible. I.e., since there is no individual present in all of the outcomes, there couldn't be an agent who was facing these choices. (Recall that these outcomes include all of the agents who exist, at all times.) However, nothing of substance hangs on this, so I'll occasionally engage in the simplifying fiction of ignoring the presence of the agent in question.
14 For example, see Narveson (1967), Wolf (1997), Parsons (2002) and Roberts (2003a).
15 That said, some have argued that this asymmetry is actually counterintuitive. I discuss these arguments in section 8.
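Before turning to the Non-Identity Problem, it may help to see HMV's bookkeeping made concrete. The following is a minimal sketch of my own (not from the paper), treating an outcome as a map from subject names to well-being levels, with shared names marking counterparts and nonexistence counted as well-being 0 when computing peaks:

```python
# Minimal sketch of the Harm Minimizing View (HMV) as characterized above.
# Outcomes map subject names to well-being; a shared name across outcomes
# marks counterparts. Per the second approach, a subject is treated as
# having well-being 0 in outcomes where she doesn't exist.

def harms(outcomes):
    """Harm done by each option: the sum of each existing subject's
    shortfall below her peak well-being across the available outcomes."""
    subjects = set().union(*outcomes.values())
    peak = {s: max(w.get(s, 0) for w in outcomes.values()) for s in subjects}
    return {name: sum(peak[s] - v for s, v in w.items())
            for name, w in outcomes.items()}

def permissible(outcomes):
    """HMV: an option is permissible iff no other option does less harm."""
    h = harms(outcomes)
    return [name for name, v in h.items() if v == min(h.values())]

# Weighing Losses: harms are W1 -> 5, W2 -> 10, so W1 is prescribed.
print(permissible({"W1": {"a": 10, "b": 10}, "W2": {"a": 15, "b": 0}}))
# The Question of Creation: harms are W1 -> 0, W2 -> 5, so W1 is prescribed.
print(permissible({"W1": {}, "W2": {"a": 5, "b": -5}}))
```

Run on the paper's two examples, the sketch reproduces both verdicts worked out above.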
4 The Non-Identity Problem and Saturating Counterpart Relations
Consider Parfit's Case of the 14-Year Old Girl:

A young girl decides to have a child at the age of 14. Because she cannot care for it effectively, the child ends up having a hard life, though a life still worth living. If she had decided not to have a child at the age of 14, but had waited until she was 21, she would have been able to care for the child effectively, and it would have had a much better life.

Even if the girl herself would have been no better off having the child later, it seems clear that what the girl did was wrong. But why? As Parfit notes, our instinctive explanation is a person-affecting one: "The objection to this girl's decision is that it will probably be worse for her child. If she waited, she would probably give him a better start in life." (Parfit (1984), p. 359) With this in mind, we might represent the Case of the 14-Year Old Girl in the following way:

   Has Child Now      Has Child Later
   Mother   Child     Mother   Child
    +10      +5        +10      +10

If we apply a person-affecting view like HMV to this case, we'll get the result that the latter option is obligatory. In the latter case, both the mother and the child have their peak well-being, while in the former case the child has a well-being 5 units below its peak. Since having the child now will do 5 units of harm, and having the child later will do none, the girl should have the child later. But, Parfit argues, this way of thinking about the case is mistaken. The child that would be born to the girl at the age of 14 would not be the same as the child that would be born to her at the age of 21, so we cannot claim that she has harmed the very same child by bringing it into existence now. So our instinctive person-affecting explanation can't be right. The right way to think about the case, Parfit maintains, is this:

   Has Child Now      Has Child Later
   Mother   Child1    Mother   Child2
    +10      +5        +10      +10

where neither child is a counterpart of the other. And if we apply a person-affecting view like HMV to this case, we'll get the result that both options are permissible. In both cases, the mother and the child have their peak well-being. So neither option does any harm, and the girl is free to do as she likes. So HMV's prescriptions will depend on what counterpart relation we employ. Which counterpart relation should we employ when making moral judgments?

4.1 Counterpart Relations and Moral Judgments
On counterpart theory, the counterpart relation is picked out by context. But different proponents of counterpart theory might adopt different accounts as to which counterpart relations are picked out by which contexts. Consider the account suggested by the writings of David Lewis, on which the counterpart relation delivered by a context is roughly the one that matches our intuitive judgments regarding how to identify subjects in different possibilities in that context; call this the Lewisian counterpart relation.16 On Lewis' account, counterpart relations are notoriously context sensitive. If we employ Lewisian counterpart relations to ground moral claims, we risk making our moral claims context sensitive in the same way.
For example, consider Parfit's Case of the 14-Year Old Girl. In some contexts (when someone is arguing that if she has her child later then it will be better off, say) the Lewisian counterpart relation may be one that identifies the child she would have now with the child she would have when she's 21.17 In other contexts (when someone is appealing to the essentiality of origins in order to argue that the child she would have now and the child she would have when she's 21 are not the same, say) the Lewisian counterpart relation may be one that doesn't identify the children in the two cases.18 These results aren't in conflict. It's just that different contexts will pick out different Lewisian counterpart relations, even when we're considering what is (intuitively) the same case. Here are two ways one might proceed in light of this. First, one might conclude that, given a person-affecting view, moral claims themselves must be context dependent. And thus, given a person-affecting view, it will turn out that moral claims are not objective in some of the ways we originally thought.19 This option holds on to the thought that we should employ the Lewisian counterpart relation when making moral judgments, but gives up on the thought that moral claims are objective in all of the ways we thought they were. Second, one might conclude that while Lewisian counterpart relations are highly context sensitive, the way in which we pair individuals when assessing moral claims is not. This option holds on to the thought that moral claims are objective, but gives up on the thought that we should employ the Lewisian counterpart relation when assessing moral claims. The first option has some uncomfortable consequences. I take it, for example, that we would like there to be a definite (context-independent) answer to the question of whether it's permissible in the Case of the 14-Year Old Girl for the girl to have the child now. But if we adopt the first option, this will not be the case. Thus I suggest we adopt the second option.

16 For example, see Lewis (1986).
17 On these kinds of questions, Lewis writes: "You could do worse than plunge for the first answer to come into your head, and defend that strenuously. If you did, your answer would be right. For your answer itself would create a context, and the context would select a way of representing, and the way of representing would be such as to make your answer true. ... That is how it is in general with dependence on complex features of context. There is a rule of accommodation: what you say makes itself true, if at all possible, by creating a context that selects the relevant features so as to make it true." Lewis (1986), p. 251.
18 "In parallel fashion, I suggest that those philosophers who preach that origins are essential are absolutely right, in the context of their own preaching. They make themselves right: their preaching constitutes a context in which de re modality is governed by a way of representing (as I think, by a counterpart relation) that requires match of origins." Lewis (1986), p. 252.
19 I use the term "objective" here broadly (if loosely) to cover the rejection of any number of ways in which moral claims might be defective, relative, insubstantial, etc.

There are two different ways to flesh out the second option. One way is to hold one's person-affecting view fixed and to change one's account of which counterpart relations are picked out in moral contexts.
On this approach, one will replace the Lewisian counterpart relation with a more stable counterpart relation when evaluating moral claims. The other way to flesh out the second option is to stick with the Lewisian counterpart relation in moral contexts and modify one's person-affecting view. On this approach, the person-affecting view will not employ counterpart relations. Instead, it will employ some other relations; call them counterpart∗ relations. These counterpart∗ relations will presumably line up with counterpart relations in most ordinary contexts, but the two will sometimes come apart. So while it is counterpart relations that determine the truth values of de re modal claims, it is counterpart∗ relations that we employ when applying our person-affecting view. (One might complain that the resulting view is not a person-affecting view, just a person-affecting-ish view. This is not an unreasonable complaint. But if it can solve Parfit's problems then it's an interesting view, regardless of what we decide to call it.) One can understand the proposals made in this paper either way. Those who think we should employ Lewisian counterpart relations in all contexts have a reason to prefer the second approach. Those who want a view that is full-bloodedly person-affecting have a reason to prefer the first approach. But since nothing about these proposals requires me to make a choice, I'll leave it open. To avoid cumbersome repetition, I will continue to talk in terms of counterparts instead of 'counterparts/counterparts∗' in what follows. Let's call a proposal regarding which counterpart relations we should employ when assessing moral claims a moral-counterpart proposal. I suggest that we evaluate moral-counterpart proposals according to three desiderata. (i) Stability: we should favor moral-counterpart proposals that employ counterpart relations that are context-insensitive. (ii) Plausible Identifications: we should favor moral-counterpart proposals that match our intuitive judgments regarding how to identify subjects. (iii) Plausible Prescriptions: we should favor moral-counterpart proposals that yield plausible prescriptions when plugged into the correct moral theory.20 Of course, assessing the third desideratum is tricky. We're trying to determine what the right moral-counterpart proposal is by looking at whether it yields plausible prescriptions when plugged into the right moral theory. But we're also trying to determine what the right moral theory is by looking at whether it yields plausible prescriptions when paired with the right moral-counterpart proposal. This puts us in a delicate situation. We're trying to figure out what the right moral theory is and what the right moral-counterpart proposal is at the same time. But our evaluation of each depends on what decisions we make with respect to the other. As a result, it's hard to evaluate the plausibility of person-affecting views and moral-counterpart proposals in isolation. In order to get a grip on the plausibility of these accounts, we need to assess them in pairs.

20 If we understand these as desiderata for counterpart∗-fixing proposals, it should be clear why we want the first and third desiderata. Why do we want the second desideratum? Because we are, in part, trying to capture person-affecting intuitions. The more a counterpart∗-fixing proposal diverges from our intuitive judgments regarding how to identify individuals, the less faithful it is to our person-affecting intuitions.

This, I suggest, is the right way to evaluate the
4.2 Saturating Counterpart Relations

To get specific prescriptions, we need to pair HMV with a moral-counterpart proposal. In what follows, I will tentatively propose a moral-counterpart proposal for us to use.

Let us say that two individuals are indiscernible-up-to-t iff they are alike with respect to all of the intrinsic and extrinsic properties that supervene on the qualitative state of the world up to t.[21] Let us call the worlds that could result from the options available to an agent the agent's available worlds. Now consider an agent in a decision situation at time t. And consider a counterpart relation which, for each ordered pair of available worlds (Wi, Wj) (i ≠ j), maps individuals in Wi to counterparts in Wj in a way that satisfies the following four conditions:[22]

1. One-to-One Function: No individual in Wi is mapped to more than one individual in Wj, and no two individuals in Wi are mapped to the same individual in Wj.[23]
2. Before-t Match: Each individual a who exists before t in Wi is mapped to an individual b who exists before t in Wj and who is indiscernible-up-to-t with a.
3. Saturation: As many individuals in Wi are mapped to individuals in Wj as possible.
4. Minimization: There is no mapping which satisfies the first three conditions and which results in the Wi-option having a lower harm.

[21] In relativistic worlds we can instead consider what it is for two individuals to be indiscernible-up-to-r, where r is the spatiotemporal region the agent occupies at the moment of decision. We can say that two individuals are indiscernible-up-to-r iff they are alike with respect to all of the properties and relations that supervene on the qualitative state of the world in the backwards light cone of r. (I'm assuming here that there aren't closed timelike curves; some other strategy needs to be employed if there are.)
[22] My use of the term "maps" should be understood to imply only that there is a multivalued function (or "multimap") from individuals in Wi to individuals in Wj, not that there is a function (or "map") from the former to the latter. The first condition below will, in fact, require there to be such a function. But we want all of the substantive constraints on the counterpart relation to appear in the list of conditions, not to be smuggled in by our set-up.
[23] If we take counterpart relations to be similarity relations, then this condition is a bit too strong. Problems arise in cases in which there are multiple indiscernible individuals at a world: individuals who share all of their intrinsic and extrinsic qualitative properties. Because these individuals are indiscernible, a qualitative counterpart relation can't assign them different counterparts, or take them to be counterparts of different individuals. So in these cases both parts of condition 1 can fail. There are a couple of different ways to handle such cases. One approach is to shift the qualitative requirement from the counterpart relations themselves to what counterpart relations a context can pick out.
Then we could allow counterpart relations to be more fine-grained (and so allow them to be one-to-one functions even in cases with multiple indiscernible individuals), but require contexts to deliver multiple counterpart relations: all of the counterpart relations that are 'precisifications' of the coarse counterpart relation the original theory employed. Then, when assessing person-affecting views like HMV, one could employ any or all of these fine-grained counterpart relations, since they'll all deliver the same results.

Since the most distinctive feature of these counterpart relations is provided by the third condition, I'll call a counterpart relation which satisfies these four conditions a saturating counterpart relation. These four conditions won't pick out a unique counterpart relation. For example, at worlds in which there are multiple individuals that are indiscernible-up-to-t, there will often be wiggle room with respect to which one serves as the counterpart of some qualitatively similar other-worldly individual.[24] For a more mundane example, if there are multiple individuals at a world who come into existence after t and have the same level of well-being, then counterpart relations which permute them will satisfy these conditions equally well. But this wiggle room needn't bother us. All of the counterpart relations which satisfy these four conditions will yield the same prescriptions when coupled with HMV. So it doesn't matter which one we use. (This is the main reason for including the fourth condition: it ensures that any wiggle room that remains won't bear on HMV's prescriptions.)

[24] All of an agent's outcomes will be identical up to t. So if there are multiple individuals that are indiscernible-up-to-t at one available world, there will be the same number of individuals who are indiscernible-up-to-t at every other available world. Thus there may be looseness regarding which of these indiscernible-up-to-t individuals are mapped to which other indiscernible-up-to-t individuals. (Even this looseness will sometimes be removed by the fourth condition, if the individuals end up having different levels of well-being due to their experiences after t, and this ends up impacting the harm assigned to the world.)

I propose to pair HMV with the following moral-counterpart proposal: when making moral judgments, we should employ counterpart relations which satisfy conditions 1-4; i.e., saturating counterpart relations. In the previous section I offered three desiderata for assessing a moral-counterpart proposal for a person-affecting view: stability, plausible identifications, and plausible prescriptions. I think the moral-counterpart proposal given by conditions 1-4 does a good job of satisfying these desiderata when paired with HMV. It satisfies stability, it does relatively well with respect to plausible identifications, and (as I'll argue) it does well with respect to plausible prescriptions. Now, one could do better with respect to plausible identifications by adding some additional 'matching' conditions which are assessed before the fourth condition. And some of these modified proposals will do just as well, if not better, with respect to plausible prescriptions. So I won't claim that conditions 1-4 yield the optimal moral-counterpart proposal. But I will claim that in most cases conditions 1-4 will yield the same prescriptions as the optimal moral-counterpart proposal. So I think conditions 1-4 do well enough to allow us to fairly assess the merits and demerits of this approach to person-affecting views. Let's call the combination of this moral-counterpart proposal and HMV the Saturating Harm Minimizing View (SHMV).

A warning: we can't just map an individual at an available world W to all of the other available worlds in a way that satisfies these conditions, and then take all of these individuals to be counterparts of one another. The reason is that the counterpart relation is generally not symmetric or transitive, and a fortiori is generally not an equivalence relation.
So when we're evaluating whether a counterpart relation satisfies these conditions, we need to assess each ordered pair of available worlds. Similar remarks apply to the manner in which we assess the harm of an option. Here is how to assess the harm of some W-option with respect to a given counterpart relation. First, for each individual in W, determine who they're mapped to in every available world. Second, determine the peak well-being of each individual in W by finding which of these counterparts has the highest well-being (where they're treated as having a counterpart with a well-being of 0 at worlds in which they don't have counterparts). Third, consider how much the well-being of each individual in W falls below their peak, and sum these values. The resulting quantity is the harm brought about by the W-option. And to assess the harm of our other options, we must go through the same procedure.

Example: The Somewhat-Happy Addition. Consider an agent who has a choice between the following two outcomes:

  W1:  a (+10)
  W2:  b (+10), c (+5)

Let's begin by determining what the saturating counterpart relations are, and then determine how much harm is done by each option. First let's consider who the individuals in W1 will be mapped to. Suppose that none of these subjects exist at the time of the choice, so the before-t match condition doesn't come into play. The saturation condition requires a to be mapped to either b or c. In either case a's peak well-being will be +10, and so the W1-option will do no harm; thus either mapping will satisfy the minimization condition. So a can be mapped to either b or c. Next, let's consider who the individuals in W2 will be mapped to. The saturation condition requires either b or c to be mapped to a. If b is mapped to a, then b's peak well-being will be +10, c's peak well-being will be +5, and the W2-option will do no harm. If c is mapped to a, then both b and c's peak well-being will be +10, and the W2-option will do 5 units of harm. So the minimization condition requires b to be mapped to a. Given these mappings, neither option does any harm. So SHMV takes both options to be permissible.
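Since the harm-assessment procedure just described is effectively an algorithm, it may help to set it out programmatically. The following Python sketch is my own illustration, not anything from the paper: the representation of outcomes as dictionaries and of counterpart relations as explicit mappings is a hypothetical encoding, and I treat an individual's own well-being as a lower bound on their peak (as in the example above, where a's peak is +10 under either mapping).

    # A minimal sketch (assumed encoding, not the paper's formalism) of the
    # harm-assessment procedure for a W-option under a given counterpart relation.

    def harm(W, worlds, maps):
        """Harm done by the W-option.

        worlds: dict mapping world names to {individual: well-being}.
        maps:   dict mapping (Wi, Wj) pairs to {individual in Wi: counterpart in Wj}.
        """
        total = 0
        for a, wb in worlds[W].items():
            peak = wb  # an individual's own well-being bounds their peak from below
            for V in worlds:
                if V == W:
                    continue
                c = maps.get((W, V), {}).get(a)
                # no counterpart at V: treated as a counterpart with well-being 0
                peak = max(peak, worlds[V][c] if c is not None else 0)
            total += peak - wb  # how far the individual falls below their peak in W
        return total

    # The Somewhat-Happy Addition, under the minimizing mapping (a paired with b):
    worlds = {"W1": {"a": 10}, "W2": {"b": 10, "c": 5}}
    maps = {("W1", "W2"): {"a": "b"}, ("W2", "W1"): {"b": "a"}}
    print(harm("W1", worlds, maps), harm("W2", worlds, maps))  # 0 0: both permissible

Mapping c to a instead would leave the W1-option's harm at 0 but raise the W2-option's harm to 5, which is why the minimization condition selects the b-to-a mapping.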
4.3 The Non-Identity Problem

Let us return to the Case of the 14-Year-Old Girl. Parfit suggests that we think of the case like this:

  Has Child Now:    Mother (+10), Child1 (+5)
  Has Child Later:  Mother (+10), Child2 (+10)

where neither child is a counterpart of the other. As we saw in section 4, if Parfit is right about how we should identify subjects when making person-affecting judgments, then HMV yields the counterintuitive result that there's nothing wrong with the girl having the child now. But if Parfit is right, we're left with a puzzling question. Why do we have the instinctive reaction to this case that Parfit describes? Why does it seem to us that "the objection to this girl's decision is that it will probably be worse for her child... if she waited, she would probably give him a better start in life"?[25] This seems to be a paradigmatic case of a person-affecting judgment.[26] But it's hard to reconcile this judgment with the claim that we should use the counterpart relation Parfit suggests when making person-affecting judgments.

[25] Parfit (1984), p.359.
[26] One might suggest that we understand the assertion that "this girl's decision... will probably be worse for her child" as a de dicto, not a de re claim (see Hare (2007)). If so, then this is not a person-affecting judgment, and talk of counterpart relations is beside the point. I think the second half of this assertion - "if she waited, she would probably give him a better start in life" - suggests a de re reading. But in any case, not much hangs on this. SHMV delivers the correct prescription regardless of what story we end up deciding on.

Let's consider a different approach. Suppose, as I've suggested, that we're inclined to pair up as many subjects as possible when making moral judgments. I.e., suppose that we make moral judgments using a saturating counterpart relation. The mothers will be mapped to one another because they are indiscernible-up-to-t, and the saturation condition will then require the two children to be mapped to one another. Thus we'll represent the Case of the 14-Year-Old Girl in the following way:

  Has Child Now:    Mother (+10), Child (+5)
  Has Child Later:  Mother (+10), Child (+10)

And as we saw in section 4, given this pairing of subjects, HMV will yield the desired result that having the child now is morally impermissible.[27] This justifies our initial reaction: that the first option is worse because it is worse for the child.[28] It's true that the two children are not counterparts according to the counterpart relation Parfit suggests. But that is not the counterpart relation that we should use when making moral judgments. The right counterpart relation to use is a saturating one. And when we use a saturating counterpart relation, HMV yields the moral judgment that we're initially inclined to give.

[27] A similar response to the Non-Identity Problem is suggested by Wrigley (2006), who employs counterpart theory to assess the moral status of genetic selection.
[28] If we employ counterpart∗ relations we may have to hedge this claim a bit, since there are contexts in which the counterpart and counterpart∗ relations can come apart.

5 The Repugnant Conclusion

Consider Parfit's Repugnant Conclusion: "For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living."[29]

[29] Parfit (1984), p.388.

To simplify a bit, suppose we have a choice between two options, which lead to the following outcomes:

  W1:  a1–a10 (+100)
  W2:  b1–bn (+1)

Further suppose, as Parfit suggests, that there are entirely different populations in W1 and W2; we are like deities choosing to create one of two very different universes. That is, suppose that none of the individuals in either world is a counterpart of an individual in the other (putting aside, for the moment, the moral-counterpart proposal of section 4.2). If there is some n large enough to make the W2-option obligatory, we're led to the Repugnant Conclusion.[30]

[30] Strictly speaking, Parfit is talking about assessments of which worlds are better than one another, not assessments of what one ought to do. But I take the interesting question to be the one concerning obligation; questions regarding which world is better are only interesting insofar as they relate to what we ought to do. So this is how I'll understand the problems Parfit raises. (See also the discussion in section 7.)

Does HMV yield this result? No. None of these individuals have any counterparts in the other world. So all of these individuals are at their peak well-being.
Thus neither option does any harm. And HMV will take both options to be permissible, regardless of how large n is. So HMV avoids the Repugnant Conclusion.

That said, this way of avoiding the Repugnant Conclusion isn't very satisfying. Let's distinguish between the Strong Repugnant Conclusion (that the W2-option is obligatory) and the Weak Repugnant Conclusion (that the W2-option is permissible). Parfit identifies the Repugnant Conclusion with the strong version. And HMV avoids this conclusion by taking the W1 and W2-options to be on a par. But most people feel that not only is the W2-option not obligatory, the W2-option is not even permissible. And since HMV takes both the W1 and W2-options to be permissible, HMV does not capture this intuition.

Let me suggest an explanation for why the W2-option strikes us as strictly worse than the W1-option.[31] Our moral judgments tend to be comparative in nature. We try to assess the importance of the well-being of different subjects in comparative terms as much as possible. And although we've been told in the above case that none of the subjects in the two outcomes correspond to the same individual, we're still inclined to pair up as many of them as possible for the purposes of comparison.

[31] Of course, nothing much hangs on this explanation. SHMV yields the right result regardless of whether this explanation of our intuitions is correct.

If this explanation is correct, then there is a natural way to capture the intuition that the W2-option is impermissible. We can employ a counterpart relation which pairs as many subjects in different outcomes as it can; i.e., a saturating counterpart relation. Then, in the above case, we can map the ten subjects in W1 to ten of the subjects in W2 and vice versa, and think about the case like this:

  W1:  a1–a10 (+100)
  W2:  a1–a10 (+1), b11–bn (+1)

Given this saturating counterpart relation, HMV yields the desired result that only the W1-option is permissible. In W1, a1–a10 have their peak well-being. So the W1-option does no harm. In W2, b11–bn have their peak well-being, but a1–a10 have a well-being 99 units below their peak. So the W2-option brings about 10 × 99 = 990 units of harm. Since the W1-option brings about 0 units of harm while the W2-option brings about 990, the W1-option is obligatory.

So SHMV avoids both the strong and weak versions of the Repugnant Conclusion.
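For concreteness, one can check this computation with the hypothetical harm() sketch from section 4.2's illustration (my own verification, fixing n = 20 purely for the sake of the example):

    # The Repugnant Conclusion case under the saturating pairing, with n = 20.
    worlds = {"W1": {f"a{i}": 100 for i in range(1, 11)},
              "W2": {f"b{i}": 1 for i in range(1, 21)}}
    maps = {("W1", "W2"): {f"a{i}": f"b{i}" for i in range(1, 11)},
            ("W2", "W1"): {f"b{i}": f"a{i}" for i in range(1, 11)}}
    print(harm("W1", worlds, maps))  # 0: a1-a10 are at their peak in W1
    print(harm("W2", worlds, maps))  # 990: ten individuals each fall 99 below their peak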
6 The Absurd Conclusion

Consider Parfit's Absurd Conclusion: there can be a moral difference between worlds whose populations have the same distributions of well-being, but where the subjects live concurrently instead of consecutively. So suppose we have a choice between two options. One option leads to a "concurrent world", a world in which there are n > 1 individuals, each with a well-being of m, who come into existence at the same time and die off at the same time. The other option leads to a "consecutive world", a world in which there are also n individuals with a well-being of m, but where each comes into existence alone and dies off before the next individual is created. If there is some n and m which make only one of these options permissible, we're led to the Absurd Conclusion.

Will SHMV lead to this conclusion? No. To see why, let's work out what the saturating counterpart relations between these two outcomes will be. First, note that both of these populations will consist of future people, so the before-t match condition will not apply. (When an agent faces a choice at t, all of her potential outcomes will be identical up to t. So if a concurrently existing population has already existed before t, there will be concurrently existing individuals in every outcome, including the consecutive world. Likewise, if any lonely individuals have already existed before t, there will be lonely individuals in every outcome, including the concurrent world.) The saturation condition requires each of the subjects in each world to be mapped to a subject in the other. And the minimization condition requires individuals with the same well-being to be mapped to each other, since these are the mappings that will minimize the harm of each option. So the saturating counterpart relation will map all the individuals in each world to counterparts in the other who have the same well-being. Every individual in both worlds will have their peak well-being, and neither option will do any harm. Thus SHMV will take both options to be permissible.

7 The Mere-Addition Paradox and the Independence of Irrelevant Alternatives

Suppose that an agent faced with a choice between outcomes W1 and W2 would prefer the W1-option to the W2-option. Then it seems she should continue to prefer the W1-option to the W2-option when faced with a choice between W1, W2 and some third outcome, W3. After all, it's hard to see why the inclusion of this third outcome should bear on the relative merits of W1 versus W2. More generally, it seems that an agent's preferences regarding a W1-option versus a W2-option should be independent of what other options are available. This requirement is a version of the "Independence of Irrelevant Alternatives" (IIA), one of the canonical decision-theoretic constraints on the preferences of rational agents.

(If we accept IIA, and assume that rational agents can have preferences which line up with the "all-things-considered-better-than" relation, then IIA will also constrain this "better than" relation. I take it that rational agents can have preferences which line up with the "all-things-considered-better-than" relation. Thus, although I speak in terms of preferences in what follows, what I say applies mutatis mutandis to the "all-things-considered-better-than" relation.)

Strictly speaking, IIA doesn't say anything about normative theories like SHMV. Normative theories like SHMV are accounts about what options are morally permissible, not about what preferences one should have. But there's a natural way to link normative theories to preference constraints like IIA. Let's say that a normative theory meshes with a preference constraint C iff an agent who always prefers the options prescribed by the theory can satisfy C. Then we can bring IIA to bear on a normative theory by asking whether the theory meshes with IIA.

We can formulate the requirement that a theory mesh with IIA in deontic terms. Call a decision situation in which both a W1-option and a W2-option are available a W1W2-situation.
Then we can formulate the requirement as follows:

Deontic IIA (IIAd): (i) If there exists a W1W2-situation in which both the W1 and W2-options are permissible, then in all W1W2-situations the W1-option is permissible iff the W2-option is permissible. (ii) If there exists a W1W2-situation in which the W1-option is permissible and the W2-option is impermissible, then in all W1W2-situations the W2-option is impermissible.

A normative theory meshes with IIA iff it satisfies IIAd.[32] (The proof is provided in the appendix.)

[32] Interesting questions arise regarding how to understand conditional deontic claims if we reject IIAd. (Thanks to Ted Sider here.) Although these are interesting issues, I won't attempt to address them here.

IIAd seems like a plausible constraint. However, SHMV appears to violate IIAd. To see this, consider two decisions. First consider the choice between the following two outcomes:

  W1:  (no one exists)
  W2:  a (+5)

The W1-option doesn't do any harm to anyone, since there's no one in W1. The W2-option doesn't do any harm either, since a has her peak well-being in W2. So both options are permissible. Now suppose we add a third outcome, W3:

  W1:  (no one exists)
  W2:  a (+5)
  W3:  a (+10)

Again, the W1-option won't do any harm. And the W3-option won't do any harm either, since a has her peak well-being in W3. But the W2-option will now do 5 units of harm, since a's well-being in W2 is 5 units below her peak. Thus only the W1 and W3-options are permissible.

But this appears to violate IIAd. Both of these cases are W1W2-situations. And since the W1 and W2-options are both permissible in the first case, IIAd requires a W2-option to be permissible whenever a W1-option is. But in the second case, the W1-option is permissible and the W2-option is not.

Should we take this to be a reason to reject SHMV? Here are two reasons to think not. First, as Roberts (2003b) points out, it's not clear that person-affecting views like SHMV actually do fail to satisfy principles like IIAd. If we think of W1 and W2 in an appropriately detailed way, then the outcomes in the two cases won't be the same. In the first case, the outcome we called "W1" will include facts about an agent who faced a choice between two outcomes, while in the second case, the outcome we called "W1" will include facts about an agent who faced a choice between three outcomes. And if these are different outcomes, then these two cases will involve different decision situations: in the one case we have a W1W2-situation, in the other a W1*W2*-situation. Since IIAd only places constraints on prescriptions for situations of the same kind, SHMV's prescriptions in these two cases won't have any bearing on each other. So given this detailed picture of outcomes, SHMV will satisfy IIAd.

Of course, if we think of outcomes as being detailed in this way, then it will be impossible to have preferences that violate IIA, since the same outcome will never appear in different decision situations. This makes principles like IIA and IIAd vacuous. One might take this to be a reason to think of the outcomes we're considering in a more coarse-grained way. For the sake of argument, I will grant the kind of coarse conception of outcomes required to make principles like IIAd non-trivial in what follows. Likewise, I will grant that person-affecting views like SHMV will not satisfy IIAd.

Let's turn to the second reason for not rejecting SHMV. Although the violation of IIAd first seems like a demerit of the account, there are reasons to think that it is in fact a strength.
As we will see, it is this violation of IIAd that allows SHMV to satisfy our intuitive judgments in the cases comprising Parfit's Mere-Addition Paradox. Indeed, given the assumption that there must always be a permissible option available, we'll see that this violation is inescapable: any solution which captures all of these intuitions must violate IIAd. Let's examine each of these points more carefully.

7.1 The Mere-Addition Paradox

Consider the three cases that lead to the Mere-Addition Paradox. First, consider a choice between the following two outcomes:

  W1:  a1–a10 (+10)
  W2:  b1–b10 (+10), b11–b20 (+5)

It seems like W2 must be at least as good as W1. After all, the same number of equally well-off subjects exist, and then there are some additional well-off subjects hanging around as well. So intuitively, either both the W1 and W2-options are permissible, or only the W2-option is permissible.

Second, consider a choice between the following two outcomes:

  W2:  b1–b10 (+10), b11–b20 (+5)
  W3:  c1–c10 (+9), c11–c20 (+9)

It seems like W3 must be better than W2. There are the same number of people in both, and the people are significantly happier on average in W3 than they are in W2. So intuitively, only the W3-option is permissible.

Third, consider a choice between the following two outcomes:

  W1:  a1–a10 (+10)
  W3:  c1–c10 (+9), c11–c20 (+9)

It seems like W1 is at least as good as W3. Indeed, if we increase the disparities in the number of agents and their levels of well-being in the different outcomes in these cases, this case turns into the Repugnant Conclusion case discussed in section 5, where it's clear that W1 is better than the alternative. So intuitively, either both the W1 and W3-options are permissible, or only the W1-option is permissible.

As Parfit (1984) noted, these three judgments appear to be in tension. One way to characterize this tension is in terms of preferences. Let "≥" stand for the "preferred at least as much as" relation, and ">" stand for the "preferred more than" relation. Then these judgments suggest preferences according to which a W2-option ≥ a W1-option, a W3-option > a W2-option, and a W1-option ≥ a W3-option. But this ranking is incoherent, since chaining these preferences together requires the W1-option to be preferred more than itself.

However, it's easy to be distracted by tangential matters when we characterize the issue in terms of preferences.[33] We can avoid these distractions by characterizing the tension as a straightforward contradiction in deontic terms. Namely, given IIAd and the assumption that some option must be permissible, these three prescriptions lead to a contradiction.

[33] Boonin-Vail (1996) and Arrhenius (2004) are among those who suggest that these issues are better evaluated by characterizing the paradox in deontic terms instead of in terms of a 'better-than' or preference ranking.

The full proof is given in the appendix. But let's see how to get the contradiction given the most natural judgments in these cases: that both options are permissible in the first case, that only the W3-option is permissible in the second case, and that only the W1-option is permissible in the third case. Consider a choice between all three of the above outcomes:

  W1:  a1–a10 (+10)
  W2:  b1–b10 (+10), b11–b20 (+5)
  W3:  c1–c10 (+9), c11–c20 (+9)

Given the natural judgment in the choice between W1 and W2 (that both options are permissible), IIAd entails that the W1-option is permissible iff the W2-option is permissible. Given the natural judgment in the choice between W2 and W3 (that only the W3-option is permissible), IIAd entails that the W2-option is impermissible.
Since we've seen that the W1-option is permissible iff the W2-option is permissible, it follows that the W1-option is impermissible as well. Finally, given the natural judgment in the choice between W1 and W3 (that only the W1-option is permissible), IIAd entails that the W3-option is impermissible. So all three options are impermissible. But some option must be permissible. Contradiction.[34]

[34] A number of results demonstrating the incompatibility of several normative theses that yield these three judgments have been given in the literature; see Ng (1989), Blackorby and Donaldson (1991), and Arrhenius (2000). The result stated here, and proved in the appendix, is both weaker and stronger than these results. It is stronger in that it makes no assumptions about the normative theses that justify our intuitive judgments in these three cases, and thus applies regardless of how one tries to justify these verdicts. It is weaker in that it doesn't directly yield conclusions regarding which kinds of normative theses are mutually inconsistent. (Though one can use this result to generate such conclusions by finding sets of principles that yield the three verdicts in question.)

7.2 SHMV and the Mere-Addition Paradox

How does SHMV deal with this paradox? To find out, let's look at what SHMV says about each of the cases that comprise the Mere-Addition Paradox. Consider the first case:

  W1:  a1–a10 (+10)
  W2:  b1–b10 (+10), b11–b20 (+5)

This case is identical to the Somewhat-Happy Addition case discussed in section 4.2, except for the fact that there are ten times as many subjects. Multiplying the number of subjects in a uniform way like this won't change SHMV's prescriptions, however. So SHMV will yield the same verdict as before: both options are permissible.

Now consider the second case:

  W2:  b1–b10 (+10), b11–b20 (+5)
  W3:  c1–c10 (+9), c11–c20 (+9)

Since the number of individuals in both outcomes is the same, the saturation condition requires us to map each individual in one outcome to an individual in the other. And since all of the individuals in W3 have the same well-being, it won't matter what mapping we choose. So suppose we pair the subjects in each outcome in numerical order (i.e., b1 with c1, b2 with c2, etc.). Then the first ten subjects will have a peak well-being of +10, and the second ten subjects will have a peak well-being of +9. It follows that the W2-option will do 40 units of harm, while the W3-option will do 10 units of harm. Thus the W3-option is obligatory.

Finally, consider the third case:

  W1:  a1–a10 (+10)
  W3:  c1–c10 (+9), c11–c20 (+9)

The saturation condition requires us to map ten individuals in W1 to individuals in W3, and vice versa. And since all of the individuals in W3 have the same well-being, it won't matter what mapping we choose. So suppose we pair the first ten subjects in each outcome. Then the peak well-being for the first ten subjects will be +10, and the peak well-being for the other ten subjects will be +9. It follows that the W1-option will do no harm, while the W3-option will do 10 units of harm. Thus the W1-option is obligatory.

So SHMV yields the same verdicts as our intuitive judgments do in the first three cases. But how, then, does it avoid the contradiction that these judgments lead to? To see, let's consider how SHMV treats the case in which all three outcomes are available:

  W1:  a1–a10 (+10)
  W2:  b1–b10 (+10), b11–b20 (+5)
  W3:  c1–c10 (+9), c11–c20 (+9)

One can show that pairing the first ten subjects in each outcome, and the next ten subjects in W2 and W3, is a saturating counterpart relation. Given this pairing, the peak well-being for the first ten subjects will be +10, and the peak well-being for the other ten subjects will be +9. It follows that the W1-option does no harm, the W2-option does 40 units of harm, and the W3-option does 10 units of harm. Thus the W1-option is obligatory. So SHMV avoids the contradiction.
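As a check on these numbers, here is the combined three-outcome case run through the hypothetical harm() sketch from section 4.2's illustration (again my own verification, not the paper's):

    # Mere-Addition Paradox, all three outcomes available, under the saturating
    # pairing: first ten subjects paired across all three worlds, and b11-b20
    # paired with c11-c20.
    worlds = {"W1": {f"a{i}": 10 for i in range(1, 11)},
              "W2": {f"b{i}": (10 if i <= 10 else 5) for i in range(1, 21)},
              "W3": {f"c{i}": 9 for i in range(1, 21)}}
    maps = {("W1", "W2"): {f"a{i}": f"b{i}" for i in range(1, 11)},
            ("W2", "W1"): {f"b{i}": f"a{i}" for i in range(1, 11)},
            ("W1", "W3"): {f"a{i}": f"c{i}" for i in range(1, 11)},
            ("W3", "W1"): {f"c{i}": f"a{i}" for i in range(1, 11)},
            ("W2", "W3"): {f"b{i}": f"c{i}" for i in range(1, 21)},
            ("W3", "W2"): {f"c{i}": f"b{i}" for i in range(1, 21)}}
    for W in ("W1", "W2", "W3"):
        print(W, harm(W, worlds, maps))  # W1 0, W2 40, W3 10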
SHMV avoids the contradiction precisely by violating IIAd: given SHMV's prescriptions in the first three cases, IIAd requires SHMV to maintain that the W1-option is impermissible in the combined case. But SHMV maintains that the W1-option is permissible.

Although this violation of IIAd initially looked like a weakness of SHMV, we can now see that it is a strength. Given that some option is always permissible, the only way to capture our intuitive judgments in the three cases is to reject IIAd. So in order to offer an intuitively satisfying response to the Mere-Addition Paradox, IIAd must be rejected.[35]

[35] Another principle along these lines that person-affecting views conflict with is the Pareto Plus Principle (PPP): if a W1-option is permissible, a W2-option is available, and W2 is the same as W1 except that it contains an additional happy person, then the W2-option must be permissible. I don't think this conflict raises any additional interesting issues, however. Rather, I think that the conflict between person-affecting views and PPP is just the conflict between person-affecting views and IIAd in disguise. To see why, consider the Restricted Pareto Plus Principle (RPPP), which applies solely to cases in which there are only two options available. I suggest that RPPP captures the distinctive intuition behind PPP. And person-affecting views like SHMV won't conflict with RPPP. But we can derive PPP from RPPP if we assume IIAd. And person-affecting views like SHMV will conflict with PPP. So it isn't until we add IIAd to RPPP that we get a conflict with SHMV. This suggests that the conflict between person-affecting views like SHMV and PPP stems from the implicit IIA-like assumptions built into the formulation of PPP, not from anything distinctive regarding PPP per se. (See Roberts (2003b) for another argument for why proponents of person-affecting views should reject PPP.)

This puts us in a position to see why the various kinds of "impossibility theorems" that have been offered in the literature - results showing that no theory can satisfy all of some set of desirable features - are not a threat to person-affecting views like SHMV.[36] These theorems explicitly assume that outcomes can be ranked according to their value in a situation-independent way. And these theorems implicitly assume that this notion of value is relevant to determining our moral obligations. If this notion of value had nothing to do with what we ought to do, then these results would be of little interest. But proponents of person-affecting views will take one of these two assumptions to be false. They can grant that there are notions of "value", such as monetary value, with respect to which the values of outcomes can be ranked in a situation-independent way. But they will deny that these notions of value are morally interesting, since they have little to do with our moral obligations. Likewise, they can grant that there are notions of "value" that track what we ought to do, such as the harm done by the outcome, with respect to which the values of outcomes can be ranked in a given situation.

[36] For examples of such theorems, see Ng (1989), Blackorby and Donaldson (1991), and Arrhenius (2000).
But then they will deny that the value of an outcome can be determined in a situation-independent way, since the harm done by an outcome will depend on what other outcomes are available to the agent in that situation.

This also allows us to see the arguments offered by proponents of intransitivity, such as Temkin (1987), Rachels (1998) and Persson (2004), in a new light. Proponents of intransitivity can be seen as arguing that there is no "all things considered better than" relation which is (i) directly tied to moral obligation, (ii) situation-independent, and (iii) transitive. Proponents of person-affecting views like SHMV will agree. But proponents of intransitivity take the culprit to be (iii). Proponents of person-affecting views will take the culprit to be either (i) or (ii). I.e., either the "all things considered better than" relation is not directly tied to moral obligation (in which case it's of little interest), or it's not situation-independent.[37]

[37] What if one thinks that it's analytic that an "all things considered better than" relation will satisfy (i)-(iii)? Then proponents of person-affecting views will follow proponents of intransitivity in denying that there is such a relation.

8 Objections

What objections to SHMV might one have? Let me briefly consider five kinds of objections, in ascending order of strength.

First, one might reject counterpart theory. In this paper I've simply assumed that counterpart theory is correct. And those who reject counterpart theory might get off the boat right from the start. That said, it's worth noting that even those who reject counterpart theory could employ the machinery of SHMV. They could take the algorithm for determining the deontic status of one's actions that SHMV provides, and strip it of the counterpart-theoretic interpretation it's been given here. Of course, if we pursue this approach, this algorithm is less well-motivated. How heavy this cost is, and whether the results SHMV yields are attractive enough to overcome it, is a question I'll leave for others to decide.

A second source of objections stems from the fact that, like the canonical forms of utilitarianism, SHMV is a well-being-focused theory. And we have lots of moral intuitions regarding things like rights, justice, desert, equality, and so on, that typical well-being-focused theories don't accommodate.[38]

[38] Likewise, our moral intuitions may distinguish between things like preventing harms and providing benefits, something that typical well-being-focused theories won't be sensitive to. (Thanks to Elizabeth Harman here.)

There are a couple of ways to respond to these worries within the general framework of this approach. The first is to follow the utilitarian tradition of trying to defuse these kinds of intuitions. The second is to try to incorporate such considerations into the basic notions the account employs. In the case of desert, for example, we might follow Feldman (1995) and incorporate such considerations into our assessment of a subject's well-being. I have little to say about which of these approaches we should employ with respect to which worries. But given the availability of these kinds of responses, I take it that these worries, while interesting and reasonable, do not threaten the viability of person-affecting approaches like SHMV.

A third source of objections comes from disagreements regarding Parfit's desiderata.
A number of people have argued that one or more of Parfit's four requirements are misguided, and that once we think about these cases in the right way, we'll see that some of the counterintuitive results Parfit tries to avoid are not really counterintuitive after all. For example, some have argued that we should come to accept the Repugnant Conclusion.[39] I don't want to rule out the possibility that these claims are correct.[40] That said, I think it's clear that there is at least a strong prima facie case to be made in favor of these verdicts. So I don't take these to be compelling objections to SHMV.

[39] For example, see Sikora (1978), Mackie (1985), Hare (1993), Ryberg (1996), Holtug (2004), Tännsjö (2004), and Huemer (2008).
[40] I say this because I'm sympathetic to these kinds of utilitarian apologetics. Indeed, I think something like utilitarianism may well be correct.

A fourth source of objections stems from worries about the asymmetry, described in section 3.2, regarding the moral significance of creating future people. For asymmetric theories like SHMV, there is moral pressure to not create individuals with negative well-being, but no corresponding pressure to create individuals with positive well-being. Consider again the Question of Creation case described in section 3.2:

The Question of Creation. Consider an agent who has a choice between two outcomes: creating no one, or creating both a happy person and an unhappy person:

  W1:  (no one exists)
  W2:  a (+5), b (−5)

On SHMV, the presence of the +5 individual is not a mark in favor of the W2-option, but the presence of the −5 individual is a mark against it. And there is nothing that tells against the W1-option. So according to SHMV, it's obligatory to choose the W1-option.

In section 3.2 I suggested that this asymmetric treatment of future individuals is intuitively plausible. But some have argued that this asymmetry is actually counterintuitive (see Sikora (1978) and Holtug (2004)). In the Question of Creation example, it is suggested that both options should be permissible. And if we were to make a's well-being a little higher (+6, say), it is suggested that the W2-option should be obligatory. I do not share these intuitions, though my feelings here aren't very strong.

But let me note that a different kind of case, which has been taken by critics to provide a decisive objection to the asymmetry, falls short of the task. Sikora (1978) and Holtug (2004) both discuss the question of whether we should continue to have children and propagate the human race, or whether we should stop reproducing and let the human race fall into extinction. In the former case we will bring about the existence of many more people, most of them happy, but a few with lives not worth living. If we accept the asymmetry, then there's pressure to not create unhappy individuals, but no countervailing pressure to create happy individuals. So it will be better to create no one than to create a bunch of future individuals, a few of whom would be unhappy. Thus if we accept the asymmetry, the critics argue, we're obligated to stop having children and to let the human race go extinct. Although our intuitions about this case are much stronger than in the Question of Creation example, I think this is a bad case to appeal to.
First, this case drags in a number of misleading or orthogonal intuitions, such as implicit assumptions about the desires of the populace and the consequences of such choices on their well-being, sentiments about things like the "right to procreate", intuitions regarding the intrinsic value of the survival of the species, and so on. (See Wolf (1997) for a discussion of some of these issues.) And these issues are orthogonal to the question of whether or not there's an asymmetry with respect to well-being.

Second, the argument won't generally go through in realistic cases. Consider: why think that your choice to procreate will result in the existence of individuals whose lives are not worth living? The thought might be this: "The effects of your choice to procreate will ripple outward, and change a great many things. And it may result in some individuals being harmed relative to their counterparts in the outcome that results from a different choice." But this is just as true of the choice not to procreate. And there's no reason to think that the decision to procreate will lead to more harm, all things considered, than the decision not to procreate. (Indeed, to get the conclusion that you should never procreate, it needs to be the case that all of your options to procreate (at any time, with any partner) will result in more harm than the other options available.)

Third, to the extent that we're concerned with subjective obligations, our assessment of this case will hang on tricky issues regarding probability, issues that we've been avoiding so far. When we choose to have children, we're taking a gamble with respect to how well-off their lives will be. We may be relatively confident that they'll have lives worth living, but we can't be entirely certain of this. In order to argue that people like us are obligated not to have children, given SHMV, the critic needs to claim that the epistemic possibility of our child not having a life worth living is sufficient to make it impermissible to have that child. But whether this is true will depend on how we decide to incorporate uncertainty into our theory. And there are natural ways of doing this (evaluating harm with respect to the expected well-being of a subject, for example) which will not yield the claim the critics require.

We can get around these complications by setting up a more straightforward case, such as the following: a deity is able to bring about one of two outcomes, both full of well-off subjects who will propagate indefinitely. But one outcome contains an additional pair of subjects, one who is extremely well-off (has a well-being as high as you like), and one who is so miserable that her life isn't worth living, though only barely so. This case avoids my complaints. And asymmetric theories like SHMV will maintain that the deity should decline to create the additional pair of subjects. But once we clean up the case like this, I no longer have the intuition that this prescription is incorrect.

A fifth kind of objection stems from cases like the following:[41]

[41] I owe this case to James Patten.

Asymmetric Creation. Consider an agent who has a choice between the following three outcomes:

  W1:  (no one exists)
  W2:  a1 (+5), a2 (+10)
  W3:  b1 (+6), b2 (+9)

One can show that a saturating counterpart relation will map a1 to b1 and a2 to b2, and vice versa. So the W1-option will do no harm, while both the W2 and W3-options will do 1 unit of harm (since a1 is 1 unit below her peak well-being in W2, and b2 is 1 unit below her peak well-being in W3). Thus according to SHMV, the W1-option is obligatory.
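Since there are only two candidate bijections between the populations of W2 and W3 here, the claim about which mapping the minimization condition selects can be checked by brute force with the hypothetical harm() sketch from section 4.2's illustration (my own verification, not the paper's):

    # Asymmetric Creation: try both bijections between {a1, a2} and {b1, b2}.
    from itertools import permutations

    worlds = {"W1": {}, "W2": {"a1": 5, "a2": 10}, "W3": {"b1": 6, "b2": 9}}
    for bs in permutations(["b1", "b2"]):
        m = dict(zip(["a1", "a2"], bs))
        inv = {b: a for a, b in m.items()}
        maps = {("W2", "W3"): m, ("W3", "W2"): inv}
        print(m, harm("W2", worlds, maps), harm("W3", worlds, maps))
    # {'a1': 'b1', 'a2': 'b2'} 1 1   <- selected by the minimization condition
    # {'a1': 'b2', 'a2': 'b1'} 4 4

Under the minimizing mapping each of the W2 and W3-options does 1 unit of harm, while the W1-option (with no one in W1) does none; hence the verdict above.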
This may seem like a funny prescription for SHMV to make. At first glance, one might think that all three options should be permissible, not just the W1-option. What's going on?

Here is my diagnosis. I think IIAd-style reasoning is illicitly sneaking into our assessment of this case. If we just had the W1 and W2-options to choose between, both options would be permissible. And if we just had the W1 and W3-options to choose between, both options would be permissible. So when presented with a case with all three options available, it's natural to implicitly appeal to IIAd-reasoning to reach the conclusion that all three options should be permissible. But as we saw in section 7.2, we should be wary of IIAd-style reasoning. And such reasoning won't generally lead to the intuitively correct prescriptions in these kinds of cases. Consider the following case:

Dominant Creation. Consider an agent who has a choice between the following three outcomes:

  W1:  (no one exists)
  W2:  a1 (+5), a2 (+10)
  W3:  b1 (+10), b2 (+20)

Regardless of how we map the individuals in W2 and W3 to one another, the W2-option will do 15 units of harm and the W3-option will do no harm. Thus on SHMV both the W1 and W3-options are permissible.

I take it that SHMV delivers the intuitively correct prescription in this case. But this prescription entails the same kind of IIAd-violation as SHMV's prescription in the previous case. As before, if we just had the W1 and W2-options, or the W1 and W3-options, then both options would be permissible. But we don't want to conclude from this that all three options are permissible in the Dominant Creation case.

If my diagnosis of the Asymmetric Creation case is correct, then the Asymmetric Creation case does not yield a problem for SHMV. Instead, it yields a moral: we need to be careful not to slip into IIAd-style reasoning when evaluating SHMV's prescriptions.

9 Conclusion

I've presented a person-affecting approach to the problems in population ethics that Parfit (1984) raises. The first part of the approach is a particular person-affecting view, the Harm Minimizing View:

The Harm Minimizing View (HMV): An option is morally permissible (for a at t) iff it minimizes harm.

The second part of the approach is a moral-counterpart proposal:

The Moral-Counterpart Proposal: When applying HMV, we should employ saturating counterpart relations.

Together, these two claims comprise the Saturating Harm Minimizing View (SHMV). SHMV has a number of attractive features. It accords with our person-affecting sentiments. It naturally captures our asymmetric intuitions regarding the moral significance of creating future people. And it satisfies all four of Parfit's requirements: it addresses the Non-Identity Problem, it avoids the Repugnant and Absurd Conclusions, and it resolves the Mere-Addition Paradox. Furthermore, it fulfills two of these requirements in a particularly satisfying way: it avoids both the strong and weak versions of the Repugnant Conclusion, and it resolves the Mere-Addition Paradox in a way that preserves all of our initial judgments with respect to the three key cases.

In Reasons and Persons, Parfit (1984) posed a problem: provide a satisfying normative account that complies with four requirements. A number of people have suggested looking toward person-affecting views for a solution. The Saturating Harm Minimizing View vindicates this suggestion.
It complies with Parfit's four requirements, and it offers an attractive solution to many of the problems in population ethics.[42]

[42] I'd like to thank Phil Bricker, Maya Eddon, Fred Feldman, Peter Graham, Elizabeth Harman, Julia Markovits, James Patten, Melinda Roberts, Ted Sider, Dennis Whitcomb, members of the 2011 Bellingham Summer Philosophy Conference, and an anonymous referee, for helpful comments and discussion.

References

Arrhenius, Gustaf. 2000. "An Impossibility Theorem for Welfarist Axiologies." Economics and Philosophy 16:247–266.
Arrhenius, Gustaf. 2003. "The Person-Affecting Restriction, Comparativism, and the Moral Status of Potential People." Ethical Perspectives 10:185–195.
Arrhenius, Gustaf. 2004. The Paradoxes of Future Generations and Normative Theory. In The Repugnant Conclusion: Essays on Population Ethics, ed. Jesper Ryberg and Torbjörn Tännsjö. Kluwer Academic Publishers pp. 201–218.
Blackorby, Charles and David Donaldson. 1991. "Normative Population Theory: A Comment." Social Choice and Welfare 8:261–267.
Boonin-Vail, David. 1996. "Don't Stop Thinking About Tomorrow: Two Paradoxes About Duties to Future Generations." Philosophy and Public Affairs 25:267–307.
Broome, John. 1992. Counting the Costs of Global Warming. White Horse Press.
Broome, John. 1999. Ethics out of Economics. Cambridge University Press.
Feldman, Fred. 1995. "Adjusting Utility for Justice: A Consequentialist Reply to the Objection from Justice." Philosophy and Phenomenological Research 55:567–585.
Hare, Caspar. 2007. "Voices From Another World: Must We Respect the Interests of People Who Do Not, and Will Never, Exist?" Ethics 117:498–523.
Hare, R. 1993. Possible People. In Essays in Bioethics. Clarendon Press pp. 67–83.
Holtug, Nils. 2004. Person-Affecting Moralities. In The Repugnant Conclusion: Essays on Population Ethics, ed. Jesper Ryberg and Torbjörn Tännsjö. Kluwer Academic Publishers.
Huemer, Michael. 2008. "In Defense of Repugnance." Mind 117.
Lewis, David. 1986. On The Plurality of Worlds. Blackwell.
Mackie, John L. 1985. The Parfit Population Problem. In Persons and Values. Clarendon Press.
Narveson, Jan. 1967. "Utilitarianism and New Generations." Mind 76:62–72.
Ng, Yew-Kwang. 1989. "What Should We Do About Future Generations? Impossibility of Parfit's Theory X." Economics and Philosophy 5:235–253.
Parfit, Derek. 1984. Reasons and Persons. Clarendon Press.
Parsons, Josh. 2002. "Axiological Actualism." Australasian Journal of Philosophy 80:137–147.
Persson, Ingmar. 2004. The Root of the Repugnant Conclusion and its Rebuttal. In The Repugnant Conclusion: Essays on Population Ethics, ed. Jesper Ryberg and Torbjörn Tännsjö. Kluwer Academic Publishers.
Rachels, Stuart. 1998. "Counterexamples to the Transitivity of Better Than." Australasian Journal of Philosophy 76:71–83.
Roberts, Melinda. 1998. Child versus Childmaker: Future Persons and Present Duties in Ethics and the Law. Rowman and Littlefield.
Roberts, Melinda. 2003a. "Can it Ever Be Better Never to Have Existed At All? Person-Based Consequentialism and a New Repugnant Conclusion." Journal of Applied Philosophy 20:159–185.
Roberts, Melinda. 2003b. "Is the Person-Affecting Intuition Paradoxical?" Theory and Decision 55:1–44.
Ryberg, Jesper. 1996. "Is the Repugnant Conclusion Repugnant?" Philosophical Papers 25:161–177.
Ryberg, Jesper, Torbjörn Tännsjö and Gustaf Arrhenius. 2009. "The Repugnant Conclusion." The Stanford Encyclopedia of Philosophy. URL = http://plato.stanford.edu/archives/sum2009/entries/repugnant-conclusion/.
Sikora, Richard I. 1978. Is It Wrong to Prevent the Existence of Future Generations? In Obligations to Future Generations, ed. Richard I. Sikora and Brian Barry. The White Horse Press pp. 112–166.
Tännsjö, Torbjörn. 2004. Why We Ought to Accept the Repugnant Conclusion. In The Repugnant Conclusion: Essays on Population Ethics, ed. Jesper Ryberg and Torbjörn Tännsjö. Kluwer Academic Publishers.
Temkin, Larry. 1987. "Intransitivity and the Mere Addition Paradox." Philosophy and Public Affairs 16:138–187.
Wolf, Clark. 1997. Person-Affecting Utilitarianism and Population Policy; Or, Sissy Jupe's Theory of Social Choice. In Contingent Future Persons, ed. Nick Fotion and Jan C. Heller. Kluwer Academic Publishers pp. 99–122.
Wrigley, Anthony. 2006. "Genetic Selections and Modal Harms." The Monist 89:505–525.

Appendix

Proof: A Normative Theory Satisfies IIAd iff it Meshes with IIA.

Here we'll prove that a normative theory satisfies IIAd iff it meshes with IIA.

Definitions: Let's begin with the definitions required to make the meaning of this claim precise. I'll say that the choice of an option A in a decision situation is in accordance with normative theory T iff T takes A to be permissible in that decision situation. I'll say that the choice of an option A in a decision situation is in accordance with preference function f iff, for all available options X, A ≥ X. And I'll say that a set of choices S is in accordance with f/T iff all and only the choices in that set are in accordance with f/T. Finally, I'll say that a normative theory T meshes with preference constraint C iff the set S of choices that are in accordance with T is also in accordance with some preference function f that satisfies C.

We can characterize IIA and IIAd as follows:

IIA: If there is a W1W2-situation in which the W1-option ≥ the W2-option, then in all W1W2-situations the W1-option ≥ the W2-option.

IIAd: (i) If there exists a W1W2-situation in which both the W1-option and the W2-option are permissible, then in all W1W2-situations the W1-option is permissible iff the W2-option is permissible. (ii) If there exists a W1W2-situation in which the W1-option is permissible and the W2-option is impermissible, then in all W1W2-situations the W2-option is impermissible.
The First Clause: First suppose a theory violates the first clause of IIAd : there are W1W2-situations in which both the W1-option and the W2-option are permissible according to T , and other W1W2-situations in which one is permissible and the other not. Consider the set S of choices in accordance with T . Any preference function f in accordance with S must be such that, (i) in the W1W2-situations in which both the W1-option and the W2option are permissible, the W1-option≥ the W2-option and the W2-option≥ the Wa-option, and (ii) in W1W2-situations in which (say) the W1-option is permissible and the W2-option is not, the W2-option 6≥ the W1-option. This violates IIA. I.B. The Second Clause: Suppose a theory violates the second clause of IIAd : there are W1W2-situations in which (say) the W1-option is permissible and the W2-option impermissible according to T , and other W1W2-situations in which the W2-option is permissible. Consider the set S of choices in accordance with T . Any preference function f in accordance with S must be such that, (i) in the W1W2-situations in which the W1-option is permissible and the W2-option impermissible, the W1-option ≥ the W2-option and the W2-option 6≥ the W1-option, and (ii) in the W1W2-situations in which the W2 option is permissible, the W2-option ≥ the W1-option. This violates IIA. 28 Part II: If a normative theory T satisfies IIAd , then the set of choices in accordance with T will mesh with IIA. (I.e., there will be some preference function f that's in accordance with this set of choices that meshes with IIA.) Consider the preference functions f that are in accordance with the set S of choices that's in accordance with a normative theory T that satisfies IIAd . Any such preference function f will either (i) mesh with IIA or (ii) not mesh with IIA. If f satisfies IIA, then we're done. If f doesn't satisfy IIA, then we'll show (II.A) that there's always a nearby preference function in accordance with S which does satisfy IIA. So no matter what, a comprehensive strategy in accordance with an IIAd-satisfying theory T will be in accordance with some preference function f which satisfies IIA. So any normative theory T that satisfies IIAd meshes with IIA. II.A. The Key Result: Let S be the set of choices in accordance with a normative theory T that satisfies IIAd , and let f be a preference function in accordance with S. If f violates IIA, then there is a always a nearby preference function in accordance with S which does satisfy IIA. Take any two W1W2-situations in which f yields a violation of IIA with respect to it's rankings of W1 and W2 in these situations. Since f violates IIA, it must be the case that the W1-option ≥ the W2-option in one situation, and the W1-option 6≥ the W2-option in the other. Call the first α and the second β . Let's consider what set S of choices f could be in accordance with, given these constraints. In particular, let's consider the choices with respect to the W1 and W2-options in α and β that f could be in accordance with. To start, we have 16 possibilities: in each situation, α and β , S could contain (i) the W1-option (and not the W2-option), (ii) the W2-option (and not the W1-option), (iii) both options or (iv) neither option. Let's narrow this down. First, S needs to be in accordance with a theory T that satisfies IIAd . This rules out 6 possibilities, leaving us with 10 possibilities.43 Second, S needs to be in accordance with f . 
Since f maintains at β that the W1-option ≱ the W2-option, it follows that S can't include the W1-option at β. This rules out 8 possibilities, 4 of which have already been ruled out, leaving us with 6 possibilities.44 Likewise, since f maintains at α that the W1-option ≥ the W2-option, it follows that S can't include the W2-option at α without also including the W1-option. This rules out 4 possibilities, 2 of which have already been ruled out, leaving us with 4 possibilities.45 These are the four possible ways that S could treat the W1 and W2-options in α and β that are compatible with the constraints we've imposed: (α: W1, β: neither), (α: neither, β: W2), (α: both, β: neither), (α: neither, β: neither).

Now consider two preference functions, f1 and f2, which are the same as f in every respect except for their preference rankings of the W1 and W2-options in α and β. While f maintains that the W1-option ≥ the W2-option in α and the W1-option ≱ the W2-option in β, f1 maintains that the W1-option ≥ the W2-option in both, and f2 maintains that the W1-option ≱ the W2-option in both. In each of the four possibilities for S compatible with the constraints, either f1 or f2 will be in accordance with S. (f1 is in accordance with (α: W1, β: neither), f2 is in accordance with (α: neither, β: W2), and both are in accordance with (α: both, β: neither) and (α: neither, β: neither).) And both f1 and f2 are compatible with IIA with respect to the W1 and W2-options in α and β. These nearby preference functions only 'fix' f with respect to one violation of IIA. But by iterating this process, we can transform any f in accordance with S which fails to satisfy IIA into a nearby alternative which is also in accordance with S and which does satisfy IIA.

43 The 6 possibilities this rules out are: (α: W1, β: W2), (α: W1, β: both), (α: W2, β: W1), (α: W2, β: both), (α: both, β: W1), (α: both, β: W2).
44 The 4 additional possibilities this rules out are: (α: W1, β: W1), (α: both, β: W1), (α: both, β: both), (α: neither, β: both).
45 The 2 additional possibilities this rules out are: (α: W2, β: W1), (α: W2, β: both).

Proof: Given IIAd and that Some Option is Permissible, the Three Judgments Yield a Contradiction.

Here we'll see that given IIAd and the assumption that some option is always permissible, our intuitive judgments in the three cases that comprise the Mere-Addition Paradox lead to a contradiction.

The intuitive judgments that are reported with respect to these three cases leave a bit of wiggle room. It is usually left open in the first case whether both options are intuitively permissible or whether only the W2-option is permissible. Likewise, it is usually left open in the third case whether both options are intuitively permissible or whether only the W1-option is permissible. This gives us four permutations. We'll show that all four of these possible prescriptions lead to contradictions.

First, consider the most natural judgments: suppose that both options are permissible in case one, and that only the W1-option is permissible in case three. And consider the case in which the agent has a choice between all three of the outcomes:

W1: a1–a10 at +10
W2: b1–b10 at +10; b11–b20 at +5
W3: c1–c10 at +9; c11–c20 at +9

Given the judgment in the first case, IIAd entails that in W1W2-situations the W1-option is permissible iff the W2-option is permissible. Given the second judgment, IIAd entails that in W2W3-situations the W2-option is impermissible.
It follows that in W1W2W3-situations, both the W1 and W2-options are impermissible. Given the third judgment, IIAd entails that in W1W3-situations the W3-option is impermissible. It follows that in W1W2W3-situations like this one, all three options are impermissible. But there must always be a permissible option available. Contradiction.

Second, suppose that only the W2-option is permissible in case one, and only the W1-option is permissible in case three. Then this will change what the first judgment and IIAd entail in the initial case: they will now entail that in W1W2-situations (and, a fortiori, in W1W2W3-situations) the W1-option is impermissible. Since the second and third judgments and IIAd entail that the W2 and W3-options are also impermissible in these situations, we again get the result that all three options are impermissible. But there must be a permissible option. Contradiction.

Third, suppose that both options are permissible in both cases one and three. Then this will change what the third judgment and IIAd entail in the initial case: they will now entail that in W1W2W3-situations, the W1-option is permissible iff the W3-option is permissible. Since the first judgment and IIAd entail that the W1-option is permissible iff the W2-option is permissible in these situations, it follows that all three options are either all permissible or all impermissible. And since the second judgment and IIAd entail that the W2-option is impermissible in these situations, we get the result that all three options are impermissible. But there must be a permissible option. Contradiction.

Fourth, suppose that only the W2-option is permissible in case one, and both options are permissible in case three. Then in W1W2W3-situations, the first judgment and IIAd will entail that the W1-option is impermissible, the second judgment and IIAd will entail that the W2-option is impermissible, and the third judgment and IIAd will entail that the W3-option is permissible iff the W1-option is permissible. Together these entail that all three options are impermissible. But there must be a permissible option. Contradiction.
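The possibility-counting in II.A (16 candidates cut down to 4) can be checked mechanically. Here is a minimal sketch in Python (my own verification aid, not part of the original proof) that encodes what S may contain in each of α and β and filters by the IIAd and f constraints stated above:

```python
from itertools import product

# What the choice set S may contain, in a given situation, with respect to
# the W1- and W2-options: just W1, just W2, both, or neither.
OPTIONS = ("W1", "W2", "both", "neither")

def iiad_ok(alpha, beta):
    """Clauses (i) and (ii) of IIAd, applied across the two situations."""
    for x, y in ((alpha, beta), (beta, alpha)):
        # (i) If both options are permissible somewhere, then everywhere
        # the W1-option is permissible iff the W2-option is.
        if x == "both" and y not in ("both", "neither"):
            return False
        # (ii) If one option is permissible and the other impermissible
        # somewhere, the latter is impermissible everywhere.
        if x == "W1" and y in ("W2", "both"):
            return False
        if x == "W2" and y in ("W1", "both"):
            return False
    return True

def f_ok(alpha, beta):
    """f ranks the W1-option >= the W2-option at alpha, but not at beta."""
    if beta in ("W1", "both"):   # W1 is not a top option at beta
        return False
    if alpha == "W2":            # at alpha, S can't include W2 without W1
        return False
    return True

survivors = [(a, b) for a, b in product(OPTIONS, repeat=2)
             if iiad_ok(a, b) and f_ok(a, b)]
print(survivors)
# [('W1', 'neither'), ('both', 'neither'), ('neither', 'W2'),
#  ('neither', 'neither')]
```

Running it prints exactly the four survivors listed in II.A.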
Shanna Steals grew up in different parts of rural Eastern Ontario, including Casselman, St-Albert, Limoges and Alfred, where she lives today. Shanna earned a double bachelor's degree in visual arts and in English literature from the University of Ottawa in 2006. As a student, she specialized in darkroom photography, 35 mm black and white, as well as in mixed media sculpture, assemblage and installation. She explored different mediums, including drawing, painting and performance, to convey her concepts, but keeps coming back to acrylic painting, mixed media, sculpture and drawing, the four core mediums in her creations. She currently works in multiple mediums, favouring traditional fabrication methods, given that the process of creation, materiality and experimentation are at the source of her art. She has completed a few major murals and designs, and has produced logos and masks for theatre, dance groups and a short film. She regularly exhibits her work in local and regional galleries. Shanna has been teaching visual arts to children and teens, and acrylic painting to adults, since 2003 in cultural centres and schools as a guest artist. In 2014, she took the Artist-Educator Foundations Course at the Royal Conservatory in partnership with the Ontario Arts Council. When the Conseil des arts Prescott-Russell Arts Council (CAPRAC) was founded in 2014, she became the organization's Coordinator, and later its Executive Director until 2020. She is now a freelancer and divides her time between teaching, consulting, her artistic practice and raising her son and Monarch butterflies. After finding a Monarch caterpillar on the wild milkweed in her garden during the summer of 2018, Shanna decided to embark on a new journey to save the Monarch butterfly. She quickly became a Monarch enthusiast and is known in her community as "Madame Papillon." To this day, she continues to share her extensive knowledge and experience as an artist, educator, cultural worker, exhibition curator, environmentalist and Monarch rearer.
https://www.shannasteals.ca/copy-of-bio
Of all the events taking place at this week's National High School Finals Rodeo, reined cow horse doesn't immediately come to mind as a signature rodeo event. After all, it has less to do with speed, strength or endurance than other events. Instead, reined cow horse is more like synchronized swimming or ice skating, in the sense that contestants perform a series of ordered moves and are then scored on the timing and precision of their movements. The event begins with the horse and rider alone on the arena floor, where they must perform a routine that includes running the length of the arena in a consistent rhythm, turning in circles and coming to a quick stop from a stride. Things get intense when a cow is released onto the event floor, and the rider must box the cow in with his or her horse, turn it back and forth along the fence and make sure the cow does not escape to the other end of the arena. It's as much a mental challenge as it is a physical one, something that fuels Bloomfield native Tatum Olson's passion for his chosen event. "I enjoy the challenge of remembering your reining pattern, and then when you go down the fence you get a great adrenaline rush, too," Olson said. Olson's parents have a history of training reined cow horses, making it second nature for Olson to take up the event when he was 9. Olson's mother, father and sister all made the roughly three-hour drive from Bloomfield to Lincoln over the past few days to support him, and their cheers were even loud enough to carry onto the event floor. "I could hear them cheering a little bit in the background and it was real nice," Olson said. Olson has grown in his reined cow horse abilities over the past few years, and his horse, Buster, has grown with him, too. Buster has been with the Olson family since he was just a baby, and seven years later he may be part of a championship-winning team. The top 20 finishers from the first two go-rounds qualify for a final short round on Friday morning, and Olson appears to be in good shape to make the cut. He finished third among 108 competitors with a score of 294 in his first go-round and followed with a score of 283 on Wednesday night. The two go-round scores are averaged together to find the top 20 qualifiers. This is Olson's third appearance in reined cow horse at the national finals, and he qualified for the final short round in both his previous finals. Two years ago, he finished 14th overall, and last year he finished 13th, meaning the pattern would suggest a 12th-place finish. Olson is hoping he can make an even bigger jump than just one place. "I'm hoping for a little more improvement this time," he said. "I felt really good and I was a little surprised at how I did, but I was really glad of how well it went. I've been having a real good time."
Carbon dioxide ($CO_2$) in the blood can cause severe health detriments, as well as milder cognitive effects, if its concentration rises to unsafe levels, typically through environmental exposure or through conditions affecting the respiratory system. Astronauts in particular are prone to this exposure due to high ambient $CO_2$ levels aboard spacecraft, and they have reported symptoms of $CO_2$ exposure that cannot always be identified as such without blood $CO_2$ monitoring. Current state-of-the-art monitoring methods like arterial blood gas analysis and capnography are limited mostly to clinical settings because they require invasive procedures, operator training, or bulky equipment, and they are not practical for use aboard spacecraft. Recent studies have investigated the use of radiofrequency (RF) resonant sensors to measure health diagnostics noninvasively, and such a sensor could potentially address this gap. This thesis focused on determining whether a spiral resonator is capable of detecting changes in dissolved $CO_2$ gas through the gas's effect on the electromagnetic properties of water. To do this, a benchtop model was developed to control the amount of $CO_2$ gas dissolved in water, which was then measured using the spiral resonator sensor. A significant correlation between negative shifts in the sensor's resonant frequency and increases in dissolved $CO_2$ was measured, with an $R^2$ value of 0.923. While the detection of $CO_2$ in blood will pose further challenges, such as accounting for pulsatile blood flow and for changes in other blood constituents and discerning their effects from those of $CO_2$, this work demonstrates the potential of this methodology as a noninvasive blood gas $CO_2$ sensor.
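For readers who want to see the shape of the analysis, here is a minimal sketch of the kind of linear fit and $R^2$ computation the correlation claim above rests on. The calibration numbers are made up for illustration; they are not data from the thesis.

```python
import numpy as np

# Hypothetical calibration data (NOT from the thesis): dissolved CO2 in mg/L
# and the sensor's measured resonant frequency in MHz.
co2_mg_per_l = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
resonant_mhz = np.array([915.2, 914.9, 914.5, 914.2, 913.8, 913.5])

# Shift relative to the CO2-free baseline; more dissolved CO2 is expected
# to shift the resonance downward (a negative shift).
shift_mhz = resonant_mhz - resonant_mhz[0]

# Ordinary least-squares line: shift = slope * concentration + intercept.
slope, intercept = np.polyfit(co2_mg_per_l, shift_mhz, 1)

# Coefficient of determination (R^2) for the linear fit.
predicted = slope * co2_mg_per_l + intercept
ss_res = np.sum((shift_mhz - predicted) ** 2)
ss_tot = np.sum((shift_mhz - shift_mhz.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope = {slope:.4f} MHz per mg/L, R^2 = {r_squared:.3f}")
```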
https://soar.wichita.edu/handle/10057/23464
Storyboard Description

Storyboard Text

Happy Birthday Bella
Hi Grandma
Your next clue is forests!
HAPPY BIRTHDAY BELLA

It was Bella's birthday, and her mom and dad gave her a special gift: a map. The map was a special adventure leading to a treasure chest, which is where her present was. The map took them to her grandma's house, and her grandma gave Bella the next clue. That was her last clue, and it carried her toward her surprise. But there was a problem: they were lost in the forest. Her mom and dad stood arguing over who had made the mistake. The three of them were scared and confused, but they continued down the path. Her surprise was a birthday party! The party was perfect and pretty like a flower.
https://www.storyboardthat.com/storyboards/08946835/the-birthday-surprise
The first four (l to r) are pahoa (daggers) and the last is a lei-o-mano (shark's lei) in a knuckle-duster style.

1. Unknown if from pre- or post-European contact. The point is a marlin bill and the handle is made of kauila with tiger shark teeth embedded. A very devastating weapon.
2. Made from whale bone with baby tiger shark teeth.
3. Classic pahoa with eight tiger shark teeth embedded in the front.
5. Lei-o-mano in knuckle-duster style. Made from uhiuhi (a very rare native hardwood) with tiger shark teeth.
https://www.jennifercritesphotography.com/Stock-Photography/Hawaii/Martial-Arts/i-48NzcqJ
Chinese scientists have attempted to answer this question by creating a computer model demonstrating how people could settle the Milky Way. Colonization of other planets has been on people's minds for a long time, but what will this process look like? The research by Chinese scientists presents the most likely scenario for human settlement across the galaxy. The first challenge in settling other planets is the enormous resources and new technology required, which, according to the research, will only appear in a decade's time. The scientists considered around 100,000 planetary systems as having potential for colonization, and they shared the most interesting settlement routes. They started by showing how humanity would leave the Solar System. Upon reaching a planet deemed suitable for living, the settlers would start a colony; later, the planet would serve as a base for further space flights. The missions would use enormous spaceships carrying large numbers of people. Because these trips might take a very long time to complete, hundreds of thousands of generations would pass during a single flight, with descendants of the first travelers arriving at the final destination. The end of the simulation shows humanity settling in the Perseus Arm of the Milky Way.
https://hitecher.com/news/chinese-scientists-explain-how-our-galaxy-will-be-colonized
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a divisional of application Ser. No. 09/383,366, filed Aug. 26, 1999, now U.S. Pat. No. 6,625,427. In addition, this application claims the benefit of the earlier filing date of Japanese Patent Application No. 10-240731, filed Aug. 26, 1998, the entire contents of which are hereby incorporated herein by reference.

BACKGROUND OF THE INVENTION

The present invention relates to a multicarrier type radio transmission apparatus for combining signals of a plurality of channels with a plurality of carrier frequencies into one signal to effect radio transmission. This application is based on Japanese Patent Application No. 10-240731, filed Aug. 26, 1998, the content of which is incorporated herein by reference.

The individual amplification system was first put into practice as the amplification system in the multicarrier radio transmission apparatus. In the radio transmission apparatus of the individual amplification system, amplifiers 10-1, 10-2, . . . , 10-n of a number equal to the number of carrier frequencies (the number of channels) used are provided as shown in FIG. 1, and transmission signals of the respective channels (signals from signal generators 12-1, 12-2, . . . , 12-n) are amplified by the respective amplifiers 10-1, 10-2, . . . , 10-n. Since the signals input to the respective amplifiers 10-1, 10-2, . . . , 10-n are signals each corresponding to one of the channels, there occurs no possibility that the signal of each channel will interfere with the signal of another channel. Thus, the amplifiers 10-1, 10-2, . . . , 10-n can be operated in a high-efficiency operating region. Further, this system is advantageous in heat radiation because the amplifiers 10-1, 10-2, . . . , 10-n are separately provided for the n channels.

Output signals of the amplifiers 10-1, 10-2, . . . , 10-n are combined in power by a power combiner 18 and supplied to an antenna (not shown). In order to prevent signals reflected from the power combiner 18 from being fed back to the amplifiers 10-1, 10-2, . . . , 10-n and causing distortion in the amplified signals, it is necessary to insert isolators 16-1, 16-2, . . . , 16-n between the amplifiers 10-1, 10-2, . . . , 10-n and the power combiner 18 so as to maintain isolation between the channels. However, if the isolators 16-1, 16-2, . . . , 16-n are inserted, there occurs a problem that great loss occurs to produce a large amount of heat.

Further, in order to simplify the construction of the power combiner 18, it is necessary to supply the amplified signals to the power combiner 18 via band-pass filters 14-1, 14-2, . . . , 14-n having high channel selectivity. However, since the pass bands are fixed in the conventional filters, it is impossible to change the frequency of the carrier signal of each channel if the pass bands of the filters 14-1, 14-2, . . . , 14-n are set to correspond to the bands of signals of the respective channels output from the signal generators 12-1, 12-2, . . . , 12-n. In the actual transmission system, there is a request for changing the carrier frequency of each channel to a carrier frequency of another channel assigned to the system, and there will be a request for changing the bandwidth of the carrier frequency in the future, but the individual amplification system cannot cope with these requests.
In order to cope with these requests at least to some extent, the pass bands of the filters 14-1, 14-2, . . . , 14-n may be set equal to each other, with the carrier frequency band of all of the channels set as the pass band. However, in this case, it is also necessary to insert the isolators in order to improve the channel selectivity. Therefore, the power combiner 18 becomes complicated in construction.

In order to solve the problem of the individual amplification system, a collective amplification system was developed. As shown in FIG. 2, in the collective amplification system, signals of the carrier frequencies of the respective channels output from the transmission signal generators 12-1, 12-2, . . . , 12-n are first combined by a power combiner 22 and then collectively amplified by an amplifier 24. Thus, since the power combiner 22 is not provided in the succeeding stage of the amplifier 24, it becomes unnecessary to connect isolators in the preceding stage of the power combiner 22, and the problem of loss and heat generation which occurs in the individual amplification system due to the presence of the isolators will not occur. However, since a plurality of channel signals are simultaneously input to the amplifier 24, the linearity of the amplifier 24 becomes important in order to prevent inter-modulation distortion between the channel signals (generally, high-linearity operation and high-efficiency operation conflict with each other); in recent years, however, high-efficiency operation of a linear amplifier has been attained by various technical improvements. In this respect, the advantage in efficiency of the collective amplification system is recognized.

However, in the collective amplification system, an operation efficiency of approx. 40% at maximum can be attained when the maximum permissible number of channels is used, but if there is an unused channel, the efficiency is lowered. This is because the amplifier 24 must be operated in a low-efficiency operating region (low input power portion) when the number of channels used is small, since the input power to the amplifier 24 changes according to the number of channels used. Further, in the collective amplification system, since heat generation is concentrated in one portion of the amplifier 24, it becomes necessary to take a large-scale heat radiation measure. Since the number of accommodated channels is determined by the maximum permissible number of channels of the amplifier 24 and the value of the maximum permissible power of the filter 26, there occurs a problem that it is difficult to increase the number of accommodated channels after the system has been designed. Further, there occurs a problem that a large permissible power becomes necessary in the specification of the filter 26 in order to deal with a large number of channels.
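To make the channel-utilization argument concrete, here is a small illustrative model (my own sketch, not from the patent) that assumes a textbook square-root back-off law for a linear amplifier's efficiency. The 40% peak efficiency and the eight-channel sizing are assumptions taken loosely from the figures above; the point is only that the collective amplifier's efficiency falls when channels go unused, while per-channel amplifiers can simply be switched off.

```python
import math

ETA_MAX = 0.40   # assumed peak efficiency at full drive (illustrative)
N_MAX = 8        # number of channels the collective amplifier is sized for

def collective_efficiency(active_channels: int) -> float:
    """Back-off model: efficiency ~ eta_max * sqrt(P_out / P_sat),
    with output power proportional to the number of active channels."""
    backoff = active_channels / N_MAX
    return ETA_MAX * math.sqrt(backoff)

def individual_efficiency(active_channels: int) -> float:
    """Each channel has its own amplifier run at full drive; unused
    amplifiers are switched off, so efficiency stays flat."""
    return ETA_MAX if active_channels > 0 else 0.0

for k in (1, 2, 4, 8):
    print(f"{k} active: collective {collective_efficiency(k):.2f}, "
          f"individual {individual_efficiency(k):.2f}")
# With one of eight channels active, the collective model drops to ~0.14
# while the individual model stays at 0.40.
```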
BRIEF SUMMARY OF THE INVENTION

Accordingly, it is an object of the present invention to solve the problems of a conventional multicarrier radio transmission apparatus of the individual amplification system. Another object of the present invention is to provide a radio transmission apparatus which can prevent inter-modulation distortion between channels in the amplifier and reduce the adjacent channel leakage power to attain high-efficiency operation of the amplifier, higher than the efficiency of the conventional individual amplification system. Still another object of the present invention is to provide a radio transmission apparatus which is excellent in heat radiation and can attain high-efficiency operation irrespective of the utilization factor of the channels. Another object of the present invention is to provide a radio transmission apparatus which is highly flexible with respect to an increase or decrease in the number of accommodated channels. Another object of the present invention is to provide a radio transmission apparatus which can cope with differences in the transmission rate with high flexibility.

A radio transmission apparatus according to the present invention performs radio transmission by use of a plurality of carrier frequencies and comprises signal processing systems each including a transmission signal generator for generating a signal of one carrier frequency, an amplifier for amplifying the signal generated from the transmission signal generator, and a variable band-pass filter for permitting only the signal of the one carrier frequency among the output signal of the amplifier to pass therethrough; and a combiner for combining signals output from the variable band-pass filters of the plurality of signal processing systems into one signal and using the combined signal as a transmission signal.

In the above radio transmission apparatus, since a signal generated from the transmission signal generator and passing through the amplifier and variable band-pass filter which constitute one signal processing system is a signal of one carrier frequency, inter-modulation distortion between the signals of the respective carrier frequencies will not occur in the amplifier. Further, since only the signal of one carrier frequency is permitted to pass through the filter, the adjacent channel leakage power is suppressed so as to make it difficult for inter-modulation distortion of the signal between the channels to occur. Therefore, the individual amplifiers can be operated with an efficiency higher than that of the conventional individual amplification system.

Generally, an amplifier producing no inter-modulation distortion is operated with the lowest efficiency. In the conventional individual amplification system, the amplifier can be operated with an efficiency higher than that of an amplifier producing no inter-modulation distortion, since it is required only that the adjacent channel leakage power of the amplifier be limited to below a predetermined level. According to the present invention, since the leakage power is suppressed by the filter, it is not required of the amplifier that the adjacent channel leakage power be limited to below the predetermined level. As a result, the amplifier can be operated with an efficiency higher than that of the conventional individual amplification system. Further, since the amplifiers are separately disposed for the respective channels, the heat radiation characteristic is improved so as to make a large-scale heat radiation structure unnecessary.

Since the filter can be constituted by a filter having a resistance to withstand the power of a signal of only one channel, a superconductive filter, for example, can be used as the filter of the radio transmission apparatus. The transmission signal generators of the plurality of signal processing systems have carrier frequencies which differ for the respective signal processing systems.
Therefore, interference due to inter-modulation with the signal of another carrier frequency can be prevented, and it becomes possible to easily combine the powers of the signals of the respective channels by use of the combiner, thereby making it possible to save power in the radio transmission apparatus. Further, the center frequency of the variable band-pass filter can be made variable with the bandwidth of the pass band kept constant, or both the center frequency and the bandwidth of the pass band can be made variable. Therefore, it becomes possible to cope with data transmission at rates ranging from relatively low, as in the case of voice, to relatively high, as in the case of moving pictures, by determining the bandwidth of the pass band of each variable band-pass filter according to the signal transmission rate.

Further, if superconductive filters are used as the variable band-pass filters, a refrigerator for cooling the superconductive filters, power monitoring means for monitoring the powers of the signals output from the amplifiers, temperature monitoring means for monitoring the temperatures of the superconductive filters, and control means for variably controlling the operation efficiency of the refrigerator based on the power monitoring result obtained by the power monitoring means and the filter temperature obtained by the temperature monitoring means are provided. Thus, the refrigerator can be efficiently operated and power saving can be attained.

Additional objects and advantages of the present invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the present invention. The objects and advantages of the present invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out hereinafter.

DETAILED DESCRIPTION OF THE INVENTION

A preferred embodiment of a multicarrier radio transmission apparatus according to the present invention will now be described with reference to the accompanying drawings.

First Embodiment

FIG. 3 is a block diagram showing the construction of the first embodiment. The first embodiment is constructed by transmission signal generators 32-1, 32-2, . . . , 32-n, amplifiers 34-1, 34-2, . . . , 34-n, filters 36-1, 36-2, . . . , 36-n, a power combiner 38, and a filter controller 40.

Each of the transmission signal generators 32-1, 32-2, . . . , 32-n generates a transmission signal of one carrier frequency. For example, as shown in FIG. 4, each of the transmission signal generators 32-1, 32-2, . . . , 32-n includes a baseband signal processing circuit 42 for converting a digital data signal to be transmitted into modulation signals I and Q, D/A converters 44-1 and 44-2 for converting the digital modulation signals into analog modulation signals, and a frequency converter 48 for converting the analog modulation signals into a signal of a carrier frequency band for radio transmission. The frequency converter 48 includes a local oscillator 50, a mixer 46-1 for mixing the output of the D/A converter 44-1 with the output of the local oscillator 50, a mixer 46-2 for mixing the output of the D/A converter 44-2 with the output of the local oscillator 50 supplied via a 90° phase shifter 52, and an adder for adding together the outputs of the mixers 46-1 and 46-2. The oscillation frequencies of the local oscillators 50 are different for the respective transmission signal generators 32-1, 32-2, . . . , 32-n.
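The quadrature upconversion performed inside the frequency converter 48 can be sketched numerically. This is a generic illustration of the standard I/Q mixer-and-adder structure shown in FIG. 4, not code from the patent; the sample rate, carrier frequency and baseband tone are arbitrary choices.

```python
import numpy as np

fs = 1.0e6      # sample rate in Hz (arbitrary for illustration)
fc = 100.0e3    # carrier (local oscillator) frequency in Hz (arbitrary)
t = np.arange(0, 1e-3, 1 / fs)

# Analog I and Q modulation signals (here, a simple 5 kHz baseband tone).
i_t = np.cos(2 * np.pi * 5e3 * t)
q_t = np.sin(2 * np.pi * 5e3 * t)

# Mixer 1: I branch multiplied by the local oscillator output.
# Mixer 2: Q branch multiplied by the LO delayed by the 90° phase shifter.
# Adder: the two mixer outputs are summed into one RF signal.
lo = np.cos(2 * np.pi * fc * t)
lo_90 = np.cos(2 * np.pi * fc * t - np.pi / 2)   # = sin(2*pi*fc*t)
rf = i_t * lo + q_t * lo_90

# cos(a)cos(b) + sin(a)sin(b) = cos(b - a): with these inputs the sum is a
# single tone at fc - 5 kHz, i.e. one sideband of the upconverted signal;
# flipping the sign of the Q branch would select the other sideband.
```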
The amplifiers 34-1, 34-2, . . . , 34-n amplify the signals of different carrier frequencies generated from the transmission signal generators 32-1, 32-2, . . . , 32-n to corresponding transmission power levels and then output the amplified signals to the filters 36-1, 36-2, . . . , 36-n.

Each of the filters 36-1, 36-2, . . . , 36-n is a variable band-pass filter and has a function of shifting the center frequency of its pass band (of fixed bandwidth) according to a control signal from the filter controller 40. The filter controller 40 selects the center frequency of the pass band of each of the filters 36-1, 36-2, . . . , 36-n according to the control signal so as to permit only a signal of one desired carrier frequency among the plurality of carrier frequencies which can be used for transmission (that is, one of the output signals of the transmission signal generators 32-1, 32-2, . . . , 32-n) to pass through a corresponding one of the filters 36-1, 36-2, . . . , 36-n.

As the filters 36-1, 36-2, . . . , 36-n, superconductive filters, which will be described later, are used. Therefore, the filters 36-1, 36-2, . . . , 36-n are contained in refrigerators 40-1, 40-2, . . . , 40-n. Since the refrigerator is expensive, one refrigerator may be provided to contain all of the filters instead of using a refrigerator for each filter; however, in order to suppress the influence of a fault in a refrigerator, it is preferable to provide a plurality of refrigerators, with two or more filters contained in one refrigerator, although it is not absolutely required to provide a refrigerator for each filter. With this construction, even if some of the refrigerators become defective, the operation of eliminating the defective portion can be effected without interrupting the whole operation of the radio transmission apparatus.

A series of devices (signal processing systems), each constituted by one of the transmission signal generators 32-1, 32-2, . . . , 32-n, amplifiers 34-1, 34-2, . . . , 34-n and filters 36-1, 36-2, . . . , 36-n, are provided in a number corresponding to the number of carrier frequencies which can be used in the radio transmission apparatus and are connected to the power combiner 38 in parallel. Signals passing through the filters 36-1, 36-2, . . . , 36-n are combined in power by the power combiner 38 to make one transmission signal which is in turn transmitted via an antenna (not shown).

An example of the construction of the filters 36-1, 36-2, . . . , 36-n is explained below. FIGS. 5A and 5B are a plan view and cross-sectional view showing one example of a superconductive filter. The filter is a two-stage frequency-variable filter, as shown in the equivalent circuit diagram of FIG. 6. A superconductive ground conductor layer 302 is formed on one surface of a dielectric substrate 300, and superconductive signal conductor layers 304 and 306 are formed on the other surface thereof so as to constitute a microstrip line structure. A ferroelectric layer 308 (whose dielectric constant is variable) is formed on the signal conductor layers 304 and 306 of the microstrip line structure. Interdigital electrodes 310, 312, and 314 for applying a voltage to the ferroelectric layer 308 are formed on the resonator. The signal conductor layer 304, the ferroelectric layer 308, and the electrodes 310, 312, and 314 constitute a resonator.
The interdigital electrodes 310 are each disposed between a resonator 304 and an input/output line 306, the interdigital electrodes 312 are each disposed on a resonator 304, and the interdigital electrode 314 is disposed between the resonators 304.

The equivalent circuit of the filter is shown in FIG. 6. The dielectric constant of the ferroelectric layer 308 is varied by changing the voltages V1, V1′; V5, V5′ applied to the interdigital electrodes 310, and as a result, the coupling factors k11, k12 for the input/output can be changed and the external Q can be adjusted. Further, the dielectric constant of the ferroelectric layer 308 is varied by changing the voltages V2, V2′ applied to the interdigital electrodes 312, and as a result, the resonance frequencies f01, f02 can be changed. Also, the dielectric constant of the ferroelectric layer 308 is varied by changing the voltages V3, V3′ applied to the interdigital electrode 314, and as a result, the coupling factor k2 between the resonators 304 can be changed. Thus, by combining these voltage adjusting operations, it is possible to realize a superconductive filter in which the center frequency and bandwidth can be varied and a high Q and high channel selectivity can be attained.

As another example of the variable band-pass filter, a filter disclosed in U.S. Ser. No. 08/653,270 (notice of allowance was issued on Apr. 26, 1999) can also be used. The content of U.S. Ser. No. 08/653,270 is incorporated herein by reference. FIGS. 7A and 7B show a first example thereof. The microstrip structure is constructed by forming input/output lines 703 and a plurality of resonance elements 704 on the front surface of a dielectric substrate 702 having a ground layer 701 formed on the rear surface thereof. Dielectric layers 705, whose relative permittivity ∈′ is changed by application of voltage, are formed on the dielectric substrate 702 to construct a multi-stage filter.

In order to adjust the filter characteristic and resonance frequency of each resonance element in the high-frequency device with the above structure, it is necessary to adjust the coupling factor between the resonance elements and the coupling amount of the external Q as well as the resonance element length. When an attempt is made to adjust all of the adjustment portions, if the number of stages of the filter (the number of resonance elements 704) is set to n, it is necessary to make adjustments at (2n+1) portions. Therefore, the dielectric layers 705 are respectively formed on the stages of the filter, and the effective resonance length, the coupling factor between the resonance elements and the coupling amount of the external Q are changed by application of a DC voltage from a variable voltage source 706 to the dielectric layers 705 via voltage application electrodes (not shown) formed on the end portions of the dielectric layers 705. With this structure, the resonance frequency and filter characteristic can be easily adjusted to desired characteristics.

FIGS. 8A and 8B show a second example of the variable band-pass filter.
The high-frequency device has a microstrip line structure constructed by a resonance element 1701 and input/output lines 1704 formed on the front surface of a dielectric substrate 1702 having a ground layer 1707 formed on the rear surface thereof. A dielectric layer 1703, whose permittivity is changed by application of voltage, is formed on the resonance element 1701. Voltage application interdigital electrodes 1706 having a thickness equal to or less than the skin depth δ expressed by the following equation are formed on the dielectric layer 1703:

δ = √(1 / (π f μ σ))

where f denotes the frequency of the input signal of the filter, μ denotes the magnetic permeability, and σ denotes the electric conductivity of the electrode.

Further, a plurality of variable voltage sources 1705 are connected to the voltage application electrodes 1706 so as to make it possible to change the permittivity of the dielectric layer 1703 in plural positions.
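As a quick numerical illustration of the skin-depth formula above, here is a short calculation with assumed values (a copper electrode at 2 GHz; neither value comes from the patent):

```python
import math

f = 2.0e9                  # input signal frequency in Hz (assumed)
mu = 4 * math.pi * 1e-7    # permeability of free space, H/m
sigma = 5.8e7              # conductivity of copper, S/m (assumed material)

delta = math.sqrt(1.0 / (math.pi * f * mu * sigma))
print(f"skin depth = {delta * 1e6:.2f} micrometers")  # ~1.48 µm at 2 GHz
```

The electrode thickness would then be chosen at or below this value.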
In a radio transmission system such as a portable telephone system, the number of carrier frequencies which can be used in each base station varies according to the number of calls generated in the cell covered by the base station. Therefore, the filter controller 40 changes the center frequencies of the pass bands of the filters 36-1, 36-2, . . . , 36-n as required when the number of carrier frequencies used by the base station varies. FIG. 9 shows an example of the pass band shifting of the filters 36-1, 36-2, . . . , 36-n; a state in which the pass band of a filter is shifted from a band 101 to a band 102 (the center frequency is changed) is shown. A band 103 indicates the pass band of another filter, bands 105-1 to 105-n indicate channels assigned to the radio transmission apparatus, and bands 104-1 to 104-n and 106-1 to 106-n indicate channels assigned to other radio transmission apparatuses.

As described above, according to the present invention, since the signals generated from the transmission signal generators 32-1, 32-2, . . . , 32-n and passing through the amplifiers 34-1, 34-2, . . . , 34-n and filters 36-1, 36-2, . . . , 36-n which constitute the signal processing systems are of respective carrier frequencies, it becomes unnecessary to connect isolators between the filters 36-1, 36-2, . . . , 36-n and the power combiner 38, unlike the conventional individual amplification system, and the problem of the conventional individual amplification system that great loss occurs to produce a large amount of heat can be solved. Further, since the channel selectivity of the filters is high, the power combiner 38 can be made simple in construction.

Thus, the present embodiment relates to the individual amplification system, and the problems of the collective amplification system can also be solved. That is, since signals of a plurality of carrier frequencies are simultaneously input to the one amplifier in the collective amplification system, inter-modulation distortion occurs between the signals of the carrier frequencies due to the non-linearity of the amplifier, and the power of a distorted signal generated by the inter-modulation distortion interferes with and adversely influences the signal of another carrier frequency. On the other hand, according to the present embodiment, since the signal input to each of the amplifiers 34-1, 34-2, . . . , 34-n is only a signal of one carrier frequency, inter-modulation distortion between the signals of the respective carrier frequencies will not occur.

Further, in the present embodiment, interference due to inter-modulation with the signal of another carrier frequency can be prevented by setting the pass bands of the filters 36-1, 36-2, . . . , 36-n of the respective systems to the bands for different carrier frequencies. Therefore, it becomes possible to easily combine the signals of the respective channels in power by the combiner 38.

Generally, it is understood that, in an amplifier, modulation distortions 302 and 303 appear on both sides of an original signal 301 when a modulation signal is amplified, as shown in FIG. 10, and power leaks into the adjacent channels 304 and 305, causing interference with the signals of the adjacent channels. The specification determines the upper limit of this adjacent channel leakage power. For example, in a modulation system in which a signal component is contained in the amplitude component, such as QPSK, the amplifier is backed off and operated with low efficiency in order to securely attain the linearity of the power amplifier. On the other hand, in the present embodiment, since only a signal of one carrier frequency is permitted to pass through each of the filters 36-1, 36-2, . . . , 36-n, the adjacent channel leakage power is suppressed and it becomes difficult for inter-modulation distortion of the signal between the channels to occur. Therefore, the amplifiers 34-1, 34-2, . . . , 34-n can be operated with high efficiency. Generally, the ratio of the power consumption of the amplifier to the whole power consumption of the radio transmission apparatus is extremely large, and therefore the high efficiency of the amplifier greatly contributes to power saving of the radio transmission apparatus.

Further, as shown in FIG. 11, for example, it is rare for all of the channels to be used all of the time in each base station, and only one channel may be used in some time zones. In the collective amplification system, even if only one channel is used, approximately the same power as that when all of the channels are used becomes necessary, and the power efficiency is further lowered. In the present embodiment, the power supplies of those of the amplifiers 34-1, 34-2, . . . , 34-n which are not used are turned OFF and power is supplied only to the channels used. As a result, more effective power saving can be attained. More specifically, although not shown in the drawing, an amplifier controller which is similar to the filter controller is provided to turn ON/OFF the power supplies of the amplifiers 34-1, 34-2, . . . , 34-n according to the number of transmission channels.

Further, the heat source is concentrated in one amplifier in the collective amplification system, and the heat radiation measure is an important subject; in the present embodiment, however, the whole heat source is dispersed into a plurality of small heat sources by separately disposing the amplifiers 34-1, 34-2, . . . , 34-n for the respective channels, and thus the heat radiation property can be improved, making it unnecessary to provide a large-scale heat radiation structure.

Further, in the collective amplification system, since the signal of each channel is amplified after power combining, it is required that, for example, the filter connected in the succeeding stage of the amplifier have high power resistance. The filter is constructed by resonators, and a resonator has a function of concentrating power at the resonance frequency.
Therefore, if the filter is constructed by a superconductive material, it operates only at power levels kept within the critical power density of the superconductor. That is, if a superconductive filter is used in the collective amplification system, the superconductive filter is required to have a high power withstand characteristic, and it is difficult in practice to realize a superconductive filter having such a high withstand power characteristic. On the other hand, in the present embodiment, since a filter having a resistance to withstand the power of a signal of only one channel can be used, the low-loss characteristic can be maintained even when a superconductive filter is used.

Other embodiments of the multicarrier radio transmission apparatus according to the present invention will now be described. The same portions as those of the first embodiment are indicated by the same reference numerals and their detailed description is omitted.

Second Embodiment

FIG. 12 shows a circuit construction for detecting a fault in the signal processing system of each channel and interrupting the operation of the signal processing system of a channel when a fault is detected. For this purpose, a fault detector 62 is connected to the transmission signal generators 32-1, 32-2, . . . , 32-n, a fault detector 64 is connected to the amplifiers 34-1, 34-2, . . . , 34-n, and a fault detector 66 is connected to the filters 36-1, 36-2, . . . , 36-n. Outputs of the fault detectors 62, 64, 66 are supplied to a transmission controller 68, and generation of the transmission signal of the channel having a fault is interrupted. Although not shown in the drawing, the power supply of the amplifier of the channel having a fault may be turned OFF according to the outputs of the fault detectors 62, 64, 66. In this embodiment, two filters are contained in one refrigerator.

Thus, since this embodiment is an individual amplification system, the control operation is effected to interrupt the operation of the system of a channel including a device having a fault and to continue the operation by use of only the systems of the remaining correct channels. In the collective amplification system, the whole system is set into a non-usable state when, for example, the amplifier becomes defective; in this embodiment, however, since the systems of the correct channels can be kept operating until the fault is eliminated, a radio transmission apparatus with high reliability can be realized.

Third Embodiment

FIG. 13 shows the construction of a radio transmission apparatus of the third embodiment. The filter controller 40 for shifting the pass bands of the filters is omitted from the drawing for the sake of simplicity. Power monitors 72-1, 72-2, . . . , 72-n for monitoring the powers of the signals of the individual carrier frequencies are connected to the output terminals of the amplifiers 34-1, 34-2, . . . , 34-n, and temperature monitors 74-1, 74-2, . . . , 74-n for monitoring the temperatures of the filters (superconductive filters) 36-1, 36-2, . . . , 36-n are connected to the output terminals of the filters 36-1, 36-2, . . . , 36-n. Outputs of the power monitors 72-1, 72-2, . . . , 72-n and temperature monitors 74-1, 74-2, . . . , 74-n are supplied to a refrigerator controller 76 to variably control the operation efficiency of the refrigerators 40-1, 40-2, . . . , 40-n for cooling the respective filters 36-1, 36-2, . . . , 36-n
based on the results of the power measurement and temperature measurement.

As described before, the number of carrier frequencies used and the transmission power vary with time, and the amounts of heat generated in the filters 36-1, 36-2, . . . , 36-n also vary. Therefore, the refrigerators 40-1, 40-2, . . . , 40-n are efficiently operated by varying the abilities of the refrigerators 40-1, 40-2, . . . , 40-n according to the amounts of heat generated in the filters 36-1, 36-2, . . . , 36-n. As a result, power saving can be attained.

A slight time difference due to heat conduction occurs between the temperature measuring timing and the heat generation timing of the filters 36-1, 36-2, . . . , 36-n. Therefore, in a case where the refrigerators are controlled only based on the temperature monitoring results, the constant temperature control operation cannot be attained because of this time difference, leading to a fluctuation in the frequency. Therefore, the power monitors 72-1, 72-2, . . . , 72-n detect variations in the powers of the amplifiers 34-1, 34-2, . . . , 34-n, that is, the powers passing through the filters 36-1, 36-2, . . . , 36-n are also measured, and the abilities of the refrigerators 40-1, 40-2, . . . , 40-n are varied according to both monitoring results so as to stabilize the frequency.

Fourth Embodiment

As shown in FIG. 14, if the transmission power is known in advance, the power monitors 72-1, 72-2, . . . , 72-n for detecting variations in the powers of the amplifiers 34-1, 34-2, . . . , 34-n can be omitted, and the refrigerator controller 76 may effect the control operation to vary the abilities of the refrigerators 40-1, 40-2, . . . , 40-n based on the output timings of the transmission signal generators 32-1, 32-2, . . . , 32-n and the monitoring results of the temperature monitors 74-1, 74-2, . . . , 74-n for monitoring the temperatures of the filters 36-1, 36-2, . . . , 36-n. The filter controller 40 for shifting the pass bands of the filters is also omitted from FIG. 14 for the sake of simplicity.

Fifth Embodiment

Next, a fifth embodiment of a radio transmission apparatus according to the present invention is explained. In the above embodiments, the pass bandwidth of the filter is fixed and only the center frequency is variable to shift the pass band. However, the pass bandwidth of the superconductive filter is variable, as explained with reference to FIGS. 5A and 5B, and an embodiment in which the pass bandwidth can be changed together with the center frequency is explained here. Since the circuit diagram is the same as that of each of the first to fourth embodiments, it is not shown in the drawing.

For example, in a case wherein data of various transmission rates, including data of relatively low rate such as voice and data of relatively high rate such as moving pictures, is transmitted, the circuit is constructed to variably control the diffusion bandwidth of the individual carriers according to the transmission rate. FIG. 15 shows an example of the changed pass bands of the filters; a state in which the pass band of a filter is changed from a band 201 to a band 202 having a larger bandwidth is shown. A band 203 indicates the pass band of another filter, bands 205-1, 205-2, . . .
, 205-n indicate channels (whose bandwidth is variable) assigned to the radio transmission apparatus, and bands 204-1, 204-2, . . . , 204-n and 206-1, 206-2, . . . , 206-n indicate channels assigned to apparatuses other than the above radio transmission apparatus. According to this embodiment, data transmission at rates ranging from relatively low, as in the case of voice, to relatively high, as in the case of moving pictures, can be coped with.

As described above, according to the present invention, since the signal generated from the transmission signal generator and passing through the amplifier and variable band-pass filter corresponding to one channel is only a signal of one carrier frequency, the signals of the respective carrier frequencies will not cause inter-modulation distortion in the amplifier. Further, since only the signal of one carrier frequency is permitted to pass through the filter, the adjacent channel leakage power is suppressed and inter-modulation distortion of the signal between the channels becomes difficult to occur. Therefore, the individual amplifiers can be operated with high efficiency. Further, since the amplifiers are separately disposed in a number equal to the number of channels, the heat radiation characteristic can be improved and a large-scale heat radiation structure becomes unnecessary.

Further, according to the present invention, interference due to inter-modulation with the signal of another carrier frequency can be prevented by effecting the control operation to select the bands corresponding to the carrier frequencies of the respective channels as the pass bands of the variable band-pass filters provided between the amplifiers and the power combiner. Therefore, it becomes possible to easily combine the signals of the respective channels in power by the combiner and attain power saving of the radio transmission apparatus.

Further, according to the present invention, since a filter having a resistance to withstand the power of a signal of only one channel can be used, a superconductive filter can be used as the filter of the radio transmission apparatus. Also, according to the present invention, data transmission at rates ranging from relatively low, as in the case of voice, to relatively high, as in the case of moving pictures, can be coped with by determining the bandwidth of the pass band of each variable band-pass filter according to the signal transmission rate. Further, according to the present invention, the operation efficiency of the refrigerator for cooling the superconductive filter can be improved, contributing to power saving of the radio transmission apparatus.

This invention is not limited to the above embodiments and can be variously modified without departing from the technical scope thereof.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate presently preferred embodiments of the present invention and, together with the general description given above and the detailed description of the preferred embodiments given below, serve to explain the principles of the present invention, in which:

FIG. 1 is a block diagram showing a conventional multicarrier radio transmission apparatus of the individual amplification system;

FIG. 2 is a block diagram showing a conventional multicarrier radio transmission apparatus of the collective amplification system;
FIG. 3 is a block diagram of a first embodiment of a multicarrier radio transmission apparatus according to the present invention;

FIG. 4 is a block diagram of a transmission signal generator of the first embodiment;

FIGS. 5A and 5B are a plan view and cross-sectional view showing one example of a variable band-pass filter of the first embodiment;

FIG. 6 is an equivalent circuit diagram of the filter shown in FIGS. 5A and 5B;

FIGS. 7A and 7B are a plan view and cross-sectional view showing a second example of the variable band-pass filter of the first embodiment;

FIGS. 8A and 8B are a plan view and cross-sectional view showing a third example of the variable band-pass filter of the first embodiment;

FIG. 9 is a diagram showing the operation for varying the pass band of the filter of the first embodiment;

FIG. 10 is a diagram for illustrating a reduction in the efficiency of the amplifier due to modulation distortion;

FIG. 11 is a diagram for illustrating a service condition of the channels;

FIG. 12 is a block diagram of a second embodiment of a multicarrier radio transmission apparatus according to the present invention;

FIG. 13 is a block diagram of a third embodiment of a multicarrier radio transmission apparatus according to the present invention;

FIG. 14 is a block diagram of a fourth embodiment of a multicarrier radio transmission apparatus according to the present invention; and

FIG. 15 is a diagram showing the operation for varying the pass band of a filter of a fifth embodiment of a multicarrier radio transmission apparatus according to the present invention.
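Finally, the monitor-driven refrigerator control described in the third embodiment can be summarized as a short sketch. The class name, gains and target temperature below are hypothetical, not from the patent; the patent specifies only that the refrigerators' abilities are varied based on the power monitors 72-1, . . . , 72-n and temperature monitors 74-1, . . . , 74-n, with the power reading compensating for the lag between heat generation and the temperature measurement.

```python
from dataclasses import dataclass

@dataclass
class ChannelMonitors:
    # Hypothetical readings for one channel's signal processing system.
    amp_power_w: float      # power monitor 72-i at the amplifier output
    filter_temp_k: float    # temperature monitor 74-i at the filter

TARGET_TEMP_K = 70.0   # assumed operating point for the superconductive filter
K_TEMP = 0.05          # proportional gain on temperature error (assumed)
K_POWER = 0.02         # feed-forward gain on through power (assumed)

def refrigerator_drive(m: ChannelMonitors) -> float:
    """Return a cooling drive level in [0, 1] for refrigerator 40-i.

    The temperature term corrects slow drift; the power term acts as a
    feed-forward input, raising cooling as soon as the channel's transmit
    power (and hence imminent filter heating) increases, before the
    temperature sensor can register the change.
    """
    temp_error = m.filter_temp_k - TARGET_TEMP_K
    drive = K_TEMP * temp_error + K_POWER * m.amp_power_w
    return min(1.0, max(0.0, drive))

# Unused channels contribute no heat: their amplifiers are switched off,
# so their refrigerators can idle at minimum drive.
print(refrigerator_drive(ChannelMonitors(amp_power_w=10.0, filter_temp_k=71.0)))
```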
The Chapter has a number of supports for CPA Candidates, such as:

Capstone 1 Mock Board Presentation

Chapter professionals volunteer to form a mock board/boards to provide interested groups with an opportunity to:
• practice presentation and Q&A skills, and
• receive feedback on their presentations in a setting that approximates the actual event.

Be prepared to bring any equipment necessary to make your presentation or a suitable alternative to a live PowerPoint presentation (e.g., printouts of the slides). Mock Board presentations will be held in July. A survey will be sent out in May/June to identify those who are interested. The personal information you share in the survey will be provided to the Chapter to arrange the presentations. Someone from the Chapter will reach out during the module, so please designate a group member for the Chapter to communicate with regarding the mock board(s). The Victoria/Southern Vancouver Island Chapter will use personal information gathered on this form so that a representative can make arrangements for this project. The chapter is part of CPABC and is separate from CPAWSB.

Networking Opportunities

The Chapter hosts many networking events throughout the year to encourage camaraderie amongst CPA members, students and candidates. The connections that you make at networking events can open doors for you in the future. We encourage all future CPAs to attend your Chapter networking events. Please visit the Chapter Calendar to view all upcoming events in your area.

Information about the Education Program

PEP Candidate Guides: The CPA Canada candidate guides for the Professional Education Program (CPA PEP) include information about the CPA PEP module components, computer requirements and candidate responsibilities.

CPA Way: Learn the methodical approach for addressing professional problems that is used throughout the CPA Professional Education Program (CPA PEP).

Module Resources: You can find sample cases and exam blueprints for each module on the following pages:
Core 1 – Learn more about the Core 1 module
Core 2 – Learn more about the Core 2 module
Assurance – Learn about the Assurance elective module
Finance – Learn about the Finance elective module
PM – Learn about the Performance Management elective module
Tax – Learn about the Taxation elective module
Capstone 1 – The CPA PEP Capstone 1 case relates to Day 1 of the Common Final Examination. The capstone modules are culminating courses; in them you demonstrate what you have learned over the course of the CPA PEP. Learn more about Capstone 1
Capstone 2 – N/A as this is a CFE prep module. See the CFE section below.

CFE Resources:
Sample CFE cases – More information on how to prepare for the Common Final Examination (CFE) by reviewing case examples for all three days of the CFE
Simulations and guides for the CFE – Following every CFE, CPA Canada publishes the CFE Board of Examiners' report, which provides feedback on candidates' performance and commentary from the Board of Examiners.
https://www.bccpa.ca/chapter-events/cpabc-victoria-southern-vancouver-island/resources/cpa-candidates/
DUTIES AND RESPONSIBILITIES

This position requires significant organizational capabilities. The scientist must be willing to learn and understand sample tracking software, communication tools, and regulatory rules. The Research Scientist will begin with specimen processing and lab maintenance and will progress to preparing specimens for high-throughput assays. Some research scientists perform molecular assays and run experiments.

Research Support:
- Receiving, recording, aliquoting and storing incoming specimens, including accessioning samples (10%)
- Processing specimens for assays and biobanking (35%)
- Interfacing with sample tracking software (10%)
- Lab maintenance and regulatory compliance activities (25%)
- Communicating with collaborators and management on progress (10%)
- Assisting with downstream nucleic acids processing (10%)

MINIMUM REQUIREMENT

B.S. degree in biology, molecular biology, cellular biology, genetics or a related discipline and three months or less of job-related experience. Equivalent education/experience will substitute for all minimum qualifications except when there are legal requirements, such as a license/certification/registration.

ADDITIONAL REQUIREMENTS
- Demonstrated experience following protocols in a wet-lab environment
- Experience in sample handling
- An outstanding academic and professional track record
- Analytical thinking with outstanding interpersonal and communication skills
- Must be highly disciplined and organized

To apply for this job please visit www.indeed.com.
https://www.openspecimen.org/job/research-scientist-engineer/?utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-8QmQiR69H_4E-_yxTtjP43lMrLeXG1R6nSVx3W2anXF9rDYp83Mw-NxdJ5IRxO434UG17V
Canyons are some of the most interesting geological formations on Earth. Besides being natural attractions to visit and enjoy some adventures, these formations play a huge role in science, as they hold lots of information about the earth's distant past. Many of these formations take millions of years to form, which makes them even more interesting to study. With many rivers flowing in different parts of the earth, countless canyons have formed throughout the earth's existence, and for those who wish to explore these natural formations, here are some epic canyons to visit around the world.

10 Verdon Gorge, France

When it comes to scenery, Verdon Gorge is probably the most scenic canyon in the world. The gorge is located in Southeastern France, and its most striking feature is the charming turquoise river flowing through the massive walls of tree-covered rocks. This river flows up to 15 miles through the canyon, and a scenic float through the water is a bucket-list adventure to experience while in France.

9 Gandikota Canyon, India

Gandikota is also known as the Grand Canyon of India, and for starters, it is the deepest canyon in India and also one of the deepest in the world. This canyon is the product of the river Pennar, which continues to flow through the Erramala hills, revealing new rocks. Besides the impressive sight of this canyon, the area also has an interesting ancient history, which makes it even more worthy of a visit.

8 Antelope Canyon, United States

Even though Antelope Canyon does not compare to the sheer size of the Grand Canyon, it makes up for size with unique beauty, which has made it the most photographed canyon in the world. The colorful and flowing shape that characterizes this slot canyon is the result of erosion, and this process still takes place, as the canyon still experiences flash floods. Sometimes these flash floods can be very dangerous and have been known to trap hikers on occasion, so one has to stay alert at all times when exploring the formation. It doesn't even have to rain in the canyon itself for conditions to become dangerous: rain falling 15 miles away or more can be enough. The light beams that radiate into the canyon during periods of high sunlight also make the canyon more beautiful, and many people wait for the summer months to witness this unique phenomenon.

7 Colca Canyon, Peru

Colca Canyon is one of the deepest canyons in the world, with a depth of 3,270 m. That impressive feature, along with the colorful vegetation in the valley, is perhaps part of the reason this canyon in Southern Peru remains one of the most visited attractions in the country. Colca Canyon features a length of approximately 70 kilometers, and despite being known for its geological magnificence and exciting hiking trails, it has a lot more that will interest history-loving travelers. The valley is said to have been inhabited since pre-Inca times, and one can visit to explore these communities and their traditions.

6 Todra Gorge, Morocco

Morocco is not just about dunes and vast deserts. The country also hosts some impressive canyons, and one worth checking out is the Todra Gorge, located in the eastern Atlas Mountains. The canyon was carved by the Todra River, and the dramatic scenery and towering walls, which are particularly striking in the last section, make it worth a visit.
Perhaps the best way to see this canyon is by hiking, but the scenic drive through the geological formation is also a great way to take in the beauty.

5 King's Canyon, Australia

Australia has long been a place to see stunning natural wonders and otherworldly landscapes. While in this country, one can check out the picturesque Uluru after exploring the spectacular King's Canyon. The canyon is located in Watarrka National Park, and there are lots of hikes across the desert region that offer opportunities for outdoor enthusiasts to witness the unique red-rock landscape of the canyon. Helicopter rides, camel tours, and 4WD vehicle rides are great ways to explore this canyon.

4 Fjadrargljufur Canyon, Iceland

Iceland is no stranger to rock formations, and canyons are abundant in the country. The Fjadrargljufur Canyon, however, is one of the most impressive. Located in South Iceland, this canyon is characterized by a picturesque carpet of green grasses and views of a scenic river flowing through the rock formation. The canyon is just 2 km long and 100 m deep, although it makes up for its size with the scenery. When in Iceland, also check out some of the country's scenic waterfalls and its impressive volcanoes, and snorkel between two continents at the Silfra Fissure to witness more interesting natural attractions in the country. Since it was featured in a Justin Bieber video, this canyon has seen an overflow of visitors, which has caused a lot of damage to the flora and fauna of the area. For this reason, the canyon is now closed at certain times of the year to allow it to heal, so check whether it is open before visiting.

3 Waimea Canyon, United States

With a length of 14 miles and a depth of approximately 3,600 feet, Waimea Canyon is one of the most impressive canyons in the United States. Located on the island of Kauai in Hawaii, this canyon is also known as the Grand Canyon of the Pacific, and it is characterized by waterfalls, streams, and lots of greenery. Waimea Canyon is best seen from the road, and some trails lead to impressive lookouts.

2 Grand Canyon, United States

The Grand Canyon is, without a doubt, the earth's most impressive canyon. Due to its immense size and popularity, it is now almost synonymous with the word "canyon" and is the first destination that comes to mind when the word is mentioned. With a length of 277 miles, a width of up to 18 miles, and a depth of approximately one mile, it is perhaps the best place on earth to witness the earth's geological history.

1 The Copper Canyon, Mexico

The Copper Canyon perhaps got its name from the unique copper-green color of its walls or from the copper mines in the region. Located in the Mexican state of Chihuahua, the canyon is a network of six distinct canyons, which together make it one of the biggest canyon systems in the world, even bigger than the Grand Canyon. The abundance of adventures at this canyon also makes it worth a visit. At Copper Canyon, one can experience hiking, train rides, and ziplining. The canyon also features waterfalls and some interesting archaeological sites.
https://www.thetravel.com/most-epic-canyons-around-the-world/
If you plan to implement Microsoft Dynamics 365 for your business, you have to prepare for a complex task. A structured project management plan has to be in place for the deployment to happen smoothly, and the rollout will also call for changes from employees. To help you complete the implementation efficiently and in the least possible time, we list the following best practices:

Formulate a Strategic Plan

The first step of the implementation is to formulate a strategic plan, so you should involve subject matter experts during the planning process. The starting point should be the planning of the infrastructure. Remember that Microsoft Dynamics 365 is cloud-based, so updates occur automatically, regularly, and in real time. Prepare for the implementation strategically and consider all implementation instances to avoid any delays.

Examine Current Systems and Tools

Next, you should assess your existing systems and tools to ensure that they are up to date enough to handle the Dynamics 365 implementation. Remember that your tools and systems should not be obsolete, as obsolete systems can make the implementation fail. As such, you should examine all your existing systems, databases, and applications. You can also regard the implementation as an opportunity to standardize and centralize your IT systems, because that is a prerequisite for the implementation. You should assess your current infrastructure and get detailed insights into its capability to handle the implementation effectively. Doing so will prevent delays and the possibility of a failed implementation.

Understand Every Functionality

After you examine your existing systems and tools for their readiness, you should understand all functionalities before adding more modules or features to your Microsoft Dynamics 365 solution. Microsoft constantly updates Dynamics 365 with new features and add-ons, so you will not likely require all the latest features for your business, and only a few may fit your business's needs perfectly. Therefore, you should study the current features of Dynamics 365 to understand what each one offers; this will help you choose custom components that suit your business needs.

Build Cross-functional Teams

A cross-functional team is essential to carry out the implementation. Therefore, ensure that you do not deploy a single team to implement Microsoft Dynamics 365 for your business. Instead, you should delegate the responsibility of the implementation to cross-functional teams that can find and recognize inefficiencies, bottlenecks, time gaps, and problems with quality control. It is an excellent practice to deploy a team of professionals to work on specific process adjustments and solutions to handle unexpected problems. It can also ensure the accomplishment of each job on schedule and within the budget.

Involve Your IT Team in Decision-Making

You can choose one of the Microsoft Dynamics implementation partners to implement Dynamics 365 for your business. But you should not leave the entire implementation to the consultant, as that is not a good practice. Instead, you should have your IT team partner with the implementation consultant. Doing so will help ensure the right tools and methods are used. Your IT team should decide on the critical matters related to the deployment and integrated systems. This will ensure that your IT team can handle Dynamics 365 CRM independently after the implementation process.
Documentation

Finally, you should document all the relevant features and use cases of the Microsoft Dynamics implementation. Documentation helps prevent failure when an employee does not know the proper usage, and it can be used to train your employees on correct usage through explanatory videos and guides.

Conclusion

Remember, many businesses fail to implement Microsoft Dynamics 365 correctly due to poor planning. But you can avoid such failures by adopting a strategic approach to the implementation.
https://visitmagazines.com/microsoft-dynamics-365-implementation-the-best-practices/
How is SwipeSimple different from other merchant services providers? Most point-of-sale providers offer integrated payment processing or allow you to select your own third-party processor. Since SwipeSimple doesn’t provide payment processing and its systems are only available through its partners, your processing options are limited and there isn’t a clear pricing structure. Keep reading to learn more about its features or skip ahead to compare POS systems you can purchase directly.
https://www.finder.com/point-of-sale/swipesimple-review
NEW YORK (Reuters) - After closing stores around the world to curb the spread of the coronavirus, retailers are now telling some vendors to immediately cancel orders. On Thursday, discount store operator Ross Stores Inc (ROST.O) sent a letter to its vendors, notifying them it would cancel all merchandise purchase orders through June 18 due to the impact the novel coronavirus has had on its business. “This is the first time in our history that we are unable to deliver exceptional merchandise to our customers,” the memo, which was reviewed by Reuters, reads. The Dublin, California-based discount store operator said it would also extend payment terms on all existing merchandise payables by 90 days. A company spokesperson was not immediately available for comment. Paul Rotstein, President and Chief Executive of Gold Medal International of New York City, which supplies accessories like socks and gloves to Ross Stores, said other retailers are making similar moves. “We’ve had pretty much 100% cancellation from all major retailers,” he said, naming Macy’s Inc (M.N), Nordstrom Inc (JWN.N), TJX Companies Inc (TJX.N), and Burlington Coat Factory (BCF.UL) among his customers who have pressed pause on orders. Macy’s said on its website on Wednesday that the heavy toll from the coronavirus is forcing it to freeze both hiring and spending, reducing receipts and extending the terms for payment of all goods and services. The three other retailers were not available for comment. “I’ve been doing this 38 years - this doesn’t compare to anything I’ve experienced,” Rotstein said. “Just the loss of income, I mean, I think we’re looking at no income for at least 8 weeks.” More than 470,800 people have been infected by the coronavirus across the world and over 21,200 have died, according to a Reuters tally. Earlier in the day, the U.S. government reported that the number of Americans filing claims for unemployment benefits surged to a record of more than 3 million last week, as strict measures to contain the coronavirus pandemic brought the country to a sudden halt. Rotstein said that some retailers are asking him to “just pause maybe another week or two to try and figure out what the receipts are going to look like in the third and fourth quarter.” Last week, many government officials ordered all non-essential businesses to close in hopes of reining in the spread of the virus, forcing retailers to dangle steep online discounts on clothing, shoes and accessories. Some are ceasing online operations, too. Shoppers searching TJX’s tjmaxx.com, marshalls.com and sierra.com are directed to a statement from Ernie Herrman, CEO and President of The TJX Companies, alerting them that the retailer’s thousands of store operations have been closed globally to reduce the spread of the virus. “The company is also temporarily closing its online businesses tjmaxx.com, marshalls.com and sierra.com during this time, as well as its distribution and fulfillment centers and offices, with associates working remotely when they can,” Herrman said. Yoox Net-a-Porter, which sells designer clothes, shoes, bags, and accessories online, notified customers visiting its U.S.
site that it closed its warehouse “in line with local government guidelines, and for the health and safety of our community.” UK shoe retailer Schuh said on Thursday it had decided to close its online site as well as its stores to keep its staff, customers and community safe. For some, demand online “has picked up, but it hasn’t picked up nearly as much as the loss of the brick-and-mortar” sales, said Steve Sadove, a senior adviser for Mastercard and former CEO and chairman of Saks Fifth Avenue. Europe’s biggest pure online fashion retailer Zalando said on Wednesday that restrictions on public life were hitting demand. A spokeswoman said demand for athleisure and gear for yoga and running has risen as people are forced to work – and exercise – at home. Reporting by Melissa Fares in New York; additional reporting by Emma Thomasson in Berlin; editing by Edward Tobin
Hackathons are popular, but are they useful? In an essay titled “Why Hackathons Are Bad for Innovation,” a consultant and an MIT lecturer made a point about these brainstorming sessions, and about creativity in general, that’s often overlooked: “Innovation is usually a lurching journey of discovery and problem-solving. As a result, it’s an iterative, often slow-moving process that requires patience and discipline.” The precise mechanics of creativity are not well understood. But the circumstances in which people are most creative are fairly straightforward. We’ve written before about the vital and intertwined roles that hiring, setting, and culture play in determining an organization’s capacity for innovation. The common thread running through those elements is serendipity — promoting the likelihood that people with different backgrounds and perspectives will share ideas and collaborate. (Slate recently reported on the chance meeting and “failed experiment” that led to a breakthrough treatment for burn victims, for which MIT Professor Ioannis Yannas was recently inducted into the National Inventors Hall of Fame.) But ideas are just starting points. It’s one thing to imagine an invention. It’s quite another to clear the dozens, hundreds, perhaps thousands of obstacles that stand between idea and viable product. “Creative thinking requires our brains to make connections between seemingly unrelated ideas,” writes James Clear in “Why Creativity Is a Process, Not an Event.” Fittingly, given that headline, he goes on to explain: One of the most critical components [of personal creativity] is how you view your talents internally. More specifically, your creative skills are largely determined by whether you approach the creative process with a fixed mindset or a growth mindset. The differences between these two mindsets are described in detail in Carol Dweck’s fantastic book, Mindset: The New Psychology of Success. The basic idea is that when we use a fixed mindset we approach tasks as if our talents and abilities are fixed and unchanging. In a growth mindset, however, we believe that our abilities can be improved with effort and practice. It’s impossible to overstate the importance of persistence in innovation. This is one reason why we support bringing back shop classes in high schools — and not just in robotics and other headline-grabbing tech. Working with wood and metal is hard, but for those willing to struggle, the results will improve, and that’s the important lesson. No one truly learns this in a lecture; they learn it by experiencing it themselves. (Read more about growth mindset here.) But even mindset, while important, is still just the halfway point between forming ideas and acting on them. The last step, work, is the one that determines whether you can innovate. And it’s not enough to work when inspiration strikes — in fact that’s a terrible habit. In Manage Your Day-to-Day: Build Your Routine, Find Your Focus, and Sharpen Your Creative Mind, a collection of essays on productivity, writer Gretchen Rubin explains the often misunderstood relationship between inspiration and effort: You’re much more likely to spot surprising relationships and to see fresh connections among ideas, if your mind is constantly humming with issues related to your work. … Creativity arises from a constant churn of ideas, and one of the easiest ways to encourage that fertile froth is to keep your mind engaged with your project. When you work regularly, inspiration strikes regularly.
We like the term “creative collisions” because it hints at the messy nature of innovation. Collisions are violent. Objects that collide often break and need to be put back together. And it’s the same with ideas; creative collisions are not a goal but a means to an end. You can’t just weld a snowblower onto a lawnmower and expect either to work properly (or anyone to buy it). But with patience and persistence, you can build just such a hybrid, and keep on building it until you’ve devised something genuinely unique.
https://www.nottinghamspirk.com/musings/creative-collisions-the-spark-of-inspiration-for-vertical-innovation
Sports Minister Carál Ní Chuilín has officially opened the 2011 CPISRA Boccia World Cup at the University of Ulster's Jordanstown campus. The tournament is part of the qualifying rounds for the London 2012 Paralympic Games. Boccia is a target ball sport belonging to the same family as bowls. It is a Paralympic sport for athletes with severe disabilities affecting motor skills, and one of only three Paralympic sports without an Olympic counterpart. Speaking at Saturday night's event, Minister Ní Chuilín said: “This is great news for sport and it is an honour for the north of Ireland to host the 2011 Boccia World Cup. Approximately 500 athletes, coaches and officials from 33 countries will compete for qualifying positions in the 2012 Paralympic Games. “My Department, through Sport NI, has been working with the governing bodies of sport and local councils for some time, targeting teams and athletes to visit the north of Ireland for events and training in the run-up to 2012. Many of these discussions are now bearing fruit. In December we will host the International Badminton Championships, which is a pre-qualifying event for the 2012 Olympic Games, and of course the Boccia World Cup which we are opening tonight. We have also hosted several positive visits from a range of London 2012 Olympic and Paralympic committees, including the Chinese gymnastics team and the Indonesian badminton squad.” This is the first time the event has been held in Ireland or the UK, and it is estimated the event will result in a cash injection of over £1 million into the local economy. The Boccia World Cup will be held from 20 – 26 August 2011 at the University of Ulster's Jordanstown campus. With free entry for spectators, the event promises to be great entertainment and is also a fantastic opportunity to raise the profile of Boccia and increase participation in the sport, particularly in the run-up to London 2012. Professor Richard Barnett, Vice-Chancellor of the University of Ulster, said: “The University of Ulster extends a warm welcome to all of the participants of the CPISRA 2011 Boccia World Cup. “The University is at the forefront of sporting endeavour and we are committed to making this the most successful Boccia World Cup yet. “With qualifying places at stake for the 2012 Paralympic Games in London, the competition is set to be fantastic and we are looking forward to some excellent matches from the world’s elite Boccia talent. “This wonderful event will leave a lasting legacy for sport for those with disabilities by serving as a catalyst for the development of the sport in Northern Ireland and the involvement of people of all abilities in sport in general. “Bringing the Boccia World Cup to the University of Ulster shows the lengths that we, along with our partner organisations, have gone to, to bring the spirit of the Olympic and Paralympic Games to Northern Ireland. “I would like to extend my thanks to those people who have given up their time to commit themselves to delivering a world-class event at a world-class venue.” Disability Sport NI's Kevin O'Neill said: "At Disability Sport NI we have been very careful to plan a genuine legacy from the event. Over the last ten years we have worked hard at developing the sport of Boccia in Northern Ireland and as a result the sport is now widely played at a recreational level throughout the country by people with physical, sensory and learning disabilities.
However, the next stage is to develop a squad system and performance pathway for young people who show some talent in the sport, and we are confident we can achieve this as a direct legacy of the Boccia World Cup."
https://www.ulster.ac.uk/news/2011/august/sports-minister-opens-boccia-world-cup
The neural architecture in the auditory cortex — the part of the brain that processes sound — is virtually identical in profoundly deaf and hearing people, a new study has found. The study raises a host of new questions about the role of experience in processing sensory information, and could point the way toward potential new avenues for intervention in deafness. The study is described in a June 18 paper published in Scientific Reports. The paper was written by Ella Striem-Amit, a postdoctoral researcher in Alfonso Caramazza’s Cognitive Neuropsychology Laboratory at Harvard, Mario Belledonne from Harvard, Jorge Almeida from the University of Coimbra, and Quanjing Chen, Yuxing Fang, Zaizhu Han, and Yanchao Bi from Beijing Normal University. “One reason this is interesting is because we don’t know what causes the brain to organize the way it does,” said Striem-Amit, the lead author. “How important is each person’s experience for their brain development? In audition, a lot is known about [how it works] in hearing people, and in animals … but we don’t know whether the same organization is retained in congenitally deaf people.” Those similarities between deaf and hearing brain architecture, Striem-Amit said, suggest that the organization of the auditory cortex doesn’t critically depend on experience, but is likely based on innate factors. So in a person who is born deaf, the brain is still organized in the same manner. But that’s not to suggest experience plays no role in processing sensory information. Evidence from other studies has shown that cochlear implants are far more successful when implanted in toddlers and young children, Striem-Amit said, suggesting that without sensory input during key periods of brain plasticity in early life, the brain may not process information appropriately. To understand the organization of the auditory cortex, Striem-Amit and her collaborators first obtained what are called “tonotopic” maps showing how the auditory cortex responds to various tones. To do that, they placed volunteers in an MRI scanner and played different tones — some high frequency, some low frequency — and tracked which regions in the auditory cortex were activated. They also asked groups of hearing and deaf subjects to simply relax in the scanner, and tracked their brain activity over several minutes. This allowed the researchers to map which areas were functionally connected — essentially those that showed similar, correlated patterns of activation — to each other. They then used the areas showing frequency preference in the tonotopic maps to study the functional connectivity profiles related to tone preference in the hearing and congenitally deaf groups and found them to be virtually identical. “There is a balance between change and typical organization in the auditory cortex of the deaf,” said Bi, the senior researcher, “but even when the auditory cortex shows plasticity to processing vision, its typical auditory organization can still be found.” The study raises a host of questions that have yet to be answered. “We know the architecture is in place — does it serve a function?” Striem-Amit said. “We know, for example, that the auditory cortex of the deaf is also active when they view sign language and other visual information. The question is: What do these regions do in the deaf?
Are they actually processing something similar to what they process in hearing people, only through vision?” In addition to studies of deaf animals, the researchers’ previous studies of people born blind suggest clues to the puzzle. In the blind, the topographical architecture of the visual cortex (the visual parallel of the tonotopic map, called “retinotopic”) is like that in the sighted. Importantly, beyond topographic organization, regions of the visual cortex show specialization in processing certain categories of objects in sighted individuals; the same specialization appears in the congenitally blind when those regions are stimulated through other senses. For example, the blind reading Braille, or letters delivered through sound, process that information in the same area used by sighted subjects in processing visual letters. “The principle that much of the brain’s organization develops largely regardless of experience is established in blindness,” Striem-Amit said. “Perhaps the same principle applies also to deafness.”
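For readers unfamiliar with the method, the functional-connectivity analysis described above reduces to correlating activity time courses between a seed region and other regions recorded during rest. The toy Python sketch below shows only that core computation on synthetic data; the region names and coupling strengths are invented, and a real analysis would involve fMRI acquisition and preprocessing steps not shown here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_timepoints = 300

# Synthetic resting-state time course for a tone-preferring "seed" region.
seed = rng.standard_normal(n_timepoints)

# Other regions: one artificially coupled to the seed, two independent.
regions = {
    "coupled_patch": 0.8 * seed + 0.6 * rng.standard_normal(n_timepoints),
    "independent_patch": rng.standard_normal(n_timepoints),
    "control_area": rng.standard_normal(n_timepoints),
}

# Seed-based functional connectivity: Pearson correlation with each region.
for name, ts in regions.items():
    r = np.corrcoef(seed, ts)[0, 1]
    print(f"seed <-> {name}: r = {r:+.2f}")
```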
Introduction
============

Chronic or noncommunicable diseases are a major public health challenge and result in 38 million deaths worldwide each year ([@R1]). The proportion of mortality attributed to chronic conditions worldwide is expected to increase from 59% in 2002 to 69% in 2030 ([@R2]). In the United States, the prevalence of chronic conditions is rising, and by 2020 an estimated 157 million Americans will have at least 1 chronic condition ([@R3],[@R4]). In recent years, the number of people living with multiple chronic conditions (MCC) has increased ([@R4]). MCC is defined by the US Department of Health and Human Services as the presence of 2 or more chronic conditions ([@R5]). In the United States, the prevalence of MCC increased from 21.8% in 2001 to 26% in 2010 ([@R3],[@R6]), and an estimated 71% of all health care spending is allocated to the care of people with MCC ([@R7]). In recent years, the number of studies on the management and treatment of people with MCC has also increased ([@R8]); people with MCC constitute a growing health care and financial burden on the health system ([@R9]). MCC is more often present among the older population, among those with lower levels of education, among people living alone or in a home for the elderly ([@R10]), and among those living in deprived areas ([@R11]).

The primary objectives of our study were to describe the prevalence and correlates of MCC among the Israeli population overall and to compare MCC in the nation's 2 main population groups (Jewish and Arab). A secondary objective was to examine time trends in MCC.

Methods
=======

We used data from the Israeli National Health Interview Survey (INHIS), a cross-sectional population-based telephone survey conducted periodically by the Israel Center for Disease Control. The main source of data for our analysis was the most recent INHIS survey, INHIS-III, conducted in 2014--2015 on a random representative sample of 4,325 Israeli adults. For the trend analysis, we used data from the previous 2 surveys, INHIS-I (2003--2004) and INHIS-II (2007--2010), which used the same methodology in data collection and sampling as INHIS-III. Data collection and procedures of INHIS-I are detailed elsewhere ([@R12]). The survey questionnaire is based on the European Health Interview Survey framework initiated by the World Health Organization (WHO) Regional Office for Europe in 2000 ([@R13]).

For the INHIS-III, a random sample of telephone numbers of 19,692 Jewish households and 10,799 Arab households was extracted from a computerized list of all household landlines in Israel. Inclusion criteria included households with residents aged 21 years or older who were able to communicate in Hebrew in the Jewish sample or Arabic in the Arab sample. In 4,325 households an eligible resident was contacted and interviewed. Oral informed consent was obtained from each participant.

Definitions
-----------

Chronic conditions were assessed by asking the participant whether he or she had ever received a diagnosis from a physician for any of the following 10 conditions: asthma, arthritis, cancer, diabetes, dyslipidemia, heart attack, hypertension, migraine, osteoporosis, or thyroid disease. Persons who responded yes to having a physician-diagnosed chronic condition were considered to have a chronic condition. MCC was defined as having 2 or more self-reported physician-diagnosed chronic conditions ([@R3],[@R4],[@R6]).
The questionnaire included questions on sociodemographic characteristics (age, sex, population group, marital status, monthly household income [in US dollars], and number of years of schooling) and smoking status (never smoked, past smoker, or current smoker). Body mass index (BMI) was calculated by dividing reported weight by the square of reported height (kg/m^2^) and categorized, according to WHO guidelines, as underweight, normal weight, overweight, and obese ([@R14]). Physical activity was assessed according to WHO guidelines for physical activity, which recommend at least 150 minutes of moderate-intensity aerobic physical activity per week ([@R15]).

Statistical analysis
--------------------

For the descriptive analysis, we calculated prevalence and 95% confidence intervals (CIs) for chronic conditions and MCC. We calculated the weighted prevalence of chronic conditions; weights were calculated from the general population for each year of the survey: INHIS-I (2004), INHIS-II (2010), and INHIS-III (2014). We conducted bivariate analysis to explore associations between MCC and sociodemographic characteristics, household income, smoking status, and BMI; the χ^2^ test was used for categorical variables. We conducted multivariate analysis using a weighted logistic regression model and calculated adjusted prevalence rate ratios (PRRs) and 95% CIs. We examined time trends in MCC by comparing the age-adjusted prevalence of MCC in INHIS-III with the age-adjusted prevalence of MCC in INHIS-I and INHIS-II ([@R16],[@R17]). The population of Israel in 2010 was used as the standard population to estimate the age-adjusted rates of MCC. *P* for trend was calculated by using the Cochran--Armitage trend test for proportions. We did not include thyroid disease in the trend analysis because it was not included in INHIS-I. Percentage change was calculated as the difference between INHIS-III and INHIS-I rates, divided by the INHIS-I rate and multiplied by 100. The number of chronic conditions was grouped into 4 categories: none, 1 chronic condition, 2 or 3 chronic conditions, and 4 or more chronic conditions. Finally, we conducted a sensitivity analysis using the following 5 clusters to explore prevalence and trends: 1) asthma or migraine; 2) hypertension, dyslipidemia, heart attack, or diabetes; 3) arthritis or osteoporosis; 4) cancer; and 5) thyroid disease. We conducted all statistical analyses using SAS version 9.1 (SAS Institute, Inc). A *P* value of <.05 was considered significant.

Results
=======

In INHIS-III, 69.3% of respondents were Jewish and 30.7% were Arab. The average age was 47.2 years (standard deviation, 16.3 y), with a median age of 53; most respondents were married or living with a partner (80.7%); 50.4% were men; 55.5% had completed more than 12 years of schooling; 37.6% reported a monthly household income of $2,000 or less; 19.6% were current smokers, 22.1% were past smokers, and 58.3% had never smoked; 21% had a BMI of 30.0 or more; and 33.8% reported at least 150 minutes per week of physical activity.

Prevalence and correlates of chronic conditions and MCC
-------------------------------------------------------

In INHIS-III, 54.3% (95% CI, 52.3%--56.3%) of respondents reported at least 1 chronic condition. The prevalence of MCC was 27.3% (95% CI, 25.7%--28.8%) (Table 1).
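As an illustrative aside, descriptive estimates of this kind can be computed along the following lines. The paper used SAS 9.1; the Python sketch below is a loose translation run on synthetic data, and the Kish effective-sample-size correction used for the confidence interval is an assumption on our part, since the survey's exact variance estimator is not described.

```python
import numpy as np

def weighted_prevalence(flags, weights):
    """Weighted prevalence with a normal-approximation 95% CI.

    flags   : 0/1 per respondent, 1 if the condition is reported
    weights : survey weights scaling the sample to the general population
    """
    flags = np.asarray(flags, dtype=float)
    w = np.asarray(weights, dtype=float)
    p = np.sum(w * flags) / np.sum(w)
    # Kish effective sample size keeps the CI honest under unequal weights.
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2)
    se = np.sqrt(p * (1.0 - p) / n_eff)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Synthetic survey: 4,325 respondents, roughly 27% with >=2 conditions.
rng = np.random.default_rng(0)
mcc = rng.random(4325) < 0.273
weights = rng.uniform(0.5, 2.0, size=4325)
p, (lo, hi) = weighted_prevalence(mcc, weights)
print(f"MCC prevalence: {p:.1%} (95% CI, {lo:.1%}-{hi:.1%})")
```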
In the bivariate analysis, the prevalence of MCC was significantly associated with older age, female sex, being Jewish, having a monthly household income of $2,000 or less or of $2,001 to $3,000, having 12 years of schooling or fewer, current or past smoking, and overweight or obesity (Table 1). After adjusting for age, prevalence rates of MCC were higher among the Arab population than among the Jewish population in INHIS-I, INHIS-II, and INHIS-III (Figure).

###### Table 1. Prevalence of Multiple Chronic Conditions (≥2 Chronic Conditions) Among Israelis Aged ≥21 Years, by Selected Demographic Characteristics, Israeli National Health Interview Survey, 2014--2015

| Characteristic | No. (Weighted %) [95% CI] | *P* Value^a^ |
|---|---|---|
| **Total** | 1,579 (27.3) [25.7--28.8] | --- |
| **Age, y** | | |
| 21--34 | 40 (8.3) [5.6--11.0] | <.001 |
| 35--49 | 234 (16.2) [14.1--18.2] | |
| 50--64 | 544 (40.6) [37.7--43.5] | |
| ≥65 | 761 (64.1) [61.3--66.9] | |
| **Sex** | | |
| Male | 726 (23.2) [21.2--41.2] | <.001 |
| Female | 853 (31.2) [28.9--33.5] | |
| **Population group** | | |
| Jewish | 1,127 (27.9) [26.1--29.7] | .02 |
| Arab | 452 (24.4) [22.7--26.7] | |
| **Monthly household income, US$** | | |
| ≤2,000 | 243 (41.3) [35.4--47.3] | <.001 |
| 2,001--3,000 | 744 (27.9) [25.7--30.3] | |
| 3,001--4,000 | 159 (21.0) [17.6--24.4] | |
| >4,000 | 191 (23.7) [20.1--27.4] | |
| **Years of schooling** | | |
| >12 | 751 (23.5) [21.6--25.4] | <.001 |
| ≤12 | 801 (33.3) [30.8--35.8] | |
| **Marital status** | | |
| Married or living with a partner | 1,263 (27.4) [25.7--29.1] | .32 |
| Unmarried | 316 (27.2) [23.7--30.6] | |
| **Smoking status** | | |
| Never | 829 (23.9) [22.0--25.7] | .001 |
| Current | 276 (26.0) [22.5--29.5] | |
| Past | 454 (39.4) [35.8--42.9] | |
| **BMI^b^** | | |
| Underweight (<18.5) | 10 (16.5) [4.0--28.9] | <.001 |
| Normal weight (18.5 to <25.0) | 377 (18.1) [15.9--20.2] | |
| Overweight (25.0 to ≤29.9) | 627 (30.9) [28.4--33.4] | |
| Obese (≥30.0) | 422 (42.7) [38.6--46.9] | |
| **Physical activity^c^** | | |
| <150 min per week | 1,050 (27.3) [25.4--29.1] | .72 |
| ≥150 min per week | 529 (27.4) [24.9--29.9] | |

Abbreviations: ---, not applicable; BMI, body mass index; CI, confidence interval.
^a^ Determined by weighted bivariate analysis.
^b^ Calculated by dividing reported weight by the square of reported height (kg/m^2^) ([@R14]).
^c^ Physical activity guidelines of the World Health Organization ([@R15]).

Figure (PCD-14-E64s01). Age-adjusted prevalence of chronic conditions in the A) Jewish population and B) Arab population in Israel, by number of chronic conditions, Israeli National Health Interview Survey, 2003--2004, 2007--2010, and 2014--2015. *P* for trend was <.05 for all chronic conditions and for multiple chronic conditions (MCC). MCC was defined as 2 or more chronic conditions. The underlying values are:

| Survey Year | None, % | 1 Condition, % | 2 or 3 Conditions, % | ≥4 Conditions, % | MCC (≥2 Conditions), % |
|---|---|---|---|---|---|
| **Jewish population** | | | | | |
| 2003--2004 | 59.1 | 23.7 | 14.8 | 2.2 | 17.0 |
| 2007--2010 | 54.5 | 24.4 | 17.7 | 3.0 | 20.6 |
| 2014--2015 | 48.8 | 27.8 | 19.5 | 3.9 | 23.5 |
| **Arab population** | | | | | |
| 2003--2004 | 60.6 | 17.8 | 17.0 | 4.5 | 21.5 |
| 2007--2010 | 53.4 | 21.9 | 20.3 | 4.2 | 24.6 |
| 2014--2015 | 46.5 | 25.7 | 20.4 | 7.4 | 27.8 |

In the multivariate logistic regression, MCC was associated with older age, female sex, a monthly household income of $3,000 or less, current or past smoking, and overweight or obesity. The risk for MCC was 17 times higher among respondents aged 65 or older than among respondents aged 21 to 34.
It was higher among women (adjusted PRR, 1.6) than among men, among respondents with a monthly household income of $2,000 or less (adjusted PRR, 1.7) or of $2,001 to $3,000 (adjusted PRR, 1.3) than among respondents with a monthly household income of more than $4,000, among current smokers (adjusted PRR, 1.4) and past smokers (adjusted PRR, 1.3) than among nonsmokers, and among overweight (adjusted PRR, 1.9) and obese (adjusted PRR, 3.0) respondents than among normal-weight respondents (Table 2).

###### Table 2. Factors Associated With Multiple Chronic Conditions (≥2 Chronic Conditions) Among Israelis Aged ≥21 Years, Israeli National Health Interview Survey, 2014--2015^a^

| Characteristic | Adjusted Prevalence Rate Ratio (95% CI) | *P* Value |
|---|---|---|
| **Age, y** | | |
| 21--34 | 1 [Reference] | |
| 35--49 | 2.1 (1.6--2.8) | <.001 |
| 50--64 | 6.5 (4.9--8.7) | <.001 |
| ≥65 | 17.0 (12.6--23.1) | <.001 |
| **Sex** | | |
| Male | 1 [Reference] | |
| Female | 1.6 (1.3--2.0) | <.001 |
| **Population group** | | |
| Jewish | 1 [Reference] | |
| Arab | 1.0 (0.8--1.3) | .72 |
| **Monthly household income, US$** | | |
| ≤2,000 | 1.7 (1.2--2.5) | .005 |
| 2,001--3,000 | 1.3 (1.0--1.7) | .04 |
| 3,001--4,000 | 1.0 (0.7--1.3) | .83 |
| >4,000 | 1 [Reference] | |
| **Years of schooling** | | |
| <12 | 1 [Reference] | |
| ≥12 | 1.0 (0.8--1.2) | .59 |
| **Marital status** | | |
| Married or living with a partner | 1 [Reference] | |
| Unmarried | 1.1 (0.9--1.4) | .22 |
| **Smoking status** | | |
| Never | 1 [Reference] | |
| Current | 1.4 (1.1--1.8) | <.001 |
| Past | 1.3 (1.0--1.7) | .01 |
| **BMI^b^** | | |
| Normal weight (18.5 to <25.0) | 1 [Reference] | |
| Underweight (<18.5) | 1.0 (0.5--1.9) | .91 |
| Overweight (25.0 to ≤29.9) | 1.9 (1.5--2.2) | <.001 |
| Obese (≥30.0) | 3.0 (2.4--3.9) | <.001 |
| **Physical activity^c^** | | |
| <150 min per week | 0.9 (0.7--1.1) | .21 |
| ≥150 min per week | 1 [Reference] | |

Abbreviations: BMI, body mass index; CI, confidence interval.
^a^ Determined by weighted multivariate logistic regression.
^b^ Calculated by dividing reported weight by the square of reported height (kg/m^2^) ([@R14]).
^c^ Physical activity guidelines of the World Health Organization ([@R15]).

Most prevalent chronic conditions and combinations (dyads and triads)
---------------------------------------------------------------------

In 2014, the most prevalent chronic conditions were dyslipidemia (29.6%; 95% CI, 28.3%--31.4%), hypertension (20.6%; 95% CI, 19.3%--21.9%), thyroid disease (8.9%; 95% CI, 7.9%--9.9%), migraine (8.8%; 95% CI, 7.6%--9.9%), diabetes (9.3%; 95% CI, 8.4%--10.1%), asthma (7.2%; 95% CI, 6.5%--8.5%), and arthritis (5.5%; 95% CI, 4.8%--6.2%). Our sensitivity analysis indicated that the most prevalent disease cluster was hypertension, dyslipidemia, heart attack, or diabetes (40.5%; 95% CI, 38.7%--42.3%). The most prevalent dyad was dyslipidemia and hypertension (24.0%; 95% CI, 20.7%--27.3%) (Table 3), and the most prevalent triad was dyslipidemia, hypertension, and diabetes (17.9%; 95% CI, 13.9%--21.9%). In the analysis of dyads by sex, the most prevalent dyad was dyslipidemia and hypertension for both men (31.8%; 95% CI, 26.5%--37.2%) and women (17.5%; 95% CI, 13.5%--21.5%). The most prevalent triad was dyslipidemia, hypertension, and diabetes for both men (29.5%; 95% CI, 21.7%--37.2%) and women (10.9%; 95% CI, 6.7%--15.0%).
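To make the multivariate model in Table 2 concrete, the sketch below estimates adjusted prevalence ratios on synthetic data. The paper reports PRRs from a weighted logistic regression without giving estimation details, so this is an approximation rather than a reproduction: it uses a log-link Poisson GLM with robust standard errors, a common alternative for obtaining prevalence ratios directly, and all variable names and effect sizes are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 4000
age65 = (rng.random(n) < 0.20).astype(float)   # indicator: aged >= 65
female = (rng.random(n) < 0.50).astype(float)
obese = (rng.random(n) < 0.21).astype(float)

# Outcome simulated with built-in associations loosely echoing Table 2.
logit = -2.2 + 2.8 * age65 + 0.5 * female + 1.1 * obese
mcc = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
weights = rng.uniform(0.5, 2.0, size=n)        # stand-in survey weights

X = sm.add_constant(np.column_stack([age65, female, obese]))
model = sm.GLM(mcc, X, family=sm.families.Poisson(), freq_weights=weights)
fit = model.fit(cov_type="HC1")                # robust (sandwich) errors
for name, ratio in zip(["intercept", "age>=65", "female", "obese"],
                       np.exp(fit.params)):
    print(f"{name:>9}: prevalence ratio = {ratio:.2f}")
```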
The most prevalent dyad of disease clusters was asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes (31.5%; 95% CI, 27.9%--35.2%) (Table 4). The most prevalent triad of disease clusters was asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis (28.4%; 95% CI, 21.2%--35.7%).

###### Table 3. Prevalence of the 5 Most Prevalent Dyads and Triads of Multiple Chronic Conditions (≥2 Chronic Conditions) Among Israelis Aged ≥21 Years, by Sex, Israeli National Health Interview Survey, 2014--2015

| Dyad or Triad, by Sex | Weighted % (95% Confidence Interval) |
|---|---|
| **Dyads** | |
| **Overall** | |
| Dyslipidemia and hypertension | 24.0 (20.7--27.3) |
| Dyslipidemia and thyroid disease | 6.6 (4.7--8.6) |
| Dyslipidemia and migraine | 6.1 (4.2--7.9) |
| Dyslipidemia and diabetes | 5.9 (4.1--7.8) |
| Dyslipidemia and asthma | 5.8 (4.0--7.7) |
| **Men** | |
| Dyslipidemia and hypertension | 31.8 (26.5--37.2) |
| Dyslipidemia and asthma | 10.0 (6.5--13.5) |
| Dyslipidemia and diabetes | 7.7 (4.6--10.8) |
| Dyslipidemia and cancer | 4.6 (2.2--7.1) |
| Diabetes and hypertension | 4.0 (1.7--6.3) |
| **Women** | |
| Dyslipidemia and hypertension | 17.5 (13.5--21.5) |
| Dyslipidemia and thyroid disease | 9.2 (6.1--12.2) |
| Dyslipidemia and migraine | 8.1 (5.3--11.1) |
| Dyslipidemia and arthritis | 4.5 (2.3--6.7) |
| Dyslipidemia and diabetes | 4.4 (2.3--6.6) |
| **Triads** | |
| **Overall** | |
| Dyslipidemia, hypertension, and diabetes | 17.9 (13.9--21.9) |
| Dyslipidemia, hypertension, and thyroid disease | 7.0 (4.4--9.7) |
| Dyslipidemia, hypertension, and heart attack | 5.3 (2.9--7.7) |
| Dyslipidemia, hypertension, and osteoporosis | 4.7 (2.5--7.0) |
| Dyslipidemia, hypertension, and cancer | 4.5 (2.3--6.7) |
| **Men** | |
| Dyslipidemia, hypertension, and diabetes | 29.5 (21.7--37.2) |
| Dyslipidemia, hypertension, and heart attack | 12.7 (7.0--18.4) |
| Dyslipidemia, hypertension, and cancer | 8.5 (3.7--13.3) |
| Dyslipidemia, hypertension, and asthma | 3.1 (0.1--6.0) |
| Dyslipidemia, diabetes, and asthma | 3.0 (0.1--3.9) |
| **Women** | |
| Dyslipidemia, hypertension, and diabetes | 10.9 (6.7--15.0) |
| Dyslipidemia, hypertension, and thyroid disease | 10.3 (6.3--14.4) |
| Dyslipidemia, hypertension, and osteoporosis | 7.0 (3.6--10.4) |
| Dyslipidemia, hypertension, and migraine | 5.0 (2.1--7.9) |
| Dyslipidemia, hypertension, and arthritis | 3.1 (0.7--5.4) |

###### Table 4. Prevalence of the 5 Most Prevalent Dyads and Triads of Clusters of Chronic Conditions (≥2 Chronic Conditions) Among Israelis Aged ≥21 Years, by Sex, Israeli National Health Interview Survey, 2014--2015

| Clusters, by Sex | Weighted % (95% Confidence Interval) |
|---|---|
| **Dyads** | |
| **Overall** | |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes | 31.5 (27.9--35.2) |
| Hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis | 25.5 (22.1--28.9) |
| Hypertension, dyslipidemia, heart attack, or diabetes/thyroid disease | 17.5 (14.5--20.5) |
| Hypertension, dyslipidemia, heart attack, or diabetes/cancer | 12.5 (9.6--14.8) |
| Asthma or migraine/thyroid disease | 4.5 (2.8--6.2) |
| **Men** | |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes | 39.7 (33.3--46.1) |
| Hypertension, dyslipidemia, heart attack, or diabetes/cancer | 22.6 (17.2--28.1) |
| Hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis | 19.3 (14.2--24.5) |
| Hypertension, dyslipidemia, heart attack, or diabetes/thyroid disease | 12.2 (7.9--16.5) |
| Asthma or migraine/arthritis or osteoporosis | 1.9 (1.1--3.6) |
| **Women** | |
| Hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis | 29.3 (24.6--33.5) |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes | 26.9 (22.5--31.2) |
| Hypertension, dyslipidemia, heart attack, or diabetes/thyroid disease | 20.6 (16.6--24.5) |
| Asthma or migraine/thyroid disease | 6.8 (4.3--9.3) |
| Asthma or migraine/arthritis or osteoporosis | 6.4 (4.3--9.3) |
| **Triads** | |
| **Overall** | |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis | 28.4 (21.2--35.7) |
| Hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis/thyroid disease | 22.6 (15.9--29.4) |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/thyroid disease | 15.8 (9.9--21.6) |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/cancer | 12.2 (6.9--17.4) |
| Hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis/cancer | 11.7 (6.6--16.9) |
| **Men** | |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/cancer | 22.4 (6.1--37.9) |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis | 21.3 (6.0--36.5) |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/thyroid disease | 18.9 (4.3--33.4) |
| Hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis/cancer | 13.7 (0.8--26.7) |
| Hypertension, dyslipidemia, heart attack, or diabetes/cancer/thyroid disease | 11.6 (0.0--21.6) |
| **Women** | |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis | 30.0 (21.9--38.2) |
| Hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis/thyroid disease | 25.5 (17.7--33.3) |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/thyroid disease | 15.1 (8.7--21.4) |
| Hypertension, dyslipidemia, heart attack, or diabetes/arthritis or osteoporosis/cancer | 11.8 (6.1--17.6) |
| Asthma or migraine/hypertension, dyslipidemia, heart attack, or diabetes/cancer | 9.3 (4.1--14.5) |

Trend analysis for MCC and chronic conditions
---------------------------------------------

The prevalence of MCC increased with each administration of the INHIS among both the Jewish and Arab populations, and the prevalence of having no chronic conditions correspondingly decreased (Figure). The age-adjusted prevalence of MCC increased by 6.7% between 2003--2004 and 2014--2015. Among the Arab population, the age-adjusted prevalence of MCC increased from 21.5% in 2003--2004 to 24.6% in 2007--2010 and to 27.8% in 2014--2015 (*P* < .001). Among the Jewish population, the age-adjusted prevalence of MCC increased from 17.0% in 2003--2004 to 20.6% in 2007--2010 and to 23.5% in 2014--2015 (*P* < .001). An increase was observed among all age groups (Appendix Table 1). The prevalence of each chronic condition changed significantly over time, except for arthritis and osteoporosis (Appendix Table 2). The chronic condition with the greatest increase in prevalence was cancer (a 76.0% increase), followed by dyslipidemia (a 75.1% increase). The disease cluster with the greatest increase in prevalence (a 44.6% increase) was hypertension, dyslipidemia, heart attack, or diabetes.

Discussion
==========

Our study explored prevalence, trends, and factors associated with MCC in Israel. The weighted prevalence of MCC in 2014--2015 was 27.3%.
Although the prevalence of having 1 chronic condition in our study (24.4%) was similar to the prevalence in a US study in 2012 (22.3%) ([@R18]), the prevalence of MCC was lower in our study (27.3%) than in the US study (33.8%). The prevalence of MCC in our study was also lower than the prevalence (37.2%) in a study conducted in Yorkshire, England, between 2010 and 2012, among a population aged 24 or older ([@R11]). A study in Hong Kong in 2012 found a prevalence of MCC of 13.4%; this low rate may have been partly due to differing levels of access to health services across social strata ([@R19]). In addition, each of these studies used a different list of chronic conditions to estimate MCC, and each used a different method of data collection, which might have affected the results. For example, higher estimates of MCC in some countries may have resulted from including chronic conditions such as fatigue and insomnia, which we did not include.

We found that MCC was associated with older age, female sex, a monthly household income of $3,000 or less, being a current or past smoker, and being overweight or obese. The association with older age is consistent with findings of the US National Health Interview Survey ([@R3]). In our study, a significantly greater percentage of women than men reported MCC. This finding is in accordance with findings in several other countries ([@R3],[@R18],[@R20],[@R21]). The association of lower household income with MCC found in our study is also consistent with findings from other studies, such as a study in the mid-south region of the United States ([@R22]).

The age-adjusted prevalence of MCC was significantly higher among the Arab population than among the Jewish population. This finding is consistent with known population-group differences in health status and the prevalence of risk factors among the Israeli population. The prevalence of obesity and diabetes is higher among the Arab population than among the Jewish population ([@R23],[@R24]); physical inactivity (ie, not being physically active for 20 minutes at least 3 times per week) is prevalent (77.5% in 2010) among the Arab population, and smoking rates among Arab men are high (47.2% in 2010) ([@R16],[@R17]). In the multivariate analysis, after we controlled for various explanatory variables, the significant difference between the Jewish and Arab populations in the prevalence of MCC disappeared, indicating that the observed difference may be explained by social and behavioral factors rather than a biological or ethnic predisposition to disease. Population-group differences in risk factors for chronic diseases have been reported in other studies; for example, the Racial and Ethnic Approaches to Community Health (REACH) 2010 risk factor survey conducted in the United States among 4 racial/ethnic minority populations (black, Hispanic, Asian/Pacific Islander, and American Indian) found that these populations had greater risks for disease compared with the general population living in the same area. The excess risks were attributed to differences in the distribution of risk factors, chronic conditions, and use of preventive services ([@R25]). As in the REACH study, our study did not find an association between ethnicity and MCC after controlling for social and behavioral risk factors.

Our trend analysis indicated a significant increase in age-adjusted prevalence rates and in age-specific rates of MCC from 2003--2004 to 2014--2015. These trends were evident in both the Jewish and Arab populations.
Because chronic conditions result from lifetime exposures and other risk factors, it follows that longer survival would increase the number of people living with chronic conditions ([@R26]). On the other hand, the increases could be attributed, at least partially, to changes in diagnostic criteria during the study period; for example, the diagnostic criteria for diabetes changed ([@R27]). In addition, our data on increases in rates of MCC over time are consistent with US data, which show a significant increase in MCC between 2001 and 2010 ([@R3]). The increase in MCC was evident among all age groups, consistent with current research. An increasing number of young and middle-aged adults are reporting more than 1 chronic condition ([@R28]). One study found that the younger population is more likely to have clusters of associated diseases than to have isolated diseases, which may explain the high prevalence of MCC among young adults ([@R10]).

The increase in MCC may be attributed in part to increasing rates of risk factors associated with chronic conditions, such as obesity, a known risk factor for diabetes, coronary heart disease, elevated blood pressure, and certain types of cancer ([@R29],[@R30]). The prevalence of obesity increased from 15.9% to 21.1% (an increase of 33%) among the Israeli adult population from 2003 to 2014 ([@R16],[@R17]). In parallel, the prevalence of self-reported physician-diagnosed diabetes increased by 49.3% during the same time ([@R16],[@R17]). The increase in MCC may also be attributed to an increase in awareness and use of health services for the early detection of chronic conditions. For example, from 2003 to 2015, mammography screening in Israel increased by 23%, blood pressure screening by 29%, and cholesterol testing by 94% ([@R16],[@R17]). Screening for early detection of cancer also contributes to increased incidence of cancer, which may be reflected in the rise in incidence of breast cancer among Israeli Arab women ([@R31]).

The disease combinations found in our study are different from the combinations found in other studies. The 2 most common dyads among men in our study were dyslipidemia and hypertension and dyslipidemia and asthma, whereas in the United States, the 2 most common dyads were hypertension and arthritis and hypertension and diabetes ([@R3]). We found similar differences for women. On the other hand, our study found that hypertension and dyslipidemia were the most prevalent chronic conditions occurring together with other chronic conditions, an outcome consistent with the findings of a study showing that hypertension and hyperlipidemia were the most prevalent chronic conditions in MCC ([@R32]). A German study among older adults found that the most common triads of the most prevalent chronic conditions were hypertension, lipid metabolism disorders, and chronic low back pain, and diabetes mellitus, osteoarthritis, and chronic ischemic heart disease ([@R33]), whereas in our study, among the population aged 65 or older, we found hypertension, dyslipidemia, and diabetes and hypertension, osteoporosis, and thyroid disease to be the most common triads. The differences in combinations may be attributed in part to differences in the chronic conditions that were investigated and to differences in type of data source. For example, survey data (generally self-reported) are likely to yield different combinations of chronic conditions than are clinical data.
It is also worth noting that each chronic disease has a different impact on health-related quality of life and daily functioning. For example, osteoarthritis of the knee has a particularly great impact on the health-related quality of life of Chinese patients ([@R34]). To compare the prevalence of MCC across studies, a standardized list of chronic conditions and common criteria for data sources need to be created. Although our study adds to the growing evidence that MCC is becoming a greater burden on the health care system and that its prevalence is increasing among the young adult population, this outcome needs to be further investigated and validated by using data from health maintenance organizations in addition to self-reported survey data.

As far as we know, our study is the first to explore the prevalence and correlates of MCC in 3 administrations of a large population-based survey in Israel. A major limitation of our study is the cross-sectional design, which precludes assumptions of causality. However, because our study had a large sample, it yielded stable estimates. Additionally, we could not assess the severity and duration of the chronic conditions or changes in conditions. On the other hand, because our study consisted of data on chronic conditions diagnosed by a physician, problems of recall bias were less likely to occur.

With the steady increase in the population aged 65 or older, the prevalence of MCC will continue to increase. A comprehensive approach is needed to reduce the burden of chronic conditions, including intervention programs targeting populations at risk. Because the increase in MCC was observed across all age groups, preventive strategies need to be tailored for the younger population as well as for the older population. In addition, because our study indicated that the most prevalent chronic conditions in Israel are hypertension and dyslipidemia, a principal focus of preventive intervention in Israel needs to be directed toward healthy lifestyle promotion. To enable the comparison of data across studies of MCC, the list of chronic conditions investigated and the definition of MCC need to be standardized.

The authors received no funding for the research described in this article and have no conflicts of interest to declare. The opinions expressed by authors contributing to this journal do not necessarily reflect the opinions of the U.S. Department of Health and Human Services, the Public Health Service, the Centers for Disease Control and Prevention, or the authors' affiliated institutions.

*Suggested citation for this article:* Hayek S, Ifrah A, Enav T, Shohat T. Prevalence, Correlates, and Time Trends of Multiple Chronic Conditions Among Israeli Adults: Estimates From the Israeli National Health Interview Survey, 2014--2015. Prev Chronic Dis 2017;14:170038. DOI: <https://doi.org/10.5888/pcd14.170038>.

###### Table 1. Trends in Prevalence of Multiple Chronic Conditions (≥2 Chronic Conditions) Among Israelis Aged ≥21 Years, by Age Group, Israeli National Health Interview Survey (INHIS), 2003--2014

| Age Group, y | INHIS-I (2003--2004), No. (Weighted %) \[95% Confidence Interval\] | INHIS-III (2014--2015), No. (Weighted %) \[95% Confidence Interval\] |
|---|---|---|
| 21--34 | 53 (4.1) \[3.0--5.3\] | 34 (7.1) \[4.6--9.6\] |
| 35--49 | 240 (13.4) \[11.7--14.9\] | 203 (14.0) \[12.1--15.9\] |
| 50--64 | 698 (34.9) \[32.7--37.2\] | 499 (36.6) \[33.8--39.4\] |
| ≥65 | 837 (47.7) \[45.2--49.8\] | 726 (61.0) \[58.1--63.8\] |

###### Table 2. Trends in the Prevalence of Chronic Conditions Among Israelis Aged ≥21 Years, Israeli National Health Interview Survey (INHIS)^a^

| Condition or Cluster | INHIS-I (2003--2004), No. (%) (N = 9,509) | INHIS-II (2007--2010), No. (%) (N = 10,331) | INHIS-III (2014--2015), No. (%) (N = 4,351) | % Change From 2003--2004 to 2014--2015 | *P* for Trend^b^ |
|---|---|---|---|---|---|
| **Condition** | | | | | |
| Asthma | 539 (5.7) | 572 (5.8) | 312 (7.2) | 26.3 | .008 |
| Hypertension | 1,636 (15.3) | 2,472 (19.8) | 1,196 (20.6) | 34.6 | \<.001 |
| Dyslipidemia | 1,724 (16.9) | 3,033 (24.7) | 1,630 (29.6) | 75.1 | \<.001 |
| Heart attack | 361 (3.5) | 420 (2.9) | 218 (2.9) | −17.1 | .01 |
| Arthritis | 578 (5.2) | 495 (3.7) | 347 (5.5) | 5.7 | .08 |
| Osteoporosis | 545 (5.1) | 639 (5.4) | 293 (5.0) | −1.9 | .17 |
| Migraine | 674 (6.8) | 762 (7.0) | 361 (8.8) | 29.4 | \<.001 |
| Diabetes | 683 (6.1) | 1,089 (8.3) | 578 (8.4) | 37.3 | \<.001 |
| Cancer | 256 (2.5) | 448 (3.5) | 257 (4.4) | 76.0 | \<.001 |
| **Clusters** | | | | | |
| Asthma or migraine | 1,147 (11.9) | 1,259 (12.2) | 633 (15.2) | 27.7 | \<.001 |
| Hypertension, dyslipidemia, heart attack, or diabetes | 2,933 (28.0) | 4,353 (35.3) | 2,249 (40.5) | 44.6 | .001 |
| Arthritis or osteoporosis | 993 (9.1) | 1,025 (8.2) | 565 (9.2) | 1.1 | .63 |
| Cancer | 256 (2.5) | 448 (3.5) | 257 (4.4) | 76.0 | \<.001 |

^a^ Percentages were weighted for age, sex, and population group.

^b^ For weighted percentage.
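As a consistency check on Table 2, the "% Change" column is simply the relative change in weighted prevalence between INHIS-I and INHIS-III. The minimal Python sketch below reproduces the column using only values copied from the table; small deviations (e.g., 37.7 computed vs. 37.3 published for diabetes) reflect rounding of the published percentages rather than an error in the formula.

```python
# Reproduce the "% Change From 2003-2004 to 2014-2015" column of Table 2.
# Inputs are the weighted prevalences (%) reported for INHIS-I and INHIS-III.
inhis_i = {"Asthma": 5.7, "Hypertension": 15.3, "Dyslipidemia": 16.9,
           "Heart attack": 3.5, "Diabetes": 6.1, "Cancer": 2.5}
inhis_iii = {"Asthma": 7.2, "Hypertension": 20.6, "Dyslipidemia": 29.6,
             "Heart attack": 2.9, "Diabetes": 8.4, "Cancer": 4.4}

for condition, p_start in inhis_i.items():
    p_end = inhis_iii[condition]
    pct_change = (p_end - p_start) / p_start * 100  # relative change in prevalence
    print(f"{condition}: {pct_change:+.1f}%")
# Asthma: +26.3%, Hypertension: +34.6%, Dyslipidemia: +75.1%,
# Heart attack: -17.1%, Diabetes: +37.7% (published: 37.3, a rounding artifact),
# Cancer: +76.0%
```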
Ethnic differences in the association of fat and lean mass with bone mineral density in the Singapore population. BMC Proceedings volume 6, Article number: P43 (2012).

Introduction

Obesity and osteoporosis are two global health problems with pronounced morbidity and mortality. While body weight appears to mitigate the development of osteoporosis, whether excess body fat promotes or protects against osteoporosis remains a conundrum. The effect of ethnicity on these associations has also been understudied. We hypothesize that (1) fat mass (FM) and lean mass (LM) are independently associated with bone mineral density (BMD) and that (2) ethnic differences exist in the association of FM and LM with BMD among Chinese, Malay and Indian subjects.

Methods

We evaluated 150 overweight male subjects aged ≥21 years with body mass index ≥25 from 3 ethnic groups (Chinese = 73; Malays = 41; Indians = 36). BMD in five regions (lumbar spine, femoral neck, total hip, ultra-distal radius and one-third radius), FM and LM were measured by dual-energy X-ray absorptiometry (DEXA) using a Hologic Discovery Wi densitometer. Whole abdomen subcutaneous and visceral fat volumes were determined by magnetic resonance imaging (MRI) and a validated segmentation algorithm. Linear regression models were developed to test the association of FM/LM with BMD, and univariate ANOVA was used to test for interaction between ethnicity and FM/LM with BMD.

Results

After adjusting for age and height, LM was positively correlated with BMD in all three ethnicities, but at different skeletal sites: weight-bearing regions (femoral neck, hip) in Chinese, and non-weight-bearing regions (ultra-distal and one-third radius) in Malays and Indians. A negative correlation between FM and BMD was observed consistently in all regions for the Indians, especially at the hip. Visceral fat was negatively correlated with BMD, most pronounced among Chinese and least among Malays. The interaction models revealed that with each unit of LM, Malays showed a greater increment in BMD than Chinese and Indian subjects at the ultra-distal radius.

Conclusions

Our findings suggest that FM and LM affect BMD in opposite directions, with different physiological mechanisms modulating this relationship. Substantial ethnic differences were observed in the association of FM and LM with BMD. These results may help explain the variation in hip and wrist fracture rates by ethnicity, and may warrant ethnic-specific clinical recommendations.

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article: Teo, A., Ng, A., Venkataraman, K. et al. Ethnic differences in the association of fat and lean mass with bone mineral density in the Singapore population. BMC Proc 6, P43 (2012). https://doi.org/10.1186/1753-6561-6-S4-P43
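The Methods describe two model families: adjusted linear regressions of BMD on FM and LM, and a test for an ethnicity × FM/LM interaction. A minimal sketch of how such models are commonly fit is below. The data frame, file name, and column names are ours (the study's actual variable coding is not available here), and the interaction test is expressed as an OLS model with an ethnicity × LM term, which is equivalent in spirit to the univariate ANOVA the abstract mentions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: bmd (g/cm^2 at one skeletal site), fat_mass and
# lean_mass (kg), age (years), height (cm), ethnicity (Chinese/Malay/Indian).
df = pd.read_csv("bmd_cohort.csv")  # placeholder file name

# (1) Association of FM and LM with BMD, adjusted for age and height.
main_model = smf.ols("bmd ~ fat_mass + lean_mass + age + height", data=df).fit()
print(main_model.summary())

# (2) Interaction model: does the LM slope differ by ethnic group?
# C(ethnicity) enters as a factor; lean_mass * C(ethnicity) expands to the
# main effects plus the interaction, testing whether each unit of lean mass
# yields a different BMD increment across groups (as reported for the
# ultra-distal radius in Malays).
interaction_model = smf.ols(
    "bmd ~ fat_mass + lean_mass * C(ethnicity) + age + height", data=df
).fit()
print(interaction_model.summary())
```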
https://bmcproc.biomedcentral.com/articles/10.1186/1753-6561-6-S4-P43
This article looks at what you can do to dust yourself off and develop resilience in life that will serve you.

What is Resilience? A Definition

“Resilience is that ineffable quality that allows some people to be knocked down by life and come back stronger than ever. Rather than letting failure overcome them and drain their resolve, they find a way to rise from the ashes.”

In a nutshell, resilience can be defined as the ability – and tendency – to “bounce back.”

What’s the Meaning of Bouncing Back?

“Bouncing back” is what we do when we face disappointment, defeat, and failure, but instead of wallowing or letting things keep us down, we get back up and continue on with our lives. According to the APA Help Center, it’s “the process of adapting well in the face of adversity, trauma, tragedy, threats or significant sources of stress” (APA, n.d.). You might say someone bounces back when they experience a traumatic car accident and sustain serious injuries, but stay positive and optimistic through a long physical therapy journey.

Resilience and Mental Toughness: What’s the Difference?

Aside from the term “bouncing back,” there are many more similar concepts that resilience is often associated with. For instance, resilience is frequently used interchangeably with “mental toughness.” So what is mental toughness? Mental toughness is “a personality trait which determines in large part how individuals deal with stress, pressure and challenge irrespective of circumstances” (Strycharczyk, 2015). It’s part hardiness (optimism and predisposition towards challenge and risk), part confidence, and it is what allows people to take whatever comes in stride, with a focus on what they can learn and gain from the experience.

While the association with resilience is understandable, it’s also easy to see where they differ: resilience is what helps people recover from a setback, but mental toughness can help people avoid experiencing a setback in the first place. As Doug Strycharczyk puts it, “All mentally tough individuals are resilient, but not all resilient individuals are mentally tough” (2015). Those who are mentally tough are not only able to bounce back, they are more likely to see hardship as a welcome challenge and greet it with a smile.

Resilience vs. Grit

Another commonly used synonym for resilience is grit, but is grit really a synonym for resilience? According to Professor Guy Claxton’s Building Learning Power organization, grit is not just a synonym for resilience: “Grit is a more recent import, much researched by Angela Duckworth, and is defined as the tendency to sustain interest and effort towards long term goals. It is associated with self control and deferring short term gratification” (n.d.). Resilience is more narrowly defined, although it is related to the same experiences, skills, and competencies. One simple way to think about the differences between resilience and grit is that resilience more often refers to the ability to bounce back from short-term struggles, while grit is the tendency to stick with something long-term, no matter how difficult it is or how many roadblocks you face. It’s great to have both resilience and grit, but it’s clear that they refer to two different traits.

Mental Endurance: Yet Another Synonym?

Another construct that is similar to resilience is mental endurance. Mental endurance refers to the mental or inner strength that we use to deal with our challenges.
It requires willpower, self-discipline, and perseverance to develop and maintain mental endurance (Sasson, n.d.). Although it is not specific to “bouncing back” from trauma or adversity, it is related in the sense that both traits help us deal with difficulty in our lives.

What is the Meaning of Fortitude?

Finally, there’s fortitude—yet another word that is often used in tandem with or in lieu of “resilience.” Merriam-Webster’s dictionary defines fortitude as “strength of mind that enables a person to encounter danger or bear pain or adversity with courage.” This shares some obvious similarities with the other constructs mentioned above, namely mental toughness and mental endurance. All three are rooted in the idea of inner strength: a reserve of mental power that we can draw upon to get us through the most difficult times.

The Psychology of Mental Strength

Although you might read about resilience (and all of the many traits related to it) and think that it applies only to the most inspiring, impressive, and awesome among us, resilience is surprisingly common. As the APA Help Center’s piece on resilience states, “Research has shown that resilience is ordinary, not extraordinary. People commonly demonstrate resilience.” Resilience isn’t about floating through life on a breeze, or skating by all of life’s many challenges unscathed; rather, it’s about experiencing all of the negative, difficult, and distressing events that life throws at you and staying on task, optimistic, and high-functioning. In fact, developing resilience requires emotional distress: if we never ran into disappointment in the first place, we would never learn how to deal with it. When you think about it in those terms, it’s easy to see that we all display some pretty impressive resilience. Some of us are more resilient than others, but we have all been knocked down, defeated, and despondent at some point in our lives; however, we kept going—and here we are today, stronger and more experienced.

Demonstrating Resilience as an Individual

So what does it look like to demonstrate resilience? The APA (n.d.) outlines a number of factors that contribute to and act as markers of resilience, including:

- The capacity to make realistic plans and take steps to carry them out.
- A positive view of yourself and confidence in your strengths and abilities.
- Skills in communication and problem-solving.
- The capacity to manage strong feelings and impulses.
Author and resilience expert Glenn Schiraldi (2017) provides even more examples and characteristics of resilient people, listing strengths, traits, and coping mechanisms that are highly correlated with resilience:

- Sense of autonomy (having appropriate separation or independence from family dysfunction; being self-sufficient; being determined to be different—perhaps leaving an abusive home; being self-protecting; having goals to build a better life)
- Calm under pressure (equanimity, the ability to regulate stress levels)
- Rational thought process
- Self-esteem
- Optimism
- Happiness and emotional intelligence
- Meaning and purpose (believing your life matters)
- Humor
- Altruism (learned helpfulness), love, and compassion

Schiraldi also mentions these characteristics:

- Character (integrity, moral strength)
- Curiosity (which is related to focus and interested engagement)
- Balance (engagement in a wide range of activities, such as hobbies, educational pursuits, jobs, social and cultural pastimes)
- Sociability and social competence (getting along, using bonding skills, being willing to seek out and commit to relationships, enjoying interdependence)
- Adaptability (having persistence, confidence, and flexibility; accepting what can’t be controlled; using creative problem-solving skills and active coping strategies)
- Intrinsic religious faith
- A long view of suffering
- Good health habits (getting sufficient sleep, nutrition, and exercise; not using alcohol or other substances immoderately; not using tobacco at all; maintaining good personal appearance and hygiene)

To summarize, resilient people have awareness (both of the self and of the environment around them), manage their feelings effectively, keep a handle on their thoughts, emotions, and behaviors, and understand that life has its inevitable ups and downs.

Why is Being Resilient so Important?

You hear a lot about growing and developing resilience – both in ourselves and in children – for good reason. Therapist and counselor Joshua Miles (2015) lists a few of the wide range of reasons that resilience is a great trait to have:

- Greater resilience leads to improved learning and academic achievement.
- Resilience is related to lower absences from work or school due to sickness.
- It contributes to reduced risk-taking behaviors including excessive drinking, smoking, and use of drugs.
- Those with greater resilience tend to be more involved in the community and/or family activities.
- Higher resilience is related to a lower rate of mortality and increased physical health.

The Effects of Psychological Strength on Overall Health

Although every point in that list is a good reason to pay attention to resilience, the last one may be most important of all. Resilience has a powerful impact on our health (and vice versa, in some ways). A recent review of the research suggested that resilience leads or contributes to many different positive health outcomes, including:

- The experience of more positive emotions and better regulation of negative emotions
- Fewer depressive symptoms
- Greater resistance to stress
- Better coping with stress, through enhanced problem-solving, a positive orientation, and re-evaluation of stressors
- Successful aging and improved sense of well-being despite age-related challenges
- Better recovery after a spinal cord injury
- Better management of PTSD symptoms (Khosla, 2017).
Further, resilience experts Harry Mills and Mark Dombeck (n.d.) point to research showing that resilience boosts immune system functioning. Resilient people are able to better manage negative emotions and experience more positive emotions, which leads to objectively good health outcomes like more immune system cells and better immune functioning in cancer patients, and more favorable mortality rates in marrow transplant patients.

Growing Mentally Strong as a Person

Since we know that being resilient is such a helpful trait to have, the next logical question is: How do we develop it? Luckily, resilience is not an immutable, “you have it or you don’t” sort of trait. There may be a genetic component to a person’s base level of resilience, but you are always able to improve upon the resilience you have. This add-on resilience is often referred to as “self-learned resilience.”

How Self-Learned Resilience Works

Self-learned resilience, as the name implies, is the resilience that you build up in yourself through concerted effort. It is the result of being aware of the opportunities for self-development and having the courage to take advantage of them. There are many ways to build up your own reserve of self-learned resilience. Below are just a few ways to go about it, from three different sources.

From Dr. Carine Nzodom (2017) on using a loss or stressful event to grow:

- Allow yourself to feel a wide range of emotions.
- Identify your support system and let them be there for you.
- Process your emotions with the help of a therapist.
- Be mindful of your wellness and self-care.
- Get some rest or try to get an adequate amount of sleep.
- Try your best to maintain a routine.
- Write about your experience and share it with others.

From VeryWell Mind author Kendra Cherry (2018):

- Find a sense of purpose in your life, which will help boost you up on difficult days.
- Build positive beliefs in your abilities to help you increase your self-esteem.
- Develop a strong social network of people who support you and who you can confide in.
- Embrace change as the inevitability that it is, and be ready for it.
- Be optimistic—you don’t need to ignore your problems, just understand that it’s all temporary and that you have what it takes to make it through.
- Nurture yourself with healthy, positive self-care—get enough sleep, eat well, and exercise.
- Develop your problem-solving skills through strategies like making a list of potential ways to solve your current problem.
- Establish reasonable goals by brainstorming solutions and breaking them down into manageable steps.
- Take action to solve problems rather than waiting for the problem to solve itself.

And remember: keep working on your skills and don’t get discouraged if it takes a while to get to the level of resilience you desire.

From Kira M. Newman (2016) at the University of California, Berkeley’s Greater Good Science Center:

- Change the narrative by free writing about the issue or deciding to focus on the positives.
- Face your fears and challenge yourself; expose yourself to things that scare you in increasingly larger doses.
- Practice self-compassion; try to be mindful, remind yourself that you’re not alone, and be kind to yourself.
- Meditate and practice mindfulness; the Body Scan is a good way to work on your meditation and mindfulness skills.
- Cultivate forgiveness by letting go of grudges and letting yourself off the hook.
https://skyline.org.au/the-importance-of-resilience-in-life/
by IAFF Center of Excellence for Behavioral Health Treatment and Recovery

Resilience is often thought of as the ability to bounce back from life’s adversities or to withstand loss or change. Given the high rate of occupational trauma, the inherent stress of the job, and the toll on family life, we know that fire and EMS personnel are an incredibly resilient population. However, during a pandemic, even the most seasoned crew members may begin to crumble under stress, while others seem to thrive. What makes someone resilient in the face of adversity, severe stress or trauma? Decades of research have identified several key protective factors that predict human resilience:

- Choosing to maintain an optimistic outlook
- Facing fear, rather than avoiding it
- Seeking and accepting social support
- Creating meaning and opportunity from adversity
- Prioritizing physical fitness and strength

Other personal factors that researchers have found to predict human resilience include having a clear moral compass, relying on spirituality or a higher power, having defined role models in life, and the ability to maintain flexible thinking (Southwick and Charney, 2018). To assess your current level of resilience, see the Brief Resilience Scale (BRS), an informal assessment tool designed to measure your ability to bounce back from stress (a scoring sketch appears at the end of this section).

Steps to Building Resilience During COVID-19

Another important aspect of resilience is the ability to accept what’s out of our control, while refocusing energy on what we can control. Beyond physically protecting yourself from virus spread, there are plenty of concrete actions you can take today to protect your emotional well-being and build resilience, both on and off the job. Consider these self-care strategies for you and other crew members:

- Hunt for something good. When communities are overwhelmed by widespread illness and economic stress, it’s easy to get weighed down. To keep things in perspective, force yourself to find something good each day. This could be noticing a crew member’s job well done, enjoying extra quality time with family or simply reflecting on the fact that you are healthy. Tell someone about the good.
- Limit exposure to news and social media. If you’re interested in reducing daily stress and creating a more optimistic outlook, limit the number of times per day you check the news or social media. The 24/7 media cycle is one of the biggest modern-day triggers for anxiety and rumination. It’s also completely within our control to turn it off.
- Use video chat to stay connected. It’s simple, but true. Social distance doesn’t mean social isolation. Find a way to stay connected to people in your normal routine who are supportive. Just a few minutes a day can go a long way to reduce feelings of isolation.
- Maintain personal boundaries. Although we remain physically separated, in many ways our world has never been more connected. In the age of social media, texting and video chats, we are constantly accessible. If constant connection leaves you feeling drained or stressed, give yourself permission to unplug on some days.
- Get moving. Strive for 20-30 minutes of physical activity every day. Whether it’s an app-based exercise or walking around your neighborhood while maintaining social distance, movement is absolutely essential. Exercise helps boost mood, improves concentration and strengthens the immune system.
- Find purpose in a challenge. Try looking at a problem at work or home through a different lens.
For example, if your crew is completely overwhelmed, imagine how manageable the job will seem when call volumes stabilize. If your department is lacking good leadership, now is the time to let your leadership qualities shine. If cancelled social and sporting events have left you feeling restless, consider what neglected hobby you can reconnect to at home. - Use telemental health services as needed. If you are struggling with behavioral health problems, such as depression, anxiety, PTSD or grief, seeking mental health services has never been easier or more private. Telemental health services are mental health services provided over the phone, a mobile app or an interactive website. Due to the COVID-19 outbreak, most commercial insurance plans now cover these services.
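Since the article points readers to the Brief Resilience Scale, here is a minimal scoring sketch. It follows the commonly described BRS scheme (Smith et al., 2008): six items rated on a 1-5 scale, with the negatively worded items 2, 4, and 6 reverse-scored, and the final score taken as the mean of the six items. The function name and example responses are ours, not part of the published instrument.

```python
# Scoring sketch for the Brief Resilience Scale (BRS), as commonly described:
# six items rated 1-5; items 2, 4, and 6 are negatively worded and therefore
# reverse-scored; the total score is the item mean (1.0 = low, 5.0 = high).
def score_brs(responses):
    """responses: list of six integers in 1..5, in item order 1-6."""
    if len(responses) != 6 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("BRS expects six responses on a 1-5 scale")
    reverse_items = {1, 3, 5}  # zero-based indices of items 2, 4, 6
    adjusted = [6 - r if i in reverse_items else r
                for i, r in enumerate(responses)]
    return sum(adjusted) / 6.0

# Example: strongly agree on positive items, disagree on negative ones.
print(round(score_brs([4, 2, 4, 2, 5, 1]), 2))  # -> 4.33 (high resilience)
```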
https://elpasofirewire.com/2020/07/09/how-to-bend-not-break-building-resilience-during-a-pandemic/
Sometimes we encounter the opinion among owners of plots of land that no price or technical analysis is necessary, because “the customer will buy it anyway”. It is true that every plot of land will sooner or later find its buyer, but usually later rather than sooner, and usually at a price that differs from the expected one. Assuming that the buyer will obtain the necessary data himself can work only in very few situations. A lack of reliable information on the plot’s potential results in an incorrect estimate of its value; in that case, there will usually be no interested buyers, or the offers made will be only a fraction of the plot’s value.

The most sought-after plots of land are most often located in areas covered by a local spatial development plan (LSDP). Such plans determine basic parameters such as the maximum height of development, the number of storeys, the intensity of development, the biologically active area, the number of parking spaces, the binding and impassable development lines, and so on. On the basis of these parameters, preliminary land absorption analyses are prepared, based mainly on numerical factors; a simplified sketch of such a calculation follows this section. To gain a more precise knowledge of a plot’s potential, it is recommended to prepare an analysis taking into account additional requirements such as: distance from other buildings, distance from the plot boundary, fire safety requirements, the communication system on the plot and in the neighbourhood, zones of influence on adjacent plots (shading, noise, etc.), terrain, type of soil, nature protection zones, flood hazard, landslides, air and ground pollution, the course of power lines or other linear objects, availability of utilities, and planned changes to the LSDP.

Owners of plots which are not included in an LSDP (about 70% of owners in Poland) are in an unfavourable situation. In this case, in order to contemplate a successful and sensible transaction, it is necessary to apply for a Land Development Decision. The Decision determines the development possibilities of the plot on the basis of an urban planning analysis. Sometimes the final result of the analysis differs from what the owner of the plot expected. Because parties to the administrative proceedings affected by the planned investment, or parties having a so-called “legal interest”, must be notified, obtaining a Land Development Decision is a time-consuming procedure. Once the administrative decision is obtained, an absorption analysis should be carried out, as in the case of plots included in an LSDP.

Properly performed absorption analyses make it possible for prospective buyers to conduct a business analysis and to make a well-founded decision on purchasing a plot of land. Investors are willing to pay a good price if they have sufficient knowledge about the plot. If the time needed to make a business decision is excessively long or even impossible to determine, they will generally not take the risk of buying the plot, or will offer a price significantly below the market price.
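As promised above, here is an illustrative-only sketch of how the numerical LSDP parameters bound a preliminary absorption estimate. All parameter names and example values are hypothetical, and a real analysis must also check the qualitative constraints listed in the article (setbacks, fire safety, shading, utilities, and so on).

```python
# Illustrative-only preliminary land absorption estimate from LSDP-style
# numeric parameters. Parameter names and values are hypothetical.
def preliminary_absorption(plot_area_m2, max_intensity, max_storeys,
                           min_bio_active_ratio, parking_per_100m2):
    # Development intensity (floor area ratio) caps total gross floor area.
    max_gfa = plot_area_m2 * max_intensity
    # The building footprint cannot exceed the land left after the
    # biologically active area requirement; the storey limit caps how much
    # floor area that footprint can yield.
    max_footprint = plot_area_m2 * (1.0 - min_bio_active_ratio)
    gfa = min(max_gfa, max_footprint * max_storeys)
    parking_spaces = gfa / 100.0 * parking_per_100m2
    return {"max_gfa_m2": round(gfa), "parking_spaces": round(parking_spaces)}

# Example: a 2,000 m2 plot, intensity 1.2, 4 storeys, 25% biologically
# active area, 1.5 parking spaces per 100 m2 of floor area.
print(preliminary_absorption(2000, 1.2, 4, 0.25, 1.5))
# -> {'max_gfa_m2': 2400, 'parking_spaces': 36}
```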
https://remaxcapital.pl/en/how-to-obtain-a-better-price-for-a-plot-of-land
In broad terms, psychology can be defined as the science of mind and the study of consciousness. Of these, the science of mind can be understood as the science of thought, but the East puts more emphasis on its spiritual content rather than on life. Therefore, a distinction is sometimes made between pure psychology as understood in the East, and psychology as a study of mental phenomena as understood in the West. A psychology which encompasses all the deepest levels of our mind fits with the Eastern concept, where psychology is considered a means for the perception of truth related to all aspects of life. It starts from the premise that “there are no ranges of life or mind which cannot be reached by a methodical training of will and knowledge” (Radhakrishnan). It tacitly recognizes the connection between body, mind and spirit, and establishes that connection through different levels of consciousness.

However, psychology as commonly understood does not and cannot cover the whole course of the psychological journey from body to spirit unless we undergo a qualitative transformation of the mind. The first of these transformations is psychological, and it is often followed by spiritual transformation. As some well-known Western philosophers have noted, the contribution of the East has been more in the field of philosophy and metaphysics than in psychology because, in the East, philosophy and metaphysics together encompass the whole range of psychological studies. Psychology as understood in the West, by contrast, is not free from mental attributes or from our psychological structure, which depends on the combinations and permutations of our mental attributes, i.e., feelings, emotions, opinions, reactions, prejudices, etc. There the mental phenomena are described mainly in terms of physiological and/or psycho-physiological experiences, which are ever fleeting. The underlying assumption is that the body cannot be made open to higher reality because we often get stuck at the level of psycho-physical experience and cannot transcend it, since psycho-mental patterns obstruct our inner growth. But this is so only because we are not yet able to link psychology to our inner being. This link is established through psychological transformation.

Psychological transformation is transcending the psychological structure consisting of our mental attributes, i.e., mind, body, feelings, emotion, etc., as these are ever changing and have no permanent existence, and arise because of our lack of understanding of the psychological process, which is a never-ending thought process. So long as we do not realize this, we cannot will them out, and cannot free ourselves from the limitations of the psychological structure and mental attributes. But once we realize that these mental attributes have no permanent existence, we can free ourselves from them. For example, anger is not permanent, and exists only in our compulsive mind. So instead of being angry we can be the awareness that is prior to and deeper than any thought about anger, and thus anger can be controlled. It can be transcended through the psychological experience that accrues when we transcend the limitations of anger. One may even feel positive biological responses indicating that there is a connection between the divine and the human.
This psychological experience comes first as a psychological solace or peace of mind which is still of a psycho-physical nature, and so is not yet final, because we cannot know ourselves through psychological analysis, which is a process. But this psychological experience orients our mind towards the higher mind and ultimately leads to the psychological transformation of the mind, when one becomes able to go beyond temporary psychological solace. Psychological solace is not yet free from mental phenomena, but it is not without some positive effects. It has been found useful in psychiatry, but psychology understood from this angle is different from its Eastern interpretation.

One of the indications that one has undergone psychic transformation is that he has risen above bodily experience, or above the pain body as E. Tolle terms it, and feels a certain degree of peace within, which St. Paul calls “the peace of God.” This felt peace within, which is deeper than any psychological solace, constitutes the first building block for spiritual transformation, as the mind then abides in “peace that passeth all understanding”. It is beyond all thinking, beyond all combinations and permutations of mental attributes. Accepting a level of psychological mind that is superior to the rational mind does not mean that we are free of all kinds of psychological drama that we create in our life when we fall prey to the compulsive mind.

Spiritual transformation is also needed in order to have the direct experience of Truth through Consciousness, where Consciousness also means pure consciousness or ultimate truth. Consciousness can be understood as a profound transformation of ordinary human consciousness into the timeless state of intense conscious presence, or a state of spiritual enlightenment. It is sometimes understood as the fundamental state of our joy, or the radiant joy of Being, or a natural state of felt oneness with the Being which is endowed with spiritual power. But since the state of spiritual enlightenment arises from beyond the mind, the first condition for entering into this enlightened state, or a state of enlightened awareness, is to free ourselves from the enslavement of the mind, or from incessant thinking, through psychological transformation. This transformation is not a simple evolutionary progression but an evolutionary leap, because firstly it is a transformation of the mind into what is beyond it, and secondly it is not just going beyond the mind but also growing up with the mind/spirit.

In the East, this enlightened awareness is called Consciousness or pure consciousness, or an equilibrium state of the three qualities sattwa, rajas and tamas, but then it also involves spiritual transformation. We then become the awareness itself, sometimes called the conscious Presence, the witness of all these states, or “the awareness that is prior to and deeper than any thoughts and emotions” (E. Tolle). Thoughts, feelings and emotions, reactions, anger, etc., are usually of a dualistic nature, and may become the cause of unhappiness, particularly when we identify with them. Identification with emotions makes us egoistic, and the experience becomes personalized and limited; but it is also possible to disengage ourselves from them, and to know that emotions can also be impersonal. Emotions can then be understood as depersonalized attributes that exist separately from our individual self, not related to you or to me or to any particular self. Anger is usually personalized.
Anger, for example, is a personalized psychological form of negative energy which is temporary; but if we continue to identify with it, anger becomes a personalized “me” that cannot exist apart from me, and then I have no control over it, because I have not yet gone beyond anger. So it is destructive; but once we disidentify ourselves from anger there is no angry “me” here, and anger becomes harmless. Emotions can be positive and depersonalized, and can be the medium for our internal journey, but the change from personal to depersonalized/impersonal states of emotion occurs through awareness, i.e., when we become the true arbiter of our emotions. The negative aspects of other mental attributes can be overcome in a similar way, by disidentifying ourselves from them, but this is possible only when we become aware of our true nature, which guides our thoughts, actions and behavior. That true guide is Consciousness.

Psychological transformation occurs from within, i.e., we consciously take part in the process of psychological transformation. We do this when we realize that we are not our mind, not our feelings and emotions and other mental attributes, but their controller, meaning that our life is not limited by our mind and its attributes. Being psychologically aware of the limitations of our mental attributes is the first and far-reaching step of psychological transformation. Although most of us are immersed in our thoughts, feelings, etc., the reality is that they have no existential reality, as they consist of shifting patterns of thoughts and feelings which come and go without touching the “I”, because the “I” knows that these shifting patterns can change only the outer world and cannot touch the “I”. From this point of view, our life is only a psychological reality that looks real. We then end up mistaking our life for this psychological drama, or mistaking the inner man for his psychological structure constituted by his mental attributes; but a time does come when the thinking man comes out of his thought structure and opens to his inner mind.

The internal journey moves on by growing into the inner mind, leaving behind all the attributes of the compulsive mind. Moving inward is not yet revelation; it is just the beginning of the spiritual quest, but it can change our psychological structure and lead to psychological transformation. The psychological structure can indeed be outgrown by going beyond body, mind and its attributes. It is now an accepted fact that, through yogic training, the body can be made an access point to higher reality, and that the outward-directed mind can be made inward-directed. The ordinary mind is then raised above the limits of the compulsive mind and its attributes, undergoes psychological transformation, and becomes the vehicle for spiritual transformation, when one not only grows into the inner world but also grows up with the inner world.

Psychological experiences have different grades, and include those experiences that are above direct mental apprehension. Psychological transformation is the first necessity for our spiritual transformation, as it is the psychological transformation of the mind that can make our body an access point to higher reality and free us from bodily limitations. It is said that initially psychological experiences come in the form of psychological afterglows that are often persistent. The mind sees something beyond itself, through itself, and finds itself at peace, and feels that it is looking inwards or “is coming home”.
It opens itself to the inner mind; it goes behind the surface mind and turns towards the Divine, or towards the Spirit. The mind is then ready for psychological transformation, which begins when we open ourselves to the Divine through devotion/bhakti, love, compassion, self-surrender, etc., and which results in a psychic change of our nature. The psyche (meaning “soul”, as different from mind and vital) comes in front from behind and guides the mind (Sri Aurobindo). This is the major, perhaps the single most important, achievement of psychological transformation, which consists of transcending the psychological structure by opening ourselves to the higher mind. It is something the rational mind cannot quite grasp, but it is something that is experienceable. It is the transformation of the ordinary mind under the supervision of the inner/higher mind. Therefore, the experience of this psychological transformation is related not to the ordinary mind but to the higher mind, which helps us in transcending the limitations of our psychological structure.

Peace, love, compassion, etc., are important mediums both for psychological and for spiritual transformation, but they do not bring permanent qualitative change, or a qualitative transformation of our nature, unless they descend on us as grace from above; and they do not descend on us unless we make efforts from below. So they are not sufficient in themselves. For example, compassion is extremely necessary for achieving Nirbana, but it is not yet Nirbana. Compassion constitutes the fundamental requirement for achieving Nirbana, but it is not the highest goal. Similar comments can be made about peace and love.

Persons who consciously take part in their psychological transformation also do so in their spiritual transformation, and the first condition is that they have to open themselves to the Spirit so that the power of the Spirit governs their mind, life and body. The basic difference between the two types of transformation is that while psychological transformation occurs “when all is in contact with the Divine through the individual psychic consciousness”, spiritual transformation occurs “when all is merged in the Divine in cosmic consciousness” (Sri Aurobindo). Spiritual transformation accrues when one has raised himself to the level of experiencing Peace, Ananda, unity of heart: an experience of the Illumined mind. While one goes inwards and tries to establish the relation between the psychic being and the outer nature in the first case, he goes upward to the Divine and tries to bring down the Divine into his own nature in the second case. In fact, this double movement, an ascent from below and a descent from above, constitutes the essence of Sri Aurobindo’s Integral Yoga. But neither of them is complete in isolation. Both of these movements must work in tandem, though the effort from below is considered more important.

Psychology, or pure psychology as understood in the East, has its roots in Consciousness, which is the objective of the science of mind, but it differs qualitatively from its Western interpretation. The basic difference between the two schools of thought is that while psychology in the East is inward looking, contemplative and synthetic, in the West it is more rationalistic, deductive and discursive. It tends to focus more on small particulars of the world, and misses the sense of wholeness. For example, while God is immanent and transcendent in the East, philosophy in the West has to work hard to prove God’s existence.
Many Western philosophers conceived God as unknowable and unthinkable, beyond reason but easily reachable through faith and faith alone. It is perhaps from this point of view that Jung wrote: “Psychology therefore holds that the mind cannot establish or assert anything beyond itself.” This has created a gap between man and God. The stories of Prometheus and Hercules seem to put man against God.

The basic difference in interpreting psychology in the two hemispheres is that while psychology in the West is essentially the study of our outer personality and emphasizes five-sensory perceptions of human values and behaviors, psychology in the East accepts that mind cannot be limited to five-sensory perceptions and that the higher realms of understanding that transcend intuitions and insights should also be taken into consideration. Pure psychology is first of all detaching ourselves from thoughts, feelings, emotions and actions, and entering into the state of revelation through the opening of the heart and the illumination of the mind. It is the science of soul-discovery, but what we normally know about ourselves is only a small fraction of what we must know, and what we know is related to our superficial activities, which are concerned only with our external being. Pure psychology deals with the techniques of going behind, below and above the external being, but it is not complete unless it also deals with the other levels of mind, the details of which are given in The Synthesis of Yoga by Sri Aurobindo, chapter VI.
https://www.spotlightnepal.com/2022/03/04/science-mind-psychology-eastern-perspective/
Disease Resistance of Wheat Varieties: Can Private Varieties Withstand the Pressure?

Dyson School of Applied Economics and Management, Cornell University, Ithaca, NY 14853-7801, USA

Received 6 December 2010; Revised 18 March 2011; Accepted 13 April 2011

Academic Editor: Jean Paul Chavas

Copyright © 2011 William Lesser and Deepthi Elizabeth Kolady. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

US wheat varieties are examined for differential disease resistance between public and private varieties, an issue for critics of plant intellectual property. Analysis using disease resistance rankings of wheat varieties from Kansas and Texas indicates that private varieties are as or more resistant. This finding was further confirmed with two years of Texas data. The results thus reject the criticism that private breeding activities produce varieties more susceptible to disease than public varieties. However, private varieties' resistance is incorporated from public offerings, so that productive private wheat breeding is partly derivative.

1. Introduction

Among the issues in the ongoing debate over the application of Intellectual Property Rights (IPR) to plants is the question of the productivity of private and/or protected varieties. In a recent paper, Kolady and Lesser refuted the charge of cosmetic breeding (breeding which adds no traits of agronomic value) by showing that private wheat varieties in Washington state are more productive than public ones, in contradistinction to some prior analysis. That study focused on wheat varieties because of the significant involvement of both the public and private sectors in developing varieties, thus allowing for a meaningful comparison. The significant involvement of the public sector in wheat breeding also means there is a substantial comparative variety testing program from which performance data are available. The analysis effectively focused on Plant Variety Protection (PVP), as to date relatively few wheat varieties have been patented.

The present study also focuses on wheat varieties for the same reasons as the earlier analysis, the involvement of both the public and private sectors in plant breeding as well as the availability of comparative trials. The issues examined are general as to crop, but due to data availability they are evaluated here only for wheat. What this study adds is an evaluation of another dimension of variety performance in the form of yield stability. In particular, we examine disease and pest resistance, a key aspect of stabilizing yields.

The often repeated example of crop losses based on poor resistance is that of the 1970 southern corn leaf blight. Losses were estimated at 15 percent of the US crop, or somewhere between $500 million and $1 billion, and would have been far larger if not limited to varieties grown in the southeast. Wheat for its part is susceptible to a variety of rusts and mildews as well as pest attacks (Hessian fly) which can significantly reduce yields in hard-hit areas. As Duvick wrote regarding wheat at a time when the corn blight experience was still fresh, “Wheats have a history of epidemics of, for example, leaf and stem rusts, in the United States as well as elsewhere.
Cycles of epidemics, development and release of varieties with specific single gene resistance, increase of new races of the pathogen, and then new epidemics have been well documented.” In many cases, genetic resistance is the only effective control, or at least the only economically viable control.

Yield is clearly an important variety dimension for any farmer, and hence for breeders, and US wheat breeders have been responding with an overall average annual increase in bushels per acre of one percent over the 20 years beginning in the mid-1970s. Improved genetic materials are widely credited with contributing about half the observed yield increases; the remaining half is attributable to enhanced inputs, including management. Yield stability is important as well, but can be subordinate to yield as a criterion, particularly in the case of diseases which are rare and unpredictable even if costly when they do occur. For breeders, prioritizing multiple disease resistance in variety creation complicates and delays new releases compared to a single criterion like yield. (“Fundamental tradeoffs in breeding decisions typically involve yields, disease resistance, and quality. Gains in one area often involve losses in another.” [7, page 82]). Therefore, the degree to which wheat breeders emphasize disease resistance is a legitimate question, and particularly whether the focus differs between public and private sector breeders.

On the one hand, private sector seed companies are rewarded in proportion to seed sales. That creates a direct incentive to release promising varieties capable of increasing market share. If private sector breeders must decide between emphasizing higher yielding and disease resistant varieties, as to a degree they always must, the short-term sales incentive of promoting higher yielding varieties may take precedence over more disease resistant ones. As Frey [8, page 6] notes, “Private sector [breeding] goals, however, are short-term and profit-motivated so its contribution to genepool enrichment generally will be dedicated to individual genes with IPR protection potential. General genepool enrichment for multigenic traits is high-risk and long-term in scope, and therefore, must be done by the public sector.” And the Office of Technology Assessment [9, page 72] observed, “Insect resistance has not been a significant component of commercial breeding programs, and none of the new commercial wheats has resistance to Hessian fly.” Of course, the private sector also has an incentive to promote disease resistance, if only to avoid a potentially reputation-damaging major disease-based crop loss. But at the margin, the private sector can have a greater incentive to emphasize yield over disease resistance compared to the public sector. Conversely, public sector breeders are not rewarded directly according to the adoption of their varieties. Thus it can be argued they have greater flexibility to hold back the release of a variety with disease resistance levels considered inadequate.

We are therefore testing here the hypothesis that the disease resistance levels of public and private varieties are equal. With equality as the null hypothesis, a two-tailed test is chosen because, while the specification of a difference in incentives is clear, the direction of the difference is less so. This analysis is therefore an empirical one using reported rankings of disease resistance levels for individual diseases.
Since the hypothesis is focused on the variety owner, the protection status of varieties is not considered. As a point of fact, most private varieties are protected, while public varieties are both protected and unprotected. It should be emphasized that the objective of this paper is a simple and narrow one, yet important. We are examining only whether private commercial wheat varieties are on average as disease resistant as those developed by the public sector. We are not considering the initial source of the disease resistance, that is, whether the private sector varieties utilize disease resistance previously developed by and delivered in public sector varieties. (e.g., “stripe rust resistance in the [Kansas] region leans heavily on Jagger, a public variety. Tan spot resistance is largely derived from Jagger and Karl 92, both public varieties.” Personal communication, Dr. Allan Fritz, Professor of Wheat Breeding, Kansas State University. Note that the Plant Variety Protection system allows the use of protected varieties as breeding materials.) The source of disease resistance is a potentially important issue when evaluating the overall contribution of public sector breeding and, indeed, when contemplating the sustainability of productive private sector wheat breeding. The authors believe that public sector crop breeding continues to provide an important source of ingenuity and competition, especially during a period of privatization. However, our objective here is much narrower, as it is solely focused on the empirical question of the comparability of disease resistance rankings between public and private sector varieties.

2. Literature Review

The Office of Technology Assessment did note that back in the 1970s Hessian-fly-resistant wheats in Kansas and Nebraska declined to 42 percent of acreage from 66 percent over a four-year period. But generally the literature treats the issue under study here as a component of production risk, particularly as regards yield stability. Disease resistance is a component of yield stability, but of course only one of multiple factors. The issue in the literature is the role of yield risk reduction in farmer variety selection decisions, here limited to studies in developed countries. The focus does not imply that farmers in developing countries do not face very similar variety selection decisions as their developed country colleagues. However, exogenous factors such as seed availability, knowledge of production traits, and the availability of cash or credit can affect variety selection choices between the two groups of farmers.

The seminal study of variety adoption was done by Griliches, who studied the spread of hybrid corn adoption in the USA following its widespread availability in the 1930s. His analysis emphasized the importance of profitability in adoption decisions, but noted the numerous small choice factors faced by individual farmers. Barkley and Porter used Kansas wheat production data to analyze farmers’ variety selection criteria. They found that disease resistance (using the same 1–9 ranking scale employed here), particularly to rusts and mosaics, was a significant explainer of variety selection choices, but much less so than relative yields (relative to the production district means used as the unit of analysis). In their simulation, a one-point improvement (11%) in leaf rust resistance increases statewide planted acreage in wheat by 0.33 percent.
By way of comparison, a 10 percent increase in relative yield leads to a 0.93 percent increase in planted acres in one year, so in a very rough sense relative yield is three times more important in variety selection than is resistance to a single major disease. The Barkley and Porter study, however, is not constructed to answer the question posed here, the relative disease resistance of public and private varieties. That is because the analysis uses the share of planted area as the dependent variable, while relative yield, disease rankings, and a public/private identifier are all explanatory variables. Moreover, another variable was yield stability (significant at the 10 percent level), which combines both weather and variety characteristics like resistance. Notably, the study was conducted at a time when public varieties accounted for 85 percent of acres planted in Kansas.

Duvick and Cassman similarly determined that both yield and yield stability drive corn hybrid variety selection in the central USA, and Musser and Patrick [12, page 544], in a survey article, found that just over one third of cotton and corn farmers are unwilling to give up any (emphasis in original) of their current average yield to stabilize year-to-year yields. In a more recent analysis of Kansas wheat variety selection decisions, Barkley and Peterson applied portfolio analysis to determine if mixes of wheat seed varieties in Kansas would increase profits. The use of systematically selected seed mixes or “blends” there increased from nothing as recently as 1997 to 10 percent in 2006, reaching a high point of 15.2 percent in 2004. Their analysis indicates that state average yields could have increased by 2.87 bushels an acre (about 7%) over a 13-year period. However, the varieties to include in an optimal blend must be selected using data and statistical information (such as provided by variety trials) rather than the typical choices based on “variety descriptions, intuition, and average yields…”.

Dahl et al. compared variety adoption decisions between Canadian and USA wheat farmers. Using results from the Tobit model (coefficients only slightly smaller than for the linear version, although for the US stem rust was not a significant selection criterion), they found that susceptibility to leaf rust in the US and to stem rust in Saskatchewan (the only one of the three studied provinces for which a disease resistance variable was used) led to reductions in the share of acreage planted to a variety. The marginal effects (1999, Table 3) are far stronger in Canada, where a one-point decline in stem rust resistance (on a three-point scale) reduced the acreage planted share by 18 percent, but only 0.88 percent in North Dakota. Conversely, USA farmers weight relative yield 10 times more heavily than do Canadian farmers. Finally, USA farmers prefer public varieties to private ones by three to one; typically private varieties had lower end-use quality rankings. That variable is not included in the Canadian regressions. These results suggest that while disease resistance is important to US farmers, it is far less so, and relative yield much more so, than for the Canadian provinces analyzed. Care though must be used in evaluating these results, for there are several notable differences in law and regulations between the countries. Canada imposes a “visually distinguishable” grain quality standard, absent in the US, which limits variety availability there; plant variety protection standards there are also higher, with similar results.
Conversely, in the US deficiency payments are a large issue, with a bias toward the yields of lower quality wheats, while a scab outbreak there late in the data analysis period likely made those farmers more cognizant of the importance of disease resistance. While the recent domestic literature on wheat varietal selection criteria is limited, the available studies do confirm that farmers place most selection attention on yield. Disease resistance is a selection criterion as well, but typically only one of several components of yield stability. That factor, combined with the limited statistical data on yields and yield variability used by most farmers, means that breeders have some latitude to de-emphasize disease resistance in a breeding program in favor of average yields. This study evaluates not the absolute level of attention to disease resistance as a breeding characteristic but rather any relative differences between public and private sector varieties.

3. Methodology and Data

3.1. Disease Resistance Ranks

The validity of this analysis depends heavily on the published disease resistance rankings, so it is important to have some understanding of how those rankings are developed and reported. The rankings used here are all on a 1–9 scale (9 being the lowest resistance). Rankings reported on a 1–5 scale are interpolated to the 1–9 scale, something which is widely done by pathologists even if it lacks a strict systematic justification (Personal communication, Professor Mark Sorrells, small grains breeder, Cornell University). Disease rankings are initially set at the variety test field level in comparison with a reference variety of known susceptibility. Researchers sample the field to count the percentage of affected plants and then assign a severity value compared to the reference variety. The two numbers are then multiplied together as a basis for the resistance rankings, which are then assigned. There are several aspects of this approach which are relevant. First, the rankings can and do vary yearly as a result of the presence and virulence of a disease, which is affected by weather and other exogenous factors. However, because the rankings are relative to a reference variety which is also affected by disease, large annual variations in incidence typically translate into only small changes in the rankings. That is, a particular variety may in an absolute sense be moderately susceptible to a disease, yet be relatively the best available; if disease x is present, then the best choice is variety y even if it is not as resistant to x as a wheat farmer might hope for. Second, while the data collection process is systematic, the assigning of rankings is inherently subjective. Third, a second subjective component is injected when the rankings from multiple trial locations are combined into a single statewide value. A statewide value of course may over- or understate resistance levels in any particular location. While these subjective aspects are potentially problematic, there is no reason to believe that ownership status (public or private) affects the outcome, so no systematic bias is expected. The potential local versus statewide value bias can be partially assessed by doing the analysis on substate areas where diseases are reported to be more or less problematic (see below).
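To make the incidence-times-severity description concrete, the following is a minimal Python sketch. The source does not specify how the incidence-severity product is mapped onto the 1–9 scale, so the scaling against the reference variety, the rounding rule, and all numbers below are illustrative assumptions, not the trial pathologists' actual procedure.

```python
# Illustrative sketch of turning field observations into a relative 1-9
# disease ranking, per the incidence x severity description above.

def disease_index(pct_affected: float, severity: float) -> float:
    """Incidence (percent of plants affected) times severity (vs. reference)."""
    return pct_affected * severity

def interpolate_5_to_9(rank5: float) -> float:
    """Linearly map a 1-5 ranking onto the 1-9 scale (1 -> 1, 5 -> 9)."""
    return 1 + (rank5 - 1) * 2

# Hypothetical field counts for a candidate variety and the reference
# variety of known susceptibility.
candidate = disease_index(pct_affected=35.0, severity=0.6)   # 21.0
reference = disease_index(pct_affected=50.0, severity=1.0)   # 50.0

# One plausible (assumed) way to place the candidate on the 1-9 scale,
# where 9 = least resistant: scale relative to the reference and round.
relative = candidate / reference                              # 0.42
rank = max(1, min(9, round(1 + relative * 8)))                # -> 4

assert interpolate_5_to_9(5) == 9
print(rank)
```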
3.2. Data Sources

Of the several state reports available, the one from Kansas is the most detailed. Resistance values for six individual diseases are provided over an eight-year period, along with a description of the prevalent diseases for each year analyzed. The initial part of our analysis utilizes the Kansas data for the six most common diseases. The analysis includes eight years of data (2001–2008) to capture annual variations. Variations in resistance are not great compared to weather-affected yield studies, but they do occur. The second part of the analysis uses two years of data on wheat resistance rankings from Texas, for which only a smaller number of diseases (three) are reported. Only winter wheats (both hard and soft reds and a few whites) are reported in these two states' trials. Spring and durum wheats, which grow in different climatic areas with potentially different disease susceptibilities, are therefore not included in our analysis and could lead to different conclusions. A final data need is to categorize the varieties, identified only by variety name, as publicly or privately owned. The GRIN (Germplasm Resources Information Network) database contains accessions from public sector breeders (and some private as well) searchable by cultivar name for identifying the variety owner (search available at http://www.ars-grin.gov/npgs/acc/acc_queries.html. Last visited 3/16/11). Protected varieties, both public and private, can be searched, again by cultivar name, through the Plant Variety Protection Office database for identifying ownership (search available at http://www.ars-grin.gov/cgi-bin/npgs/html/pvplist.pl. Last visited 3/16/11). If variety ownership could not be established, the variety was excluded from the analysis (about 8 percent of the data file). We use a two-tailed t-test to examine the equality of means of disease resistance rankings between private commercial and public sector varieties. The hypothesis we are testing is whether on average private varieties are as disease resistant as those from the public sector. Since a priori we are not sure whether private sector varieties are more or less resistant than public sector varieties, we test the equality of group means. We do the analysis using both statewide data (a pooled data set over various years and regions) and region-specific data within a state, wherever possible. Group mean comparison and hypothesis testing are done for each disease separately. (We used STATA for the analysis.)
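The same group-mean comparison is easy to reproduce outside STATA. Below is a minimal Python sketch using SciPy; the ranking values are fabricated for illustration, and the Welch (unequal-variance) variant is an assumption, since the text does not say whether equal variances were imposed.

```python
# Minimal Python analogue of the two-tailed group-mean comparison the
# authors ran in STATA. Rankings are hypothetical; 1 = most resistant,
# 9 = least resistant.
from scipy import stats

public_ranks  = [3, 4, 5, 4, 6, 5, 4, 3, 5, 4]   # hypothetical public varieties
private_ranks = [3, 3, 4, 4, 5, 3, 4, 2, 4, 3]   # hypothetical private varieties

# Welch's t-test (no equal-variance assumption); two-sided by default.
t_stat, p_value = stats.ttest_ind(public_ranks, private_ranks, equal_var=False)

# Reject the null of equal mean resistance at the 5 percent level?
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, reject H0: {p_value < 0.05}")
```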
4. Results

4.1. Analysis of Resistance Data from Kansas

Results from the comparison of resistance rankings of public and private varieties from Kansas are presented in Table 1. In this analysis we focus on six diseases based on their economic importance to farmers in the state. The diseases we selected are barley yellow dwarf (BYD), wheat streak mosaic (WSM), soil borne mosaic (SBM), leaf rust, stripe rust, and powdery mildew. The varieties included in Table 1 represent 81.7 percent of planted acreage in Kansas and all 10 of the leading 10 varieties. Except for stripe rust and powdery mildew, the results of the statewide analysis presented in Table 1 suggest that the disease resistance rankings are significantly different between public and private varieties, leading to rejection of the null hypothesis. Somewhat surprisingly to us, the results show that private varieties are more resistant in most cases. The statewide analysis, however, risks a bias against public or private varieties if either group is targeted to a substate area where specific diseases are more or less virulent. In their recommendation to Kansas wheat farmers, DeWolf et al. note that, "Diseases and pests differ considerably in the magnitude of yield loss that they cause and in their prevalence across the state. Therefore, it is important to consider regionally important diseases and pests when selecting wheat varieties." In order to address this potential local versus statewide value bias, we conducted a substate analysis focused on northeast Kansas and southeast Kansas. Note, however, that the disease ratings are presented for the entire state and hence do not necessarily reflect the ratings for any particular subarea. For that reason, we direct readers to the entire disease rating report available from Kansas State University. For the five years for which the data are available, the northeast was relatively disease-free compared to the southeast, where diseases such as BYD and leaf rust were reported to be more problematic in certain years, especially 2006, 2005, and 2004 (Kansas Wheat Performance Tests reports). Results presented in Tables 2 and 3 show a similar trend (private varieties are more resistant, albeit not statistically significantly so) as in Table 1, implying that there is no local versus statewide bias in the resistance rankings.

4.2. Analysis of Resistance Data from Texas

Results from the group mean comparison of disease resistance rankings of public and private varieties from Texas are reported in Table 4. We could access only two years of data (2006-2007) and only for three diseases: powdery mildew (PM), stripe rust, and leaf rust (Texas A&M University). In the case of stripe rust, private varieties are more resistant than public ones at the one percent significance level. In the cases of PM and leaf rust, public varieties are slightly more resistant than private varieties, albeit not statistically significantly so. Results from the analysis using only two years of data from Texas therefore suggest a pattern similar to that from Kansas; that is, when there is a statistically significant difference between the disease resistance of private and public varieties, private varieties are more resistant.

5. Conclusions

Our analysis indicates that the disease resistance of public and private wheat varieties is equal, if indeed private varieties are not slightly more resistant, as measured using assigned relative resistance rankings. The analysis is based largely on data from a single state (Kansas) over eight years, so the standard cautionary note of the need for additional states and years of data, when those data become available, applies here. The area planted to the included varieties represents more than 80 percent of wheat acres and all of the leading 10 varieties, so the representativeness of the analysis is good. Using the available data, however, an additional criticism of private breeding activities, that their varieties are more susceptible to disease compared to public varieties, is found to have no statistical basis. We are unable to provide insights into the sustainability of this conclusion beyond a period when public breeders provide much of the resistant germplasm which is then incorporated into private varieties. A larger private breeding investment would seem to be called for, which may or may not be forthcoming given the profitability of private wheat seed production. In general, though, it is the public sector, not the private, which invests in the lengthy and costly background or development breeding process to transfer resistance from germplasm collections to commercial varieties.
Our empirical conclusions are limited to the relatively dry states of Kansas and Texas, and to the winter wheats (both hard and soft reds) grown there. Results for wetter regions, where diseases may be more prevalent, as well as for spring and durum wheats, which may have different disease susceptibilities, could be different. However, as the results show the private sector to be at least as effective as the public sector in incorporating disease resistance into commercial varieties when both use the same sources of disease resistance, there is no inherent reason why the situation would differ for other regions or wheat types.

References

- D. E. Kolady and W. Lesser, "But are they meritorious? Genetic productivity gains under plant intellectual property rights," Journal of Agricultural Economics, vol. 60, no. 1, pp. 62–79, 2009.
- J. Walsh, "Genetic vulnerability down on the farm (corn)," Science, vol. 214, no. 4517, pp. 161–164, 1981.
- D. N. Duvick, "Major United States crops in 1976," Annals of the New York Academy of Sciences, vol. 287, pp. 86–96, 1977.
- E. D. DeWolf and P. E. Sloderbeck, "Wheat Variety Disease and Insect Ratings 2008," Kansas State University Agricultural Experiment Station, MF-991, 2008, http://www.oznet.ksu.edu/library/plant2/mf991.pdf.
- J. M. Alston and R. J. Venner, "The effects of the US Plant Variety Protection Act on wheat genetic improvement," Research Policy, vol. 31, no. 4, pp. 527–542, 2002.
- D. N. Duvick and K. G. Cassman, "Post-green revolution trends in yield potential of temperate maize in the north-central United States," Crop Science, vol. 39, no. 6, pp. 1622–1630, 1999.
- B. L. Dahl, W. W. Wilson, and D. D. Johnson, "Valuing new varieties: trade-offs between growers and end-users in wheat," Review of Agricultural Economics, vol. 26, no. 1, pp. 82–96, 2004.
- K. J. Frey, "National plan for genepool enrichment of U.S. crops," National Plant Breeding Study-III, Special Report 101, Iowa State University, Ames, Iowa, USA, 1998.
- U.S. Congress, Office of Technology Assessment, Pest Management Strategies in Crop Protection, Vol. I, 1979, http://www.fas.org/ota/reports/7912.pdf.
- Z. Griliches, "Hybrid corn: an exploration of the economics of technological change," in Technology, Education and Productivity: Early Papers with Notes to Subsequent Literature, pp. 27–52, Basil Blackwell, New York, NY, USA, 1988.
- A. P. Barkley and L. L. Porter, "The determinants of wheat variety selection in Kansas, 1974 to 1993," American Journal of Agricultural Economics, vol. 78, no. 1, pp. 202–211, 1996.
- W. N. Musser and G. F. Patrick, "How much does risk really matter to farmers?" in A Comprehensive Assessment of the Role of Risk in Agriculture, R. E. Just and R. D. Pope, Eds., chapter 24, pp. 537–556, Kluwer Academic, Dordrecht, The Netherlands, 2002.
- A. Barkley and H. H. Peterson, "Wheat variety selection: an application of portfolio theory to improve returns," in Proceedings of the NCCC-134 Conference on Applied Commodity Price Analysis, Forecasting, and Market Risk Management ('08), St. Louis, Mo, USA, 2008.
- B. L. Dahl and W. W. Wilson, "Factors affecting spring wheat variety choices: comparisons between Canada and the United States," Canadian Journal of Agricultural Economics, vol. 47, no. 3, pp. 305–320, 1999.
- W. Lesser, "Canadian Seeds Act: do they mimic plant breeders' rights legislation?" Canadian Journal of Agricultural Economics, vol. 36, pp. 519–529, 1988.
- L. Cadle-Davidson, M. E. Sorrells, S. M. Gray, and G. C. Bergstrom, "Identification of small grains genotypes resistant to Wheat spindle streak mosaic virus," Plant Disease, vol. 90, no. 8, pp. 1045–1050, 2006.
- Kansas State University, "Kansas Performance Tests with Winter Wheat Varieties," Reports of Progress 930, 947, 967, 982, 999 and 1018, Manhattan, 2004–2008, http://www.agronomy.ksu.edu/extension/p.aspx?tabid=92.
- J. A. Appel, E. D. DeWolf, W. W. Backus, R. L. Bowden, and T. Todd, "Kansas Cooperative Plant Disease Survey Report: Preliminary 2008 Kansas Wheat Disease Loss Estimates," 2008, http://www.ksda.gov/includes/document_center/plant_protection/Plant_Disease_Reports/2009KSWheatDiseaseLossEstimates.pdf.
- Kansas Department of Agriculture, Division of Statistics, "Wheat Variety," 2008, http://www.nass.usda.gov/Statistics_by_State/Kansas/Publications/Crops/Whtvar/whtvar08.pdf.
https://www.hindawi.com/journals/ecri/2011/575192/
We back entrepreneurs seeking to transform the country through technology, partnering with founders at the first venture capital round (seed or Series A) and supporting them throughout their growth. We are a partnership managed and funded by entrepreneurs and operators, seeking to turbo-charge the Brazilian startup ecosystem. We aim to partner with founders early in their journey, typically as the first source of institutional capital. We aim to eliminate the friction commonly found in the startup investment process. We respect founders’ time by being agile, respectful, and transparent in every interaction.
https://canary.com.br/
Browse through a list of USGS environmental health news and budget items.

Flood Redistributes Mercury in Grand Canyon Aquatic Food Webs
Scientists coupled the concepts of energy flow through food webs with measurements of mercury in organic matter and animals to estimate mercury fluxes and fate during an experimental flood in the Colorado River. The flood redistributed mercury in simple,...

Bioaccumulation of Mercury in Fish Varied by Species and Location in the Chesapeake Bay Watershed—Summary of Existing Data and a Roadmap for Integrated Monitoring
Fish mercury data from State monitoring programs and research studies within the Chesapeake Bay were compiled and summarized to provide a comprehensive overview of the variation in fish mercury concentrations among species and habitats within the watershed...

Review of Cyanobacterial Neurotoxins—Information for Prioritizing Future Science Directions
The current state of knowledge on the modes of action, production, fate, and occurrence of the freshwater cyanobacterial neurotoxins, anatoxin-a and saxitoxin, was reviewed and synthesized to identify gaps and critical research needs to better...

Conceptual Model Developed to Understand Contaminant Pathways between Aquatic and Terrestrial Ecosystems
A conceptual model, based on contaminant properties and ecotoxicological principles, was developed to understand the transfer of contaminants from aquatic to terrestrial ecosystems and the effects of various classes of contaminants on terrestrial...

Identifying Potential Contaminant Exposure to California Condors in the Pacific Northwest
Potential reintroduction of the endangered California Condor to parts of its historic range in the Pacific Northwest would benefit from information on possible threats that could challenge recovery efforts. Exposure to environmental contaminants is a key limiting factor for condor recovery in its southern range.

In Orlando, USGS Science on the Health of the Environment is on Display
Studies on the aquatic food web, tree swallows, and the spread of contaminants take center stage at SETAC 2016.

When the Whole is Less than the Sum of Its Parts
Environmental ratios of cadmium and zinc are less toxic to aquatic insects than expected.

Comprehensive Study Finds Widespread Mercury Contamination Across Western North America
Mercury contamination is widespread, at various levels, across western North America in air, soil, sediment, plants, fish and wildlife.

EarthWord – Morbidity
When you're not dead yet, but aren't feeling well either, there's an EarthWord for that...

Evidence of Unconventional Oil and Gas Wastewater Found in Surface Waters near Underground Injection Site
These are the first published studies to demonstrate water-quality impacts to a surface stream due to activities at an unconventional oil and gas wastewater deep well injection disposal site.

Despite Long-Lasting Pollutants, Ospreys Thrive in US' Largest Estuary
The world's largest breeding population of ospreys is coping well with the long-lasting residues of toxic chemicals that were banned decades ago but remain in the Chesapeake Bay food chain at varying levels, such as the pesticide DDT and insulating chemicals known as PCBs.

EarthWord – Medical Geology
Medical Geology is an earth science specialty that concerns how geologic materials and earth processes affect human health.
https://www.usgs.gov/ecosystems/contaminant-biology/news
Are drones disturbing marine mammals? Marine researchers have made sure that their research drones aren't disturbing their research subjects, a report in Frontiers in Marine Science shows. And they're hoping that others will follow their example to help protect wildlife in the future. We've all seen the videos - drones and wildlife don't always get along. Unmanned aerial vehicles (UAVs) offer unparalleled scientific footage and insight, but how can wildlife researchers be sure t...
https://www.electronicspecifier.com/companies/aarhus-university
China launched the world's first AI-operated 'mother ship,' an unmanned carrier capable of launching dozens of drones

- China launched a crewless ship capable of carrying dozens of drones.
- The ship, named Zhu Hai Yun, uses an artificial intelligence system to navigate autonomously.

China has launched the world's first crewless drone carrier that uses artificial intelligence to navigate autonomously in open water. Beijing has officially described it as a maritime research tool, but some experts have said the ship has the potential to be used as a military vessel. The autonomous ship, the Zhu Hai Yun, is around 290 feet long, 45 feet wide, and 20 feet deep and can carry dozens of air, sea, and submersible drones equipped with different observation instruments, according to the shipbuilder, CSSC Huangpu Wenchong Shipping Co. It describes the vessel as "epoch making" and the "world's first intelligent unmanned system mother ship."

"The most immediate benefit to China is likely data collection," Matthew Funaiole, senior fellow of the China Power Project at the Center for Strategic and International Studies, told Insider. "From a purely science standpoint, which is the angle China is promoting, we could see Chinese drones (both surface and subsurface, and launched from the Zhu Hai Yun) contributing to disaster mitigation, environmental monitoring, etc."

However, the drone mothership could also be used by China's military to gather intelligence in the contested South China Sea, over which several countries have made competing territorial claims. In recent years, China has made increasingly assertive claims of sovereignty over the sea and has been building up its military presence.

"When dealing with China, we rarely have perfect insight into their intentions, but as we have seen with its activities in the South China Sea, scientific ventures can be a precursor or otherwise support military objectives," Funaiole said. "Technology, especially information collection systems, often have dual use applications. Data collected by China from autonomous systems could aid with surveillance, domain awareness, help PLA [People's Liberation Army] submarines navigate, enhance China's ASW [anti-submarine warfare] capabilities, etc."

The ship was first unveiled in May but is expected to be delivered by the end of 2022 after completing sea trials, according to the South China Morning Post.

Unmanned platforms could be the "future of warfare"

The vessel uses the world's first AI system of its kind, called the Intelligent Mobile Ocean Stereo Observing System, developed by the Southern Marine Science and Engineering Guangdong Laboratory, according to the South China Morning Post. The ship can be controlled remotely and can travel at a maximum speed of 18 knots, or around 20 miles per hour, according to the shipbuilder. Chen Dake, the director of the laboratory, told the state-run Science and Technology Daily in 2021 that the ship is a new "marine species" that will revolutionize ocean observation. China is already the world's biggest shipbuilder and has ambitions to become a "maritime great power". Although this vessel's capabilities and uses remain to be seen, militaries worldwide have increasingly been focusing on developing drones and unmanned vehicles. Funaiole noted that China has invested considerable resources into various unmanned platforms, such as drones and autonomous vehicles, to strengthen the position of its navy.
"This will be part of the future of warfare," he said. - A 29-year-old woman found a mark on her head and was diagnosed with a fungal infection. It turned out to be invasive skin cancer.
https://www.businessinsider.in/international/news/china-launched-the-worlds-first-ai-operated-aircraft-carrier-an-unmanned-vessel-capable-of-launching-dozens-of-drones/articleshow/92146830.cms
Transit riders experience challenges in Bismarck

Updated 1:30 pm, Sunday, April 15, 2018

BISMARCK, N.D. (AP) — Efforts to stabilize transit in the Bismarck area have resulted in reduced service that has left some residents with disabilities without a ride. Some riders have been dropped from routes because they live outside city boundaries, while others are experiencing long wait times. Some transit users argue that the issues have been ongoing. The Bis-Man Transit Board and Bismarck City Commission have proposed measures to resolve paratransit issues in the coming months, but some are skeptical of real change, the Bismarck Tribune reported. "It's been very — for lack of a better term — discriminating towards people with disabilities, because it really puts huge barriers on them to work," said Darcy Severson, who runs the community-based vocational department for Pride Inc. Sheryl Stradinger is the mother of two daughters with disabilities. She said she's doing her best to transport her daughter, but that it's a challenge. "Transportation is something that all of us take for granted except for (people with disabilities), they can't," said Stradinger, who said she wasn't notified of losing service until she tried to book a medical appointment for her daughter. Steve Heydt is the president of the Bis-Man Transit Board. He said the board is continuing to look for solutions for the lost service and has formed a committee to help. The North Dakota Protection and Advocacy Project is an independent state agency that advocates for the rights of people with disabilities. It has been receiving similar concerns about paratransit, according to Pam Mack, director of program services.
(This article was first published in Finnish and with different photos in Suomen Luonto magazine 3/2015)

Plastic debris is already a global problem. A large part of it ends up in the oceans, where it disintegrates into micro particles in the ecosystems, causing disaster. Until last summer a young, 15.4-meter-long sei whale sieved its food, small crustaceans, from the open waters of the Atlantic. It also accidentally swallowed a fragment of a DVD case, which tore its stomach. The whale swam to die in Chesapeake Bay on the east coast of the United States and could no longer be rescued. This story is only one of thousands. It attracted a lot of attention because the species is endangered and the whale came to shallow waters in a populated area, which was unusual behavior for it. The seas hide part of the rubbish in their secret sinks. Part of it ends up inside birds, fish, turtles, seals and other sea mammals. In addition to the ecological problems, marine debris hampers fishing, shipping, beach residents and tourism. In spite of developments in waste management, the seas of the world continue to be the largest landfill on Earth. Marine researchers and environmentalists are vigorously trying to find ways to protect the richness of life in the oceans.

Extreme litter picking in the deep seas

English marine and molecular ecologist Lucy Woodall videotaped the bottoms of the deep seas during research trips of the London Natural History Museum. She was primarily researching the fauna of seamounts, but found other things as well:

– In the Atlantic we saw everything from a 17th or 18th century clay pot to wine bottles, modern fishing gear and plastic rubbish. In the Indian Ocean we found mostly fishing gear, and among other things also cast-off engine parts.

All ten research areas lay over 500 meters deep, and most of them more than 1,000 kilometers offshore.

– Marine debris exists everywhere from the poles to the equator, from the shores to the open seas and from the surface to the bottom sediments, Woodall says.

The amount of rubbish ending up in the seas is estimated to be around eight million tons a year. Only long-lived materials are counted as marine debris, so the amount of rubbish in the seas grows larger year by year. A small part of the rubbish is visible to the eye; the vast majority is minuscule chaff, microplastics. The newest research by Woodall's group demonstrated that both accumulate in the seabed. The bottom sediments of different oceans contain huge amounts of especially fiber-shaped microplastics.

Ghost nets may keep fishing for years

Loosely drifting "ghost nets" and other escaped fishing gear might continue catching fish for years. The trash carried by sea currents may also spread alien species to new habitats, where they take space from native plants and animals. Littering might be the last nail in the coffin of the most endangered marine species, states a report by the international Convention on Biological Diversity (CBD). The report is especially worried about the critically endangered Hawaiian monk seal, the loggerhead turtle of the warm seas, the rare northern fur seal living in the northern Pacific Ocean and the white-chinned petrel of the southern seas. What can researchers, consumers, organizations or governments do to stop the littering of the oceans? There is no definitive global data on the sources and amounts of sea debris, but regional studies are being made in many countries.
Surveying the quantity and quality of marine debris

Ecologist Denise Hardesty from Australia's national science agency (CSIRO) managed the world's largest systematic marine debris survey to date. The three-year project, financed by Shell, surveyed the beaches of the whole continent. Transects, laboratory analyses and inquiries to local officials were done every hundred kilometers. A large waste management budget did not necessarily mean a low amount of beach rubbish. Investing in campaigns against littering, in recycling and in the waste management of the beaches reduced the rubbish clearly more than investing in waste management facilities.

– The problem has to be solved at the sources, not where the rubbish accumulates in the ocean and on beaches. Working together with industry partners, governments and private consumers on whole-of-life approaches to packaging would be a big step towards the solution, states Hardesty.

Alleviation with cleanups and super hoovers

The rubbish problem can also be alleviated in a traditional way: by cleaning. American marine biologist Nicholas Mallos from the marine conservation organization Ocean Conservancy says that in the coastal cleanups they have organized all over the world during 27 years, over 200 million pieces of rubbish have been collected: cigarette butts, food wrappers, caps and lids, plastic bottles, plastic bags, beverage cans, rope…

Large technical appliances have also been designed for cleaning. Probably the most famous of them is the huge garbage trap originated by Dutch student Boyan Slat. Sea currents would steer the rubbish floating in the waste gyres of the oceans into the device's throat with the help of long booms. The machine would bale the waste, and the bales would be fetched ashore for recovery. The Ocean Cleanup project led by Slat has done pilot studies, replied to the questions of critics and gathered crowdfunding for building a pilot unit. It is most important, however, to prevent the escalation of the problem and to concentrate on the sources of waste: products, waste management and practices in all functions where rubbish gets lost.

Plastic bag bans are spreading

Most attention and restrictions on rubbish-generating products have globally been targeted at plastic bags. And for good reason: according to one estimate, two million of them are used every minute. The plastic bag tax of Ireland reduced the use of plastic bags to a fraction. A great reduction was also achieved in Australia when a payment for plastic bags was imposed. The state of California in the United States has decided to ban throwaway plastic bags in the next few years. The European Union also accepted in November a directive obliging the member states to restrict the use of plastic bags. Perhaps a small act is one beginning in the cleaning of the vast oceans.

Microplastics sneak into food chains

"Marine microplastics are a novel medium for the transport of chemical pollutants in the environment," says marine ecologist Chelsea Rochman of the University of California. Rochman studied the absorption of toxic chemicals into plastic debris in the heavily polluted San Diego Bay and the consequences for the health of fish eating this rubbish.

– We fed microplastic that had been deployed in the sea to medaka fish. More chemicals accumulated in the fish that ate it than in the fish that ate "clean" non-deployed plastics or a non-plastic diet. Liver damage and signs of endocrine disruption also appeared in the fish that had eaten the marine plastic.
– Plastic debris accumulates chemicals from the water column and the sediment. It absorbs e.g. nickel, lead, PAH substances, PCBs and PBDEs (brominated flame retardants). Once in the food chain, these chemicals are transferred from plankton to small fish and from them to the big predatory fish.

Rochman's research team comments on the results radically: classify plastic waste as a hazardous substance to allow existing laws to begin to mitigate the problem!

Research on microplastics has intensified only in the last few years. Even the definition is unofficial: particles less than five millimeters in dimension.

Microplastics research on the rise

The research of microplastics is now so popular that senior researcher Outi Setälä of the Finnish Environment Institute (FEI), Marine Research Centre, says she gets messages about the subject weekly from around the world.

– The environmental consequences of marine debris and microplastics are a rising area of study internationally, agrees Peter Kershaw, leader of the United Nations marine debris working group.

Within the United Nations the subject is being researched and evaluated in co-operation by the Environment Programme (UNEP), the Intergovernmental Oceanographic Commission (IOC-UNESCO), the International Maritime Organization (IMO) and the Food and Agriculture Organization (FAO). Microplastics form when bigger rubbish disintegrates into small particles through the impact of waves, light and time. In addition, minuscule plastic particles come from industrial waste waters and are released from different consumer products like toothpastes, shampoos and skin scrubs. One wash of a fleece garment releases over 1,000 fibers into the water. The great majority of microplastics are minuscule and can only be seen through powerful microscopes. Micro-sized particles are measured in thousandths of millimeters and nano-sized ones in millionths of millimeters.

The FEI and the Helsinki Region Environmental Services (HSY), together with the City of Helsinki Environment Centre, have researched where microplastics come from and how much of them the Baltic Sea contains. In the sea area around Helsinki they were found in all sieved surface water samples and in almost all sediment samples. The FEI is especially trying to determine how microplastics are carried in the lowest stages of the food chain.

– When animal plankton unintentionally eats plastic particles, the particles can move further in the food web, says Outi Setälä. – The potential for microplastics to be carried along the food chain has been demonstrated in laboratory conditions. In open waters microplastics are still quite sparse. How much of the microplastics and the poisons they contain accumulates in nature is as yet a theoretical question.

Prevention is most important

Marine biologist Julia Talvitie is writing her doctoral dissertation at Aalto University on how much of the microplastics can be removed in wastewater treatment plants and with which technology. Until now the removal of microplastics has not been taken into account when developing treatment technology.

– Something can still be done about the rubbish visible to the eye, which accumulates on the shores, but it is impossible to collect back the microscopic particles that have ended up in the water. That's why their entry into the waters should be prevented.

Talvitie wishes for measures to restrict the sources of microplastics, so that the solution of the problem would not rest solely upon consumers.
Fulmars reveal the state of the North Sea

Marine ecologist Jan van Franeker and his IMARES research group have monitored the state of the North Sea for over 30 years with the help of a common and important indicator, the northern fulmar.

– Northern fulmars feed from the surface of the sea, and by mistake they ingest floating sea debris, mostly pieces of plastic. The plastics are slowly ground down in their muscular stomach and accumulate in their body in balance with their intake of debris.

According to van Franeker this makes the northern fulmar a good indicator of the prevalence of plastics in the sea – but the debris hampers the birds' feeding, weakens their health and may finally kill them. The IMARES research group (Institute for Marine Resources & Ecosystem Studies) works in Wageningen, Holland. The group regularly collects and analyzes dead fulmars that have drifted onto the coasts of the North Sea. The newest survey revealed plastics in the stomachs of 95% of the birds, 33 small fragments on average. In relation to body weight, this would mean a hundred times more for humans.

– The amount of rubbish in the North Sea has not changed much in the 2000s. Even though waste policies and the protection of the seas have developed, the increase in shipping and in the use of plastics has cancelled out their effect.

Van Franeker says that the EU-accepted target of a good state of the seas by the year 2020 cannot be reached at this rate.
https://naatti.net/articles/plastic-debris-spoils-the-oceans-2015
Position Summary: The Preschool Paraprofessional Teaching Assistant is responsible for assisting the classroom teacher and instructing students based on Utah's Early Childhood Early Learning Standards, developmentally appropriate curriculum, and the requirements of Guadalupe School's mission, goal, and objectives. Working as a member of the Guadalupe School agency, paraprofessionals follow the mission, goal, and objectives of the agency when interacting with students, parents, staff, and the community.

Qualifications:
- A record free of criminal violations that would prohibit school employment
- Current CDA credential or willingness to obtain a CDA credential within 1 year of employment
- Willingness to learn and stay knowledgeable in current educational practices
- Ability to be sensitive, appropriately accepting, and caring towards children and other adults
- Bilingual in English and Spanish preferred

Essential Functions: The following are typical work responsibilities. A reasonable accommodation may be made to enable a qualified individual with a disability to perform essential functions.

CLASSROOM RESPONSIBILITIES:
- Work closely with lead teachers to plan lessons and provide age-appropriate instruction based on Utah's Early Learning Standards and the needs of individual students
- Conduct classroom activities in whole-group, small-group, or one-on-one settings
- Manage groups of up to 25 students in the classroom, outside on playground structures during recess duty, in the lunchroom, and during transitions between classes, etc.
- Support school-wide events as appropriate
- Communicate student needs, concerns, injuries, or other important information requiring attention to classroom teachers and the center director. Document all incidents as required by program policies and law
- Perform other duties as assigned

OTHER:
- Participate in staff meetings and professional growth opportunities as directed
- Wear work attire appropriate for student interactions and the season of the year
- Sit/stand/navigate stairs and maneuver classroom spaces (e.g. between child-sized chairs) as needed while assisting students
- Lift up to 30 lbs.
- Demonstrate dependability, regular attendance, professionalism, consistent punctuality, and efficient management of work schedule
- Work independently and in a team setting towards a team goal

ABILITIES REQUIRED: The following personal characteristics and skills are important for the successful performance of assigned duties.
https://guadschool.org/job-posting/preschool-paraprofessional-teaching-assistant/
The reorganisation of the Asia-Pacific management structure at Pinsent Masons aims to further boost its energy and infrastructure business by tapping into the regional shift towards sustainable projects. The seven Asia-Pacific appointments in April underlined the firm's focus on the region's infrastructure and energy industries, which are the specialised areas of practice of the new regional leaders. The management team comprises James Morgan-Payler, head of Asia-Pacific; Matthew Croagh, head of Australia; Ian Laing, head of Singapore; Alvin Ho, the representative for Hong Kong; Kanyi Lui, the representative for mainland China; Melanie Grimmitt, global sector head for energy; and Hammad Akhtar, global practice group head for transactional services.

"Our energy business represents approximately 14% of our global revenue, which represents a significant and rapidly growing part of our business," Lui, who has extensive experience in advising energy and infrastructure projects on development and financing, told China Business Law Journal.

Energy co-operation is a cornerstone investment of the Belt and Road Initiative. Pinsent Masons has advised a series of Belt and Road projects sponsored by Chinese state-owned enterprises, such as the world's largest concentrated solar plant in Morocco, the largest oil and gas investment in East Africa, and the Asian Development Bank's phasing out of coal-fired power plants across Southeast Asia. In 2017, with China releasing three official documents to encourage programmes endorsing the 2030 Agenda, which includes the UN sustainable development goals and the Paris Climate Agreement, Belt and Road projects have gradually transitioned to "small-and-beautiful" and sustainability-oriented themes.

"Renewable energy will continue to be at the forefront of energy sector growth across the region, driven by the demand for clean energy contributing to net-zero and ESG targets," said Morgan-Payler. He expected a rise in projects endorsing the clean energy transition, with strong interest in harnessing offshore wind across Southeast Asia.

The newly created role of Asia-Pacific head was one of the restructuring changes in the spotlight as the firm, in its own words, "enters its next phase of strategic growth across the region". China's clean and renewable energy markets are also growing after the central government set a "dual carbon" target of capping CO2 emissions before 2030 and achieving carbon neutrality by 2060. Nevertheless, promoting clean energy increases the difficulty of managing legal issues, and Lui cautioned that the energy business is contending with inflation, economic downturn, geopolitical risk and other factors.

"Projects take longer to put together and receive approval, [although] good projects can be highly competitive, execution can be fraught with risk due to rapidly changing policies. Supply chain issues, competing compliance requirements and price escalation are particular concerns for clients," said Lui.

Morgan-Payler believed priorities in the coming year will be improving client service delivery through a purpose-led professional services business with law at its core, as challenges ranging from the pandemic to legislative, geopolitical and technological developments will remain. The firm also sees continued investment and opportunities in technology, science and industry – fast-moving sectors even during the pandemic.
“The scope for market growth across the Asia-Pacific region provides significant opportunities for the entire project life cycle and most practice areas,” said Morgan-Payler.
https://law.asia/pinsent-masons-restructuring-tap-sustainable-energy-projects/
China's Personal Information Protection Law ("PIPL") came into effect on 1 November 2021. Accompanying the PIPL, the Cyberspace Administration of China ("CAC") also published draft Measures for the Security Assessment of Outbound Data for public consultation. In most cases, multinational companies with operations in China will have some communication going back and forth between China and the overseas headquarters. These companies need to collect and process information from their existing and prospective employees, from the recruiting process to the end of employment. Therefore, it is crucial to study the relevant provisions in the PIPL that affect how employers collect and process employees' personal information. We can foresee a few scenarios where employers should be extra careful:
- Information transmission between China subsidiaries and overseas related companies involves personal information of employees;
- The company's ERP data, including personal information of China employees, are hosted or backed up on an overseas server;
- A third-party service provider is managing China-based employees' insurance and other benefits outside of China;
- The subsidiary will be sold or acquired, and the potential new owner is arranging the transaction through a third party overseas;
- An internal investigation is being carried out, requiring access to the electronic equipment of employees.

In practice, there are certainly more complex cases where a detailed analysis should be conducted. Based on the current development of the data privacy framework, we advise employers to take the following actions:

Explicitly notify employees and obtain their written consent on processing their personal information

It is already common for many employers to obtain "general consent" from employees during the hiring and induction process. The old practice might merely involve a general statement in employees' contracts or the staff handbook. However, such clauses are no longer valid to cover all scenarios. Separate notices should be given to employees when the employer intends to disclose employee information to a third party, transfer it to a location outside China, or process sensitive personal information. In the written consent, employers should explicitly notify employees of specific items (a minimal data-structure sketch follows this list):
- Name and contact information of the data controller;
- The purposes and methods of processing of personal information;
- Categories and retention periods of personal information to be processed; and
- Methods and procedures for employees to exercise their rights enshrined in the PIPL.

Even though the PIPL provides additional grounds to process employee data in certain circumstances without the need to obtain consent, the precise scope of these exceptions is yet to be clarified. Therefore, the best policy is for employers to be prudent in all cases.
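As an illustration of the notice items listed above, they could be modeled as a simple record that an HR system validates before any processing begins. This is a hypothetical sketch: the field names, the example entity, and the validation rule are ours, not anything prescribed by the PIPL or the draft Measures.

```python
# Hypothetical sketch: the PIPL notice items above modeled as a record an
# HR system could validate before processing employee data. Field names
# are illustrative; the law prescribes the content, not this structure.
from dataclasses import dataclass

@dataclass
class PiplConsentNotice:
    controller_name: str            # name of the data controller
    controller_contact: str         # contact information of the controller
    purposes: list[str]             # purposes of processing
    methods: list[str]              # methods of processing
    data_categories: list[str]      # categories of personal information
    retention_period_days: int      # retention period
    rights_procedure: str           # how employees exercise their PIPL rights
    employee_written_consent: bool = False

    def is_complete(self) -> bool:
        """Check that every notice item is present and written consent was obtained."""
        return all([
            self.controller_name, self.controller_contact,
            self.purposes, self.methods, self.data_categories,
            self.retention_period_days > 0, self.rights_procedure,
            self.employee_written_consent,
        ])

notice = PiplConsentNotice(
    controller_name="Example (China) Co., Ltd.",   # hypothetical entity
    controller_contact="dpo@example.cn",           # hypothetical contact
    purposes=["payroll", "benefits administration"],
    methods=["electronic HR system"],
    data_categories=["identity data", "bank details"],
    retention_period_days=365 * 3,
    rights_procedure="Written request to HR; response within 30 days",
    employee_written_consent=True,
)
assert notice.is_complete()
```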
Undertake a security impact assessment before transmitting personal information abroad

According to the draft Measures for the Security Assessment of Outbound Data, before employers provide an employee's personal information overseas, they should first carry out a data export risk self-assessment, focusing on the following matters:
- Legality, appropriateness, and necessity of the purpose, scope, and methods of exporting the data and of the overseas recipient's handling of the data;
- Volume, scope, types, and sensitivity of the exported data, and potential risks to national security, public interests, or the lawful rights and interests of individuals and organizations that might be brought on by exporting the data;
- Management and technical measures, and the capacity of data handlers to prevent risks such as data leaks and destruction;
- Responsibilities and obligations that the overseas recipient has pledged to undertake, as well as their management and technical measures, the capacity for performing those responsibilities and obligations, and whether they can ensure the security of the outbound data transfer;
- Risks of leaks, damage, tampering, and abuse of data after the data is transmitted abroad and further transferred;
- Whether the individuals whose data is transmitted abroad can easily access channels to maintain their rights and interests in personal information protection;
- Whether the agreements signed with the overseas recipient fully specify responsibilities and obligations in protecting data security.

In addition to self-assessments, if the amount of personal information exceeds a certain threshold set by the Measures for the Security Assessment of Outbound Data, a mandatory security impact assessment through the provincial level of the CAC will be triggered.

Sign cross-border data transfer agreements with overseas data recipients

The China subsidiary should sign a cross-border data transfer agreement with each of its overseas data recipients. The agreement should set out the responsibilities and obligations for data security protection, including:
- Purposes and methods of transmitting the data abroad and the scope of the outbound data;
- Purposes and methods of data processing by the overseas recipient;
- Location and duration of overseas storage of the data;
- How to deal with the data after the storage period expires, the purpose agreed upon is completed, or the contract is terminated;
- Restrictive clauses preventing the overseas recipient from re-transferring the data to other organizations or individuals;
- Security measures to be taken in case of any substantial change in the actual control or business scope of the overseas recipient, or any change in the legal environment of the country or region where the overseas recipient is located, which makes it difficult to guarantee data security;
- Liability for breach of the data security protection obligations, and binding and enforceable dispute resolution clauses;
- Clauses about properly carrying out emergency measures in case of data leaks and other risks;
- Clauses about ensuring smooth channels for individuals to safeguard their personal information rights and interests.

Review and update the company's data storage and backup policy in China

HR managers in China should work with the IT department to review the current IT infrastructure on data storage, protection, and backup policy.
Pay special attention to assessing whether the company should improve the China data backup policy, database management system, data masking, and remote access mechanism. For more complex cases, it is recommended to conduct a risk analysis to avoid liability. Employers would need to rethink their policies and implement corrections to align with China’s data privacy framework. Amendments should be planned according to each company’s situation to reduce liability exposure and generate trust from the employees.
https://www.cwhkcpa.com/protecting-china-employees-personal-information-key-points-for-hr-management/
Since the September 11, 2001 attacks, the United States has placed an increased focus upon government and private agencies to engage in surveillance practices in order to combat terrorism. The passing of the USA PATRIOT Act (2001) expanded the surveillance capabilities of law enforcement officials, allowing both federal and state agencies to legally wiretap a range of communication devices. Under the justification of "fighting terrorism," federal and state agencies now have more access to sensitive data on/about a range of persons including subjects of interest. Legal scholars (Bam, 2015), as well as the American Civil Liberties Union (ACLU), have questioned the constitutionality of the advancement of surveillance practices in government agencies, including the role private agencies play in assisting federal agencies in criminal investigations. Even so, research dedicated to how the public understands the expansion of state and federal surveillance capabilities, and connections to private entities, is understudied. Using the Globalization of Personal Data (GPD) survey questionnaire from Surveillance, Privacy, and the Globalization of Personal Information by Elia Zureik, the goal of this research project is to identify how individuals in the United States perceive the transfer of their personal data between government and private agencies. Through non-probability online quota sampling methods (Singleton and Straits, 2005), responses from participants stratified into five different racial strata are analyzed and used to examine the extent to which citizens in the United States are either concerned or unconcerned about surveillance practices used by government (state and federal) and private agencies. In order to examine the impact that levels of knowledge and awareness of current surveillance technology and legislative policies have on citizens' concerns, this research project also seeks to examine important socio-demographic differences between respondents. Ultimately, this research represents an attempt to establish a dialogue for future policy makers discussing how citizens perceive the "dataveillance" capabilities of government and private agencies, and whether current legislation goes far enough to protect citizens from unreasonable government intrusions.

Keywords: datamining; privacy; surveillance
Disciplines: Sociology
Degree Grantor: University of Nevada, Las Vegas
Language: English
Repository Citation: Kaplan, Stephanie E., "Societal Opinion of Government and Private Agencies' Surveillance Capabilities Post 9-11" (2017). UNLV Theses, Dissertations, Professional Papers, and Capstones. 2994.
https://digitalscholarship.unlv.edu/thesesdissertations/2994/
Join Ken Wilber (via video link) and Doshin Roshi in the Universal Hall at Findhorn as they honour the gifts of the transformational communities of today, recognise some of the challenges they are currently facing and explore, from an Integral perspective, what the medicine to heal and evolve could be.
- Honouring the role communities have played in the transformation of the world up until now – the movement from modernity to postmodernity.
- Highlighting the pain and struggles within these organisations – beliefs, values and preferences that contribute to a sense of 'stuckness'.
- Creating curiosity about the medicine needed to support a breakthrough to the next level – what might communities of tomorrow look like once they have embodied this medicine?
- Sharing inspiration and support to expand the potential of each community – deepening the transformational impulse around the world.

Join in as this free event will be streamed live to communities, holistic centres and ecovillages around the world. Click here to sign up to the event via Findhorn Live.

Growing Up and Waking Up

This dialogue follows on from Ken's recent publication The Religion of Tomorrow, which explores "A Vision for the Future of the Great Traditions—More Inclusive, More Comprehensive, More Complete". A single purpose lies at the heart of all the great religious traditions: awakening to the astonishing reality of the true nature of ourselves and the universe. At the same time, this core insight has become obscured, and communities have developed in the last five decades offering alternatives; however, they too face their own blocks. In this live dialogue, Ken Wilber and Doshin Roshi explore the Transformational Communities of Tomorrow. Ken Wilber is one of the most important philosophers in the world today. He describes Doshin as one of the leading spiritual teachers of our time. Both emphasise the importance of state and stage development as part of spiritual awakening – growing up and waking up. Ken and Doshin offer this in the spirit of Bodhichitta for the benefit of all beings. Join us in this groundbreaking dialogue between them, as together they honour the role spiritual, holistic and eco communities have played in the transformation of the world up until now, the challenges of our time and the wisest ways to respond.
https://www.centersnetwork.org/2017/09/12/transformational-communities-of-tomorrow/
Most quantity measurements in the US break down into teaspoons, tablespoons and cups. While it's tempting to skip measuring, too much or too little of an ingredient can ruin a dish, especially in baking. Knowing how many tablespoons are in 1/4 cup makes it easy to convert back and forth.

What Does "Tablespoon" Mean?

Tablespoon refers to a large spoon, like a soup spoon, but with a specific volume measurement equal to 1/2 fluid ounce, 15 mL or 3 teaspoons. You may see tablespoons listed as T, Tb, Tbs, and tbsp in recipes. In contrast, teaspoons will appear in lowercase as tsp or t.

How to Convert Tbsp to Cups

There are 16 tablespoons in one cup, so the conversion formula is Tablespoons = Cups x 16. For reference, here is a tbsp to cup converter chart:

1 tbsp = 1/16 cup
2 tbsp = 1/8 cup
4 tbsp = 1/4 cup
8 tbsp = 1/2 cup
16 tbsp = 1 cup

Note that this is for US measurements and recipes. If you are using a UK recipe, refer to the international section below.

How to Convert Tbsp to Tsp

Tablespoons and teaspoons are used for different purposes. Generally you'll see tablespoons used for larger quantities like cooking oil or soy sauce, while teaspoons are for smaller quantities like spices and seasonings. If you don't have a tablespoon handy, you can use an equivalent number of teaspoons. There are 3 teaspoons in one tablespoon, so the conversion formula is Tablespoon = 3 x teaspoons. Here is a converter chart:

1 tbsp = 3 tsp
2 tbsp = 6 tsp
4 tbsp = 12 tsp

Measuring Tips

A few simple tips will help make measuring a breeze:
- Always level your tablespoons or cups using a knife. Heaping amounts are inaccurate unless specifically called for in the recipe.
- Sift dry ingredients before measuring for the most accurate amounts. This is especially important when doubling or tripling recipes.
- Use Pyrex liquid measuring cups to measure liquid amounts. Don't use coffee mugs or spoons without measurement markings.

Frequently Asked Questions

- How Many Tablespoons are in a Cup: There are 16 tablespoons in a cup.
- How Many Dry Tablespoons are in a Cup: One cup converts to 16 tablespoons for dry goods such as sugar, flour and cocoa.
- How Many Tablespoons are in a 1/4 Cup: There are 4 tablespoons in 1/4 cup.
- How Can I Convert Tablespoons to Teaspoons: One tablespoon contains 3 teaspoons.
- How Many Teaspoons are There in a Cup: There are 48 teaspoons in one cup.
- How Many Oz Are in 1 Cup: There are 8 fluid ounces in one cup.
- How to Convert tbsp to ml: One tablespoon is equal to 15 ml.
- How Many Tbs are in 2/3 Cup of Butter: There are about 11 Tbs in 2/3 cup of butter.
- What do T, Tb, Tbs, and tbsp Stand For: These are all standard abbreviations for "tablespoon".

International Cups and Tablespoons

Recipes from different countries or regions may use metric or older imperial standards that are slightly different. Usually, you don't need to make adjustments unless you are doubling or tripling a recipe, where the differences may start to add up. Nonetheless, here are the differences for the sake of completeness: a US cup is about 236.6 mL while the metric cup used in the UK, Australia and New Zealand is 250 mL, and a US tablespoon is about 14.8 mL while the UK metric tablespoon is 15 mL and the Australian tablespoon is 20 mL.
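For anyone scripting recipe scaling, the two formulas above translate directly into code. Here is a minimal Python sketch; the function names are ours, chosen for illustration.

```python
# Unit-conversion helpers matching the US formulas stated above:
# 1 cup = 16 tablespoons, and 1 tablespoon = 3 teaspoons.

TBSP_PER_CUP = 16
TSP_PER_TBSP = 3

def cups_to_tablespoons(cups: float) -> float:
    return cups * TBSP_PER_CUP

def tablespoons_to_cups(tbsp: float) -> float:
    return tbsp / TBSP_PER_CUP

def tablespoons_to_teaspoons(tbsp: float) -> float:
    return tbsp * TSP_PER_TBSP

# Examples from the article: 1/4 cup -> 4 tbsp; 1 cup -> 48 tsp.
assert cups_to_tablespoons(0.25) == 4
assert tablespoons_to_cups(4) == 0.25
assert tablespoons_to_teaspoons(cups_to_tablespoons(1)) == 48
```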
https://bellsfamilyfun.com/how-many-tablespoons-in-a-1-4-cup/
Dear Brothers and Sisters,
Tomorrow is the first day of Navaratri. There are more, but generally two Navaratris are celebrated in India. In many places around this vast country, where Sanatan Dharma (the Eternal Law) is deeply embedded in life, the Shrimad Devi Bhagwatam is read and spoken about by acharyas (teachers of Vedic knowledge). Intense battles between Goddesses and demons are the main theme throughout this sacred book. These demons are personified tendencies of the lower human nature – greed, attachment, jealousy, ignorance, ‘me and mine’ and so on – that are slain one after the other by various aspects of the Divine Mother, the Mother of all forms, in whom all life takes place. This contains a hidden practice, valid since the birth of humanity, to raise human consciousness and evolve the entire human race. Babaji said that this was His mission, and this is why we celebrate Navaratri in our ashrams happily, vividly and constantly. In the ashrams several intricate pujas and sacred fires are offered, scriptures are read and explained, and there is much sacred chanting throughout the day and in particular in the evening.
Now that we cannot go to the ashrams, all of that is not really available for most of us, but we can still celebrate wherever we are, concentrating on the essence of Navaratri. Throughout the ages, self-reflection and mindfulness have been practiced by very few, but these dark times have one advantage: self-awareness is becoming a much more common occurrence. Many wonderful teachers are now spreading the practice far and wide, and through the internet with lightning speed. Practicing this alongside any other practice puts a super turbo charge on it. Because of the energies during the auspicious days of Navaratri, the radiant energy of the Divine Mother fully supports us in becoming aware of what is holding us back from surrendering to Her Divine harmony. Our lower tendencies, mentioned above, unconsciously force us in directions other than towards the Divine Light – our Home, Paradise, our original essence, or however you want to call it. The first step to such deep healing is to recognise this inside ourselves, to see how it directs our life and causes us all sorts of hardship. By letting the light shine into the dark corners of our being, we can let the Divine light shine into the dark corners of the world.
For those who have a little shrine or a picture set up: keep an oil lamp or a candle going all the time. Offer a flower, a glass of water and incense, and call on that Divine force in your own way, in your own words. For nine days stop the vices and the mental escape and entertainment of TV and the like, and shower your mental activity in sacred mantras. Sleep and eat less so you have more time to pray and focus inside during these special days. Test the Divine by giving it your best to see if there is a response; show that you are hungry. The Divine Mother cannot ignore the call of an earnestly calling child. For many of you I am saying nothing new. I am not trying to tell you something; I am just describing my personal process, and I pray that it may inspire you to continue, start or deepen your own.
In the Vedic tradition the name for the all-encompassing Divine Mother is Durga. The Sanskrit meaning of Durga is a place that is protected and cannot be reached by evil forces; the word also means invincible, unbeatable and undefeated. She is considered to be the combined form of Goddess Kali, Goddess Lakshmi and Goddess Saraswati.
Devi Durga is also known as Mahishasurmardini, which translates to slayer of the mighty evil buffalo demon Mahishasur, who represents tamo-guna, the dark quality of inertia, ignorance and laziness. The mantra commonly used to honor and worship Devi Durga is “Aum Dum Durgayai Namaha”. This is an extremely powerful mantra to harness the energy of the planets and prevent them from afflicting us.
Meaning and Explanation
Aum – the primordial sound of the universe; the essence of everything.
Dum – the seed mantra (sound) of Devi Durga.
Durgayai – to Devi Durga.
Namaha – we bow down to you, or we surrender to you.
More popular, and more practiced at our collective pujas in Babaji’s ashrams, is Her mantra “Om Aing Hreeng Kleeng Chamundaayai Viche”. It has the same object: fighting evil forces, fulfilling earthly desires and the realisation of the Self.
Going for darshan and celebrating Navaratri in Babaji’s ashrams reinvigorates the Divine connection we have in our heart. It is good to remember, now that circumstances have prevented us this time from going to one of Babaji’s ashrams, that going there does not really make you a better devotee. That comes from the practice you do and the concentration you have. It may come easier in an ashram or centre, but it does not seem that these are times to take it easy. It is believed that these are auspicious days in which your daily practice is far more potent and beneficial in reaching your desired outcome. With the madness now celebrated around the world, it may be very wise to take a few steps back for the next 10 days and concentrate on your Divine essence in your heart. Perhaps it is even a blessing in disguise not to be distracted by all the rituals, all the people, the beautiful surroundings and so on, and to be fully concentrated on the essence of beauty and well-being, the Divine Mother.
https://jai-ho.org/navaratri-starts-tomorrow/
So is an earthly master, for the sake of righteousness, to be a giver of the good thoughts of the actions of life towards Mazda; and the dominion is for the lord (Ahura) whom he (Mazda) has given as a protector for the poor.
Note the striking resemblance to the shlokas of the Gita. This is because these hymns are written in a language called Avestan, an Indo-European language used in Iran two and a half to three thousand years ago, corresponding to the Sanskrit of the Hindus. Like Sanskrit, Avestan was never used for inscriptions or charters: the edicts of the Achaemenid kings were written in a simpler spoken language called Old Persian, in a left-to-right cuneiform script probably adapted from Akkadian – just as kings like Ashoka used the various Prakrits – while later kings, the Parthians and Sassanians, inscribed in Middle Persian in a right-to-left script called Pahlavi, derived from the Semitic Aramaic. Avestan is so named because it is the language of the Avesta, the holiest book of Zoroastrianism, which is often mistakenly referred to as the Zend-Avesta. In fact, the true name of the book is Avesta, and the Zend only denotes the commentary (usually written in Pahlavi, a Middle Persian language used from the 3rd to the 10th century AD) that accompanies the Avestan hymns. These commentaries brought the Avesta within the ambit of the common man, to whom the Avestan language was unintelligible. But various irregularities existed even in these commentaries, due to the prevalent custom of using outdated Aramaic logograms for the corresponding Middle Persian words. For example, a Persian scribe would write malkan malka in the Pahlavi script but read it as shahenshah, its exact Persian equivalent meaning "king of kings".
Born and brought up in a proud and orthodox South Indian city which had but a sprinkling of Parsis – tucked away in solemn prayer in an old, decaying quarter, content with their cultural anonymity in an overwhelmingly alien society – I first heard of this religion from the television as a boy of eight or nine. That was when the famous pop singer Alisha Chinai revealed in an interview that she belonged to the faith, and I soon caught a fascination for the name. But this little nugget of learning was soon confined to the recesses and had to wait for years to be rediscovered. Actually, I knew next to nothing about the Parsi dogma till early 2001, when I stumbled on the basics of the religion in an internet article called "Antiquity and Continuity of Indian History" by one Prasad Gokhale. While Gokhale's article was remarkable for its lack of objectivity and was dismissed outright by academics as revisionist Hindu right-wing propaganda, there were nevertheless a few claims about Zoroastrianism and its historical relationship with Hinduism that intrigued me back then. For example, Gokhale writes that a Zoroastrian work called the Vendidad lists the lands that were created by the Parsi God Ahura Mazda, and that among them was India, mentioned as Hapta-Hindu (which I found from neutral sources to be true). Excited, as if I had stumbled upon some little-known civilization, I waited till the end of my Matriculation board exams and then downloaded the whole of the Vendidad from the internet, followed shortly after by the rest of the Avesta.
Fortunately, my college had an excellent library, and I spent all my time acclimatizing myself with Iranian history and discovering from the Encyclopaedia Britannica strange pre-Islamic kings with names that you now typically associate with Islam (I later learnt that they were called the Sassanians), when I should actually have been reading programming languages and electronics. A few years later, social networking sites like Orkut cropped up and I was able to interact with expatriate Iranians living in the UK, US and elsewhere, some of them acclaimed scholars. So that is how my interest in Zoroastrianism grew, and it has been almost seventeen years since I began this fascinating and thoroughly satisfying journey.
Now, Zoroastrianism is a monotheistic faith which stresses the importance of harboring good thoughts, speaking good words and doing good deeds (Humata, Huktha and Huvarashta in the sacred tongue of the Zoroastrians – the prefix hu- meaning good, and mata, meaning thought, being cognate with the I-E word mann or mind – and Pendar-i-nek, Goftar-i-nek and Kardar-i-nek in modern Persian; these are the three principal tenets of Zoroastrianism and hence central to the faith). But Zoroastrianism has not always been a monotheistic religion; in fact, it started as a polytheistic religion like Hinduism, before the teachings of Zoroaster or Zardusht reformed the faith, removing the intermediation of the Kavis or saint-seers (who were like the great bards of Celtic Britain) and recasting it in a dualistic mould. (There are even many who question the suitability of the name "Zoroastrianism", suggesting instead that the religion should be named Mazdayasnism or "worship of Mazda", yasna being cognate with the Sanskrit yagna.) Thus, Zoroastrianism was revived, not founded, by Zoroaster, who played a role akin to Jesus and Muhammad by opposing the powerful clergy of the land. But unlike with Jesus and Muhammad, the religion Zoroaster founded did not vanquish the prevalent pagan faith, instead absorbing it completely. (Ironically, Zoroastrianism, which was purportedly founded as a protestant faith, would itself fall prey to a powerful clergy within a generation of Zoroaster's death, and superstitions that he had condemned would reappear in a different form. Strangely, the part played by religion and the clergy in Iranian politics is nothing new and has been in vogue for upwards of two thousand years.) Among the many deities of the pagan faith that were adopted by Zoroastrianism was Mithra, who became an angel or yazata (Mod. Pers. yazd). Mithra travelled westwards as part of the cultural exchange that accompanied the expansion of the Roman Empire and spearheaded a fast-growing cult which, in a heyday that lasted four centuries, rivalled that of the Egyptian goddess Isis.
Let us trace the antecedents of this great land and its unique religion back in history. Ironically, the origins of a people whose self-designation means "the Aryan" in English can be traced back to a 5,000-year-old civilization which is believed by scholars to have spoken a Dravidian tongue. The Elamite second person singular ni and second person plural num resemble the Dravidian ni-. The oldest known form of the Elamite language has come down to us from the Behistun inscription of Darius the Great.
Given that the Elamite civilization thrived in exactly the same region where the Persians lived a thousand years later, I find no reason why the Persians should not be their lineal descendants, though they did use an Indo-European language in their inscriptions and edicts. To support my claim, I would like to quote the example of the Medes of north-western Iran, who founded the first historically attested Iranian kingdom by wresting control of the eastern provinces of the Assyrian Empire in 612 BC. The post of the Zoroastrian high-priest, the maubad or magus, has always been hereditary and drawn exclusively from this tribe, indicating that they were the original heirs of Zoroaster and that the people of Persia proper had a status not much superior to that of a conquered people. Traditions identify the birthplace of Zoroaster in the vast, thinly populated region between Media and the Afghan city of Balkh (where a Buddhist temple and a Zoroastrian fire altar survived side by side till the 9th century AD; the importance given to Balkh in Zoroastrian religious texts is so high that it could very well have been the "Mecca" of the Parsi faith). Legends also assert that the Iranians emerged millennia ago from a sacred homeland, Airyanem Vaejo (which became Iran-Vej in Middle Persian, consequently giving rise to the name of the country, Iran), which the holy books usually located in northern Iran and southern Turkestan. Mede hegemony, however, did not last long, and like the non-Aryan natives of India who began asserting themselves a few centuries later, the sea-dwelling Persians overthrew Median rule under their king Cyrus the Great. (Kamil V. Zvelebil, the renowned linguist, made the controversial suggestion that the ancestors of present-day Dravidians, too, might have emerged from the mountains, thereby contradicting the established view held by historians and archaeologists like Iravatham Mahadevan, who have repeatedly argued in favour of a coastal origin based on the aru-min legend depicted in the Indus seals. He cites the example of the Brahuis and observes that many prominent Dravidian linguistic groups had the self-designation "mountain-people", even deriving the Persian word for mountain, koh, from the Dravidian root kunru.)
There have been disputes over the date of Zoroaster, with many even asserting that he was a purely mythological character who never existed. The dates given range from 1700 BC to 500 BC, but scholarly consensus leans towards the latter date, which I too feel is the most likely, making him a near-contemporary of the Buddha and the Mahavira. An apocryphal tale describes how an Indian sage named Changrachanchah journeyed to Iran to have a theological disputation with Zoroaster and, conceding defeat, embraced the new faith. It is not known who this Changrachanchah was, but fanciful theories link him to the Shankaracharya, as both names sound strikingly similar; the chronological lists of the Kanchi mutt, too, place Adi Shankara in the 5th century BC. In any case, the tale of Changrachanchah is likely an innovation of a much later date, and the Kanchi mutt's chronology has been discredited by historians.
The most sacred book of Zoroastrianism is the Avesta. The Avesta is not a revealed book but a compilation prepared over a long period of time, though much of it is indeed made up of the revelations from Ahura Mazda, the Supreme God, to the Prophet Zoroaster.
But a significant portion of the Avesta is also made up of hymns, in the sense that though Zoroastrianism is often treated in the same vein as Abrahamic religions like Islam and Christianity, the book is in fact closer to Hindu holy texts like the Mahabharata. Consider, for example, the case of the Vendidad, the only one amongst the 21 Nasks making up the Avesta that has survived in its entirety. In its structure, the Vendidad most closely resembles the Atharva Veda: much of the text is made up of charms, spells and incantations, with a small though significant portion on the wills and whims of Ahura Mazda. As the various hymns of the Avesta vary in nature and style, they also vary in age. It is universally accepted amongst scholars that the portion of the Avesta called the Gathas is the oldest, with some liberal estimates dating these verses, on the basis of their language, to the 15th century BC. Do these verses belong to a time anterior to that of Zoroaster? We cannot say for sure! But they do depict the earliest form of the Zoroastrian religion. The core of the Avesta, on the other hand, belongs roughly to the 6th or 5th centuries BC, when two of the greatest monarchs in the world, Cyrus II and Darius the Great, were ruling over Iran. Proof? No, I don't have any! It is just an assumption based on the fact that the most complete book of the Avesta, the Vendidad, lists Hapta-Hindu or the Punjab among the sixteen nations created by Ahura Mazda. This is not quite possible unless north-west India itself was part of the Iranosphere while these were being written, and the only Persian emperor in the millennium before Christ to stamp his authority over these parts was Darius the Great.
Some parts of the Avesta could be far younger. There are allusions even in the Vendidad to the solemn, self-mortifying cult of the Mazdakids, who recommended regular fasting in stark contrast to Zoroastrianism, which prescribed for its adherents a happy, joyous and bountiful life and for which austerity was taboo. According to these hymns, "the ungodly Ashaemaogha who does not eat" was an ally of Angra Mainyu, the Zoroastrian Satan (who is known as Ahriman today). The Mazdakite cult was founded by a godman named Mazdak and reached its apogee at the beginning of the 6th century AD, when the Sassanian king Qobad I became Mazdak's disciple and embraced the new cult. But its dominance lasted only a generation. At the end of Qobad I's reign, his son Khusro (yes, the same Khusro Noushirvan, the most famous king of the Sassanian Dynasty, in whose court the Panchatantra was translated into Persian and chess adopted and adapted from India; Noushirvan or Anushirvan finds some space in Nehru's Discovery of India and was even the subject of a tribute from Prophet Muhammad, who considered himself fortunate to have been born in the reign of such a just king), then a young prince, got Mazdak murdered and brutally suppressed the nascent religion. But Mazdakism was only the second of the major heresies of the Sassanian period. Before it there was the Christian-Zoroastrian syncretist Mani of the 3rd century AD, who suffered the same fate. No direct references to him are found in the Avesta, but veiled attacks and curses are found here and there, and Mani is generally considered their intended recipient.
One mobed or high-priest named Kertir went a step further and authored an inscription boasting of the killing of Buddhists (the Persian word for idols, buth, probably derives from the Buddha), Christians, Manichaeans and Hindus in the kingdom under the patronage of the Sassanian Emperor Bahram II. His predecessor Bahram I was also a devout follower of Kertir, and it was probably at Kertir's insistence that Mani was flayed alive and his skin displayed on the gates of the palace at Ctesiphon. (But Manichaeism displayed amazing resilience and adaptability and survived the death of its founder by many centuries. It remained a minority religion in Iran, often confused by the authorities with Nestorian Christianity, till the Islamic invasions drove it eastward towards Central Asia and China. By the 13th century, Manichaeism was finally believed to be dead, but it resurfaced in eastern China in the 16th century AD, where it was again confused with a sect of Nestorian Christianity. Manichaeism is regarded as well and truly dead now, but doubts still remain.) And again, the allusions themselves are not unambiguous! What if the Ashaemaogha hymns were actually a reference to the Jain practice of sallekhana? Certainly not improbable! There has been a strong Persian influence in north-western India from the time Darius the Great conquered the region, and many Sassanian kings led expeditions into Punjab and the Sindh.
But some of the worst curses are reserved for Iskandar, or Alexander the Great, who is described as the "accursed" and the "ally of the evil one". Alexander sadistically persecuted Zoroastrian priests, burnt their scriptures and destroyed their fire-temples, probably because the tenets of monotheistic Zoroastrianism were dead opposed to the Greek polytheism of which Alexander was an adherent. On the whole, though, Alexander's treatment of non-religious Iranians and the landed gentry was generally liberal, and there were many intermarriages of Greek soldiers into Iranian aristocratic families. But unfortunately, Zoroastrian scriptures remember Alexander only as an oppressor. The Zoroastrian religion was almost wiped out of existence by the persecutions of Alexander the Great, and it recovered only in the middle of the Parthian period. Vologasses I (Valkhash), who reigned from 51 to 78 AD, commissioned the first compilation of the Avesta, sowing the seeds of a Zoroastrian revival. We know that this monarch had a brother (Tiridates) who was a mobed or Zoroastrian high-priest. Priests now began a frantic search for books and fragments that had escaped Alexander's orgy of destruction. The work took centuries to complete, and the sequence of events eventually culminated in the rise of Ardeshir I, who founded the Sassanian Empire in 226 AD and declared Zoroastrianism the state religion of Iran. Ardeshir's inscriptions proclaimed him a champion of Zoroastrianism and portrayed the Parthians, ironically, as the villains. Under Ardeshir and his immediate successors an orgy of intolerance and persecution was unleashed on religious minorities – probably a spontaneous self-defence approach adopted by a still insecure Zoroastrianism. But once Zoroastrianism had scuttled all rivalry and silenced opposition, it entered its most glorious phase. This was when the legendary mobed Adarbad Mahraspandan lived. According to legends, his faith was tested with molten bronze poured upon his chest; the mobed emerged unscathed and instantly became a celebrity.
According to pseudo-prophetic Zoroastrian hymns called Yashts, the faith was overthrown thrice and restored thrice – overthrown first by Alexander the Great and restored by Ardeshir I, the founder of the Sassanian Dynasty; then by the prophet Mani, before being restored by Adarbad Mahraspandan; and lastly by the Arabs, after which it will be restored by Saoshyant, who shall come at the end of time, just like the Kalki Avatar of the Hindus and the Buddhists. Much of what we know about Zoroastrianism has come to us from the period following the Islamic invasions. The oldest surviving copies of Zoroastrian religious texts date from the 4th to the 10th century AD and are written in the Pahlavi script. The oldest complete hagiography of Zoroaster, the Zardusht-Namak, dates from the 12th or 13th century AD, when Iran was under Mongol rule and Zoroastrianism was almost extinct. Of some works, the oldest extant copies that we know of are Gujarati manuscripts from 15th-century India. By then, the number of adherents was already dwindling due to the jizya and other taxes. There was a national and cultural revival when Firdausi wrote his magnum opus, the Shah-nameh, in the 10th century AD, but by then the damage had been done, and it was impractical to expect Zoroastrianism to oust Islam, considering that apostasy in an Islamic country was punishable by death. Still, while Iran could not revert to Zoroastrianism, it celebrated Zoroastrian heroes like Rustam and Jamshed as its national symbols. There were rebellions by both Zoroastrian and Muslim Iranians who hated Arab domination.
The three and a half centuries from the Arab conquest to the rise of Mahmud of Ghazni saw a great deal of cultural interchange between the Semitic and the Indo-Iranian worlds. Soon after the conquest, Zoroastrianism was proscribed and Iranians were disparagingly referred to as "Ajam", meaning "babblers", a reference to the error-ridden Arabic that the new converts from Zoroastrianism spoke. But when the Umayyads were displaced by the Abbasids, conditions improved. The Abbasids captured power with the help of an Iranian convert named Abu Muslim, shifted the capital from Damascus to the former Sassanian citadel of Baghdad (Baghdad or Bagdat in Persian meant "given (datha) by the gods (Bagha)"; Bagha here is cognate with the Sanskrit Bhagavan and survived into Modern Persian in the form of the masculine title Baig or Beg (Lord), the feminine title Begum and the Turkish Bey), and even invited the Barmak (pramukh??) of the Buddhist monastery of Navavihara near Kabul, who converted to Islam and became the Prime Minister of the state under the name Khalid. It was during this period that chess and Indian numerals made their way from Iran into Arabia and many important Persian texts were translated into Arabic. By the 9th century, however, the Caliphate was on the decline and local Iranian dynasties started to assert their independence. A Turkic dynasty established itself at Ghazni in eastern Afghanistan, conquered the whole of the Iranian plateau to the west and took Islam eastwards into India through seventeen bloody invasions. But while Mahmud Ghaznavi, the greatest king of this dynasty, is reviled in India, attitudes in Iran differ: he is hailed as a champion of Iranian culture and a fervent patron of Persian literature. Firdausi dedicated his Shahnameh to him, thereby evoking comparisons with the legendary Rustam, the hero of the epic.
In fact, a cult of Rustam seems to have thrived during this period, he being variously identified with Rostam Farrokhzad, the Sassanian general who made a valiant stand against the Muslims and died fighting, and with Surena, the 1st-century BC Parthian general who inflicted a crushing defeat on the Romans at the Battle of Carrhae and eventually, at the height of his power, succumbed to palace intrigue. The etymologies of Rustam and the other heroes from the later parts of the Shahnameh are all foreign to Iran proper and can be traced to the tribal legends of Sistan and Afghanistan, indicating a shift in popularity towards an eastern epic cycle as opposed to the western epic cycle of Darius I. This would be Zoroastrianism's last stand before it was dealt a death blow by the Ilkhanid and Mongol invasions and centuries of anarchy that would end only with the rise of the Safavids. But Zoroastrian concepts and Persian folklore managed to sneak into the Persian poetry composed by the Sufis just before the Mongols ushered in a period of gloom.
And the dominance of eastern Iran was not limited to the epics and folklore alone. As Islamic sultanates expanded eastwards into Central Asia, Afghanistan and India, they took their languages with them – usually Persian with a smattering of Arabic and Turkic – and a variety of Persian mixed with the local Prakrits established itself as the lingua franca of the Muslims of northern India (including present-day Pakistan) around the 13th century. This was the dialect spoken in Khorasan, the vast arid desert that extended from Media almost up to the hills of central Afghanistan. Both Dari, the official language of Afghanistan, and the Urdu spoken across Pakistan and northern India use the Khorasani diction.
Today, Zoroastrianism has few adherents, mostly in India and Pakistan, apart from migrant communities in the UK, USA, Australia, Europe, South-East Asia and Africa. In Iran it is almost dead, and even among the few who are officially counted as Zoroastrian, various tenets of the faith remain forgotten due to intense institutionalized persecution. But in reality, pre-Islamic beliefs have retained a tangible presence underneath the veneer of Islam, and we can say that not just Iranians but Muslims across Central Asia, Turkey, Afghanistan and even Pakistan actually practice a Zoroastrianized variety of Islam. That this underlying influence is a living reality is highlighted by the fact that the words used by most Muslims today for matters so intimately connected with faith – religion (Deen), God (Khoda) and prayer (Namaz) – are all of Zoroastrian origin. (The word Namaz is derived from the Old Persian Nemase, which is found in the Zoroastrian prayer Hoshbam; the word is cognate with the Indic Namaste.)
References:-
1) (Tr.) Darmesteter, James (1880). The Zend Avesta, Part I: The Vendidad. The Sacred Books of the East, Volume 4. Oxford University Press.
2) Cumont, Franz (1903). The Mysteries of Mithra. Open Court, Chicago.
3) Rawlinson, George (1876). The Seven Great Monarchies of the Ancient Near-East. Longmans, Green and Co.
4) Zimmern, Helen (1883). The Epic of Kings – Stories Retold From Firdusi. T. Fisher Unwin.
5) Greenlees, Duncan (1951). The Gospel of Zarathushtra. The Theosophical Publishing House, Adyar.
6) Herzfeld, Ernst (1928). Memoirs of the Archaeological Survey of India: A New Inscription of Darius from Hamadan. Archaeological Survey of India.
7) Zvelebil, Kamil (June 1972). "The Descent of Dravidians". International Journal of Dravidian Linguistics 1(2).
8) Brunner, C. J.
(1974). "The Middle Persian Inscription of the Priest Kirdēr at Naqš-i Rustam", in Near Eastern Numismatics, Iconography, Epigraphy and History: Studies in Honor of George C. Miles. American University of Beirut.
9) (Tr.) Mehta, Siloo. Hoshbam: The Dawn (of Consciousness) by K. N. Dastoor (in Gujarati).
http://ravichandar.blogspot.in/
Capacity, Capability, and Performance: Different Constructs or Three of a Kind? OBJECTIVES: The present study focused on motor activities of young children with cerebral palsy (CP) and examined the relation between motor capacity (what a person can do in a standardized, controlled environment), motor capability (what a person can do in his/her daily environment), and motor performance (what a person actually does do in his/her daily environment). DESIGN: The relations between motor capacity, motor capability, and motor performance were calculated by using Pearson correlations and visualized by scatterplots. SETTING: A cross-sectional study of a hospital-based population of children with CP. PARTICIPANTS: Subjects were children with CP (N=85) aged 30 months (Gross Motor Function Classification System levels I-V). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Motor capacity, motor capability, and motor performance were assessed with the Gross Motor Function Measure and 2 scales of the Pediatric Evaluation of Disability Inventory, respectively. RESULTS: Correlations between motor capacity, motor capability, and motor performance were high, between 0.84 and 0.92, and significant (P< .001). But when comparing children with the same level of motor capacity or motor capability, large ranges at the level of motor performance were found. CONCLUSIONS: Results imply that motor performance levels are only partly reflected by the motor capacity and motor capability levels in young children with CP. Contextual factors (physical and social environment) and personal factors (such as motivation) influence the relations between capacity, capability, and performance. This information is essential in making decisions about the focus of therapy to maximize a child's independent functioning in daily life.
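As a quick illustration of the statistic used above, here is a minimal sketch in Python computing a Pearson correlation between two motor scales. The scores are invented for illustration only; they are not the study's data:

```python
# Hypothetical example: Pearson correlation between two motor scales.
# The scores below are made up; they are NOT the study's data.
from scipy.stats import pearsonr

gmfm_capacity = [34, 48, 55, 62, 71, 80, 88]      # e.g., GMFM scores (capacity)
pedi_performance = [28, 45, 50, 66, 70, 84, 90]   # e.g., PEDI scores (performance)

r, p_value = pearsonr(gmfm_capacity, pedi_performance)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A high r can still coexist with wide individual spread, which is why the
# authors also inspected scatterplots rather than relying on r alone.
```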
https://experts.mcmaster.ca/display/publication936263
A recent article highlights key research advances and needs to inform international policy decision making related to mercury. The article, co-authored by Celia Chen, Ph.D., of the Dartmouth College Superfund Research Program (SRP) Center, emphasizes the importance of bringing together scientific information to better understand the sources of mercury, its movement through the environment, and its effects on human and ecosystem health. Chen is an internationally recognized researcher on the accumulation of metals like mercury in aquatic food webs and serves as director of the Dartmouth SRP's Research Translation Core. - Conference Helps Scientists Inform Policies Around Mercury Pollution SRP News Page - August 2017 International experts on mercury met at the 13th International Conference on Mercury as a Global Pollutant (ICMGP) July 16 - 21 in Providence, Rhode Island, to discuss scientific findings and potential measures to decrease human and wildlife exposure to mercury. - Arsenic website helps identify sources and reduce exposures Environmental Factor - June 2017 A new user-friendly website provides a wealth of information on how people are exposed to arsenic and steps that they can take to reduce exposures. The Dartmouth College Superfund Research Program (SRP) developed the website Arsenic and You to inform the public and answer questions about arsenic in water, food, and other sources. - Duke symposium addresses toxicity of energy production Environmental Factor - December 2015 Several scientists and grantees from NIEHS participated in the Duke University Integrated Toxicology and Environmental Health Program 2015 fall symposium Nov. 13 in Durham, North Carolina. - Dartmouth-Sponsored Food Collaborative Convenes in Hanover SRP News Page - December 2015 The Collaborative on Food with Arsenic and associated Risk and Regulation (C-FARR) gathered in Hanover, New Hampshire, November 2 to address issues related to sources of arsenic and exposure in people through the food they eat.
https://tools.niehs.nih.gov/srp/news/index.cfm?Project_ID=P42ES0073739101
Compromises in a marriage can be challenging to deal with, but they are a necessary element of any relationship and allow you to get what you want out of it. To understand this, we have to look at why people make them. There are two main factors at play here. The first is how much you trust each other, and the second is how much you are willing to compromise your principles for the sake of being together.
Financial compromises in a relationship, especially in a marriage, are among the most common kinds of compromises people make on a daily basis. You are two different people who have come together because you love each other, and you have decided to remain together under one roof. So everything is fine, and you are happy. However, there are times when things simply aren't good enough, and that is when compromise comes in.
For example, suppose you and your partner have been through an incredibly painful ordeal. Your partner has cheated on you, or perhaps you have both been physically abused. These are all things that can place strain on a relationship, and it often takes a lot of effort to overcome these scars and move on. In the case of a marriage, such compromises are often required to keep the relationship alive and growing. While it might appear easier to live with these constraints, it is vital to realise they are still present. In fact, they are even more likely to appear if the partners in question have never established healthy communication and trust in the relationship. When one person has to make compromises in a relationship, they tend to take the easy way out and choose to leave rather than face the music head on. When one spouse decides to give up some control in the marriage, the other is likely to follow suit.
To prevent this problem from developing, communication and trust between the partners need to be as strong as possible. This means that one person needs to make a genuine effort to compromise, while the other shows a willingness to go the extra mile. If the person making the compromise does not want to, or is not able to, the situation will only exacerbate the strain between the partners. Ultimately, this will prevent real compromises from being made and will bring little benefit to the relationship.
When individuals want to establish a compromise in a marriage, they often take the convenient way out: they try to make compromises that both of them will be comfortable with, without real effort. However, this rarely works. The best way to establish a healthy compromise in a marriage is to always put yourself in your partner's shoes and do all you can to reach an accommodation. Compromise is hard, but it is usually worth it in the end.
https://www.tobidase.de/how-do-compromises-in-a-marriage-job/
OBJECTIVES. A number of studies have shown that victimization from bullying behavior is associated with substantial adverse effects on physical and psychological health, but it is unclear which comes first, the victimization or the health-related symptoms. In our present study, we investigated whether victimization precedes psychosomatic and psychosocial symptoms or whether these symptoms precede victimization. DESIGN. Six-month cohort study with baseline measurements taken in the fall of 1999 and follow-up measurements in the spring of 2000. SETTING. Eighteen elementary schools in the Netherlands. PARTICIPANTS. The study included 1118 children aged 9 to 11 years, who participated by filling out a questionnaire on both occasions of data collection. OUTCOME MEASURES. A self-administered questionnaire measured victimization from bullying, as well as a wide variety of psychosocial and psychosomatic symptoms, including depression, anxiety, bedwetting, headaches, sleeping problems, abdominal pain, poor appetite, and feelings of tension or tiredness. RESULTS. Victims of bullying had significantly higher chances of developing new psychosomatic and psychosocial problems compared with children who were not bullied. In contrast, some psychosocial, but not physical, health symptoms preceded bullying victimization. Children with depressive symptoms had a significantly higher chance of being newly victimized, as did children with anxiety. CONCLUSIONS. Many psychosomatic and psychosocial health problems follow an episode of bullying victimization. These findings stress the importance for doctors and health practitioners to establish whether bullying plays a contributing role in the etiology of such symptoms. Furthermore, our results indicate that children with depressive symptoms and anxiety are at increased risk of being victimized. Because victimization could have an adverse effect on children's attempts to cope with depression or anxiety, it is important to consider teaching these children skills that could make them less vulnerable to bullying behavior.

Studies in many countries have shown that a substantial number of elementary and high school students are bullied regularly by their peers. Numbers vary depending on country and definition: 30% of children in Italy report having been bullied at least sometimes, 24% in England (once a week or more), 17% in the United States (once a week or more), 19% in the Netherlands (a few times a month or more), 16% in Finland (once a week or more), and 8% in Germany (once a week or more).1–5 Other studies have shown a significant relationship between victimization and symptoms such as headache, stomach ache, bedwetting, anxiety, and depression.6–13 However, most of these studies included only cross-sectional data, indicating an association but no direct causality. To our knowledge, no longitudinal studies have investigated the relationship between bullying and specific psychosomatic health problems, such as abdominal pain, bedwetting, and headache. Knowing whether bullying victimization precedes these health symptoms or whether these health symptoms precede bullying could help in the prevention of victimization, as well as help in the prevention of these health symptoms.

Many general practitioners, pediatricians, and other health care professionals are likely to see children who have been bullied or who display psychosomatic symptoms.
Therefore, it is important for these practitioners to know which symptoms create a higher risk for children to become bullied and which symptoms result from being bullied. Our study involved a group of elementary school children in the Netherlands. In the beginning and end of the school year, we presented them with a survey to measure bullying behavior, as well as a large number of psychosomatic and mental health symptoms. With these prospective data we aimed to address the following questions: (1) does bully victimization at the beginning of the school year increase the risk of developing health-related problems later in the same school year, and (2) do health-related problems at the beginning of the school year increase the risk of becoming a bully victim later in the same school year?

The study population was derived from 18 Dutch elementary schools that participated as a control group in a longitudinal study on bullying and the implementation and effectiveness of an antibullying policy at schools. Children from the upper 3 grades (aged 9–11 years) participated by filling out a questionnaire. The questionnaires were completed in classrooms under examination-like conditions in October and November of 1999 and May of 2000. The questionnaire contained items on bullying, psychosomatic variables, depression, and several other health, demographic, and social variables. Before data collection, all school boards of the participating schools were informed about the study, and all of the school boards gave written informed consent for participation. The design of the study was approved by the medical ethics committee at TNO.

Children were presented with a series of health symptoms (ie, anxiety, abdominal pain, sleeping problems, headache, feeling tense, feeling tired, and poor appetite) and asked to report for each symptom the frequency with which they experienced it: never, sometimes, or often during the last 4 weeks, for example, “Did you feel anxious?” and “Did you have a headache?” Each health symptom was dichotomized into no health problem (“never” or “sometimes”) versus a health problem (“often”). Bedwetting was assessed by asking the students if they wetted their bed at least once during the last 4 weeks. Cronbach's α for all of the measured KIVPA items together was .72.

Depression was evaluated with the Short Depression Inventory for Children.13,27,28 This 9-item questionnaire is used to screen for depressive symptoms among children. The questionnaire has shown very good psychometric properties and has been extensively evaluated among Dutch elementary school children. Respondents can answer for each item if it is true or not true. An item example is: “The last couple weeks I felt down.” All of the items answered as “true” are summed, resulting in a 0 to 9 score. A score of ≥7 is considered a strong indication for depression. Respondents with scores ≥7 were classified as depressed. Cronbach's α of the Short Depression Inventory for Children was .75.

All of the analyses were performed with SPSS/PC, version 11 (SPSS Inc, Chicago, IL). Descriptive univariate statistics were used to study the prevalence of bully behavior. The first objective of the study was to determine whether bully victimization at the beginning of the school year would enhance the risk of developing health problems later in the same year.
Therefore, we excluded from each analysis those children with a specific symptom at the baseline measurement, to enable us to study the development of that symptom during the course of the school year. For example, to study the incidence of headaches after a period of victimization, we included only those children who were categorized as having “no” headaches at the beginning of the school year. We divided this group into those who were and those who were not victimized at the beginning of the school year, and we looked at the incidence of headache during the school year for both groups. Consequently, odds ratios were calculated. The variables age, gender, and number of friends were included as confounding variables, because these are known to be related to outcome variables, like depression and bullying behavior. Multiple logistic regression was used to control for confounding variables and to calculate odds ratios with 95% confidence intervals (CIs).

Our second objective was to answer whether health problems at the beginning of the school year increase the risk of becoming a bully victim later in the year. For this analysis, we excluded those children who had been victimized at the baseline measurement. This enabled us to study the incidence of new victimization during the school year among those children who had a specific health symptom present at the beginning of the school year and those who did not. Again, multiple logistic regression was used to control for the confounding variables age, gender, and number of friends and to calculate odds ratios with 95% CIs. This method of analysis bears the disadvantage that a reduced number of children would be included. However, if those bullied children with a specific health symptom at the baseline measurement were to be included, it would be harder to study the sequence between bullying and health symptoms. By only including those children with either victimization or a specific health problem at the baseline measurement, we could investigate the “which comes first” question, that is, whether victimization precedes the development of new symptoms and/or whether specific health problems precede the development of new victimization.

A result was considered significant with a P < .05. No adjustment for multiple comparisons, such as the Bonferroni correction, was done, because this would result in an increase in type II errors, that is, failing to consider a true difference significant.29 However, not using this correction increased the possibility of a type I error, meaning that a difference could seem significant but actually be due to chance.

Of a total sample of 1597 children, 1552 (97%) participated at the first measurement at the beginning of the school year. A total of 1118 (70%) children filled out the questionnaire both at the beginning and end of the school year, providing data for this analysis. Student t test analyses indicated that for the 433 children who did not participate at the second measurement, there were no significant differences on any of the demographic or outcome variables of the first measurement. The main reason for nonresponse at the second measurement was that 3 schools (310 students) had insufficient time within their curriculum for a second measurement. Half (49.7%) of the students in the sample were boys, with a mean age of 10 years (SD 1.1).
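The exclude-at-baseline-then-model design described above is straightforward to reproduce. Here is a minimal sketch in Python using statsmodels; the variable names and the randomly generated data are hypothetical, not the study's:

```python
# Hypothetical sketch of the analysis: among children WITHOUT the symptom at
# baseline, regress new-symptom onset on baseline victimization + confounders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "victim_t0": rng.integers(0, 2, n),    # bullied at start of year (0/1)
    "symptom_t0": rng.integers(0, 2, n),   # e.g., headache at baseline (0/1)
    "symptom_t1": rng.integers(0, 2, n),   # headache at end of year (0/1)
    "age": rng.integers(9, 12, n),
    "girl": rng.integers(0, 2, n),
    "n_friends": rng.integers(0, 10, n),
})

# Keep only children free of the symptom at baseline, as in the paper.
sub = df[df["symptom_t0"] == 0]

model = smf.logit("symptom_t1 ~ victim_t0 + age + girl + n_friends",
                  data=sub).fit(disp=0)
odds_ratios = np.exp(model.params)       # exponentiated coefficients = ORs
conf_int = np.exp(model.conf_int())      # 95% CIs on the odds-ratio scale
print(odds_ratios["victim_t0"], conf_int.loc["victim_t0"].values)

# The paper's gender-interaction check corresponds to adding a product term:
# smf.logit("symptom_t1 ~ victim_t0 * girl + age + n_friends", data=sub)
```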
At the beginning of the school year, 14.6% of the students were being bullied, and at the end of the school year, 17.2% of the students were being bullied.

We calculated the risk of developing specific health problems during the school year. Table 1 gives the incidence of new symptoms for children who were bullied and those who were not at the beginning of the school year. Children who were bullied at the beginning of the year had a significantly higher risk of developing new health symptoms during the course of the school year. Odds ratios were particularly high for depression (4.18), anxiety (3.01), bedwetting (4.71), abdominal pain (2.37), and feeling tense (3.04). A possible interaction effect with respect to gender was investigated by adding the interaction “bullying × gender” term to the model. For most of the risks of developing health problems, there were no significant differences between boys and girls. Only for the effects of bullying on the development of abdominal pain did the interaction term significantly improve the model (χ2 = 9.59; P = .002). Being bullied had a strong relation to the development of abdominal pain for girls (odds ratio: 4.98; CI: 2.17–11.43; P < .001), whereas there was no such relationship for boys (odds ratio: 0.34; CI: 0.04–2.66; P = .305).

We also calculated the risk of new victimization in relation to somatic and psychological health symptoms. Table 2 presents odds ratios for getting bullied at the end of the school year for children who were not bullied at the beginning of the school year. Children who were depressed, anxious, or reported poor appetite at the beginning of the school year were at higher risk of being bullied at the end of the school year. Children with other symptoms, such as headache, abdominal pain, and bedwetting, were apparently not at higher risk of being bullied. A possible interaction effect with respect to gender was investigated by adding the interaction “health symptom × gender” term to the model. For most of the symptoms there were no significant differences between boys and girls. Only for the effect of sleeping problems on the development of bullying did the interaction term significantly improve the model (χ2 = 4.54; P = .03). Sleeping problems had a stronger relation to being bullied for boys than for girls, but for neither boys nor girls was this relationship significant.

We studied the relationship between victimization and health symptoms among a group of elementary school children. The data indicate that children who are regularly bullied at the beginning of a school year have a higher risk of developing new health-related symptoms during the year. This supports the hypothesis that the stress of victimization causes the development of somatic and psychological health problems. However, our study also showed that children who are depressed or anxious at the beginning of the school year are at enhanced risk of becoming new victims of bullying later that year. Various possibilities might explain this. Anxious or depressed behavior could make a child seem more vulnerable to aggressive peers and thereby make the child an easy target for victimization.
Other studies have found that victimized children exhibit characteristics of vulnerability, such as subassertive behavior, that make them attractive targets for aggressive children.30 Less assertive behavior by anxious or depressed children could make them easier targets because they are less likely, or less expected by the bullies, to stand up for themselves when they are victimized. Therefore, bullies may fear less retaliation from anxious or depressed children and be more prone to pick these children as their victims. An alternative explanation may be that some children who are anxious or depressed are more inclined to define some of their experiences as having been bullied, whereas other children would not perceive these experiences as victimization.

Other studies have supported the suspicion that depression or anxiety could follow an episode of bullying.6 Our study confirms this and further shows that a large number of other health symptoms may also result from a period of being bullied. Bond et al6 found that especially victimized girls are at higher risk of anxiety and depression, but they found no evidence that being anxious or depressed was predictive of a higher risk of being victimized. This latter result differs from that of our study. However, because the children in our study were younger than those of Bond et al,6 our study sample may have had an inherently higher incidence of new bullying cases. In older children, bullying victimization gradually decreases, making it more difficult to show a relationship between preexisting symptoms and later onset of victimization. Our results are consistent with the findings from a recent study by Nishina et al,31 which also found that psychosocial maladjustment (eg, depression and anxiety) both preceded and followed peer victimization. Furthermore, their data showed, in line with our results, that physical symptoms only followed a period of victimization and, unlike psychological symptoms, did not precede victimization.

Some of the strengths of our study are the wide variety of symptoms measured and the longitudinal data used for the analysis. There are some methodologic considerations. Data provided for this study are based on self-reports of children. This carries a potential risk that some children may be prone to report more problems in general, and, therefore, some results might overstate the associations between variables. Actual effect sizes may, as a result, be smaller than those produced by our data. It could also be that depressed children in particular have the tendency to experience things more negatively and more often report other health problems or negative experiences. In this light, it should be noted that associations of depression with victimization were particularly high. However, because our analyses included only children who reported either health symptoms or bullying victimization, but not both, at the baseline measurement, children prone to report many symptoms may have been more likely to be excluded from analysis. Because no correction was made for multiple comparisons, there is a higher chance of a type I error among the results. However, the patterns and high number of significant results make it unlikely that the overall conclusions are compromised by this type of error.

Our findings might have implications for future research and intervention strategies.
We found that depression and anxiety make a child more at risk of becoming victimized and that other, especially more physical, symptoms do not elevate the risk of victimization. It has been suggested that children may consider it socially unacceptable to bully and to be mean to those children who display physical illness,31 and it may be possible that children consider it more permissible to bully those who are psychologically fragile and nonassertive. Future research could focus on this hypothesis and try to identify and preempt those situations in which children are more or less inclined to bully other children. This may have implications for preventive interventions, because children generally may need to learn that it is just as inappropriate to bully those who are psychologically vulnerable as those who are not physically capable of defending themselves. Future studies on the relationship between victimization and health-related symptoms may also look into possible confounding variables, such as ethnicity, social background, and level of education.

With regard to high school students, it may be relevant to study the subgroup of gay and lesbian youth. Several studies have indicated that these youth are at higher risk for victimization and experience higher incidences of psychosocial problems, such as depression and suicidal ideation.32–34 Investigating this subgroup may give insight into the relationship between victimization and psychosocial problems with regard to sexual orientation and may help develop strategies to lower the high levels of victimization and psychosocial health problems for this specific group.

With regard to health care professionals, our findings have several implications. Our results indicate that victimization causes an increase in health problems, such as headache, abdominal pain, anxiety, and depression. For doctors and health practitioners, these findings stress the importance of asking whether a child is bullied and establishing whether bullying plays a contributing role when a child exhibits such symptoms. Our results further indicate that children with psychosocial health symptoms, like depression and anxiety, are at increased risk of being victimized. Because victimization could have an adverse effect on children's attempts to cope with depression or anxiety, it is important to consider teaching these children social skills that would make them less vulnerable to bullying behavior. Rigby and Slee39 found that suicidal ideation was especially frequent among bullied children who had little social support. Therefore, children with anxiety or depression and additional possible risk factors for victimization, such as having few friends, being unpopular, or being subassertive, should be referred to a psychologist or be trained in social skills to prevent bully victimization.

This study was financially supported by ZorgOnderzoek Nederland (grant 22000061).
https://pediatrics.aappublications.org/content/117/5/1568?ijkey=25bada03b91bb1ee78d61fa4553f74a8c82509ac&keytype2=tf_ipsecsha
The causes of TRD vary, and individual biological characteristics are likely involved. These characteristics, accompanied by the heterogeneity of depression itself, result in TRD. Researchers are studying what factors lead to an inadequate response to certain antidepressants, but to date, individual patient characteristics, symptoms, course, and combined comorbidities are considered key factors in TRD development. Risk factors that could lead to TRD and possibly alter the efficacy of TRD treatment include the severity of the patient's depression and the presence of comorbid medical conditions such as diabetes, cancer, chronic pain, and coronary artery disease. Both TRD and treatment-responsive depression involve the same broad range of symptoms, but the distinguishing features in patients who experience one form versus the other remain to be clarified.
Treatment Overview
Treatment for depression is not one-size-fits-all. Recent research has offered many suggestions for managing the symptoms of TRD, but most findings are mainly empirical, and providers should take a rational approach to initiating treatment methods. SACO, a mnemonic developed to aid in the selection of treatment options, aligns with current guidelines' recommendations of Switching therapies, Augmentation, Combination of antidepressant classes, and Optimization as appropriate approaches for managing TRD. Other options include using genetic testing to determine whether genomic variations will affect a patient's ability to tolerate a certain medication and using medications with off-label antidepressant indications.
The complexity of treatment for TRD reflects the diverse nature of the disorder, and the provider should be careful to make an accurate diagnosis before treatment. Studies have not shown one treatment approach to be superior to another, and treatment should be based on the individual patient's disease state. Because of this, TRD treatment will likely be an extensive trial-and-error process. In addition to monitoring for efficacy, it is imperative that the provider monitor for adherence before deeming a treatment approach inadequate; this is because nonadherence could also be a potential link to resistance. Nonadherence during treatment for TRD could be due to a variety of factors, the most prominent ones being cost and side effects. Medical costs are nearly 70% higher for patients with TRD than for those without, resulting from workdays missed because of depressive episodes, physician or hospital visits, and medication costs.[10,11]
When therapy is being initiated, it is reasonable to start a patient on the lowest available dose to determine patient response and then titrate (every 2–4 weeks) to the usual dosing range (Table 1) if necessary.[1,7,12–15] If a patient's response to initial conventional dosing is minimal, it is appropriate to increase the dose and reassess prior to switching or augmenting (Table 2).[16,17] In contrast, if the patient has no response to initial therapy, switching to an alternative agent or class, as suggested in the guidelines, may be beneficial.
Guideline-directed Treatment Approaches
Determining the optimal treatment for patients with TRD involves ongoing discussion of many treatment methods. Trials of different medications may be required to achieve the desired treatment outcome for the individual patient.
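Before turning to the individual guidelines: the SACO sequence and titration logic described above amount to a simple decision flow, which a toy sketch can make concrete. This is purely illustrative pseudologic in Python, with invented names and thresholds; it encodes nothing beyond the text above and is not clinical guidance:

```python
# Toy illustration of the SACO decision flow described in the text.
# All names and categories are invented for illustration; NOT clinical guidance.
from enum import Enum

class Strategy(Enum):
    OPTIMIZE = "optimize current dose (titrate every 2-4 weeks, then reassess)"
    SWITCH = "switch to an alternative agent or class"
    AUGMENT = "augment current therapy"
    COMBINE = "combine antidepressant classes"

def next_step(response: str, at_max_tolerated_dose: bool) -> Strategy:
    """Map a coarse response category onto a SACO-style option."""
    if response == "minimal" and not at_max_tolerated_dose:
        return Strategy.OPTIMIZE   # minimal response: increase dose, reassess
    if response == "none":
        return Strategy.SWITCH     # no response: alternative agent or class
    return Strategy.AUGMENT        # partial response at full dose: augment

print(next_step("minimal", at_max_tolerated_dose=False).value)
```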
Many guidelines have delineated methods for optimizing a patient's treatment regimen, including the 2010 American Psychiatric Association (APA) guidelines, the 2016 Canadian guidelines, the 2016 Department of Veterans Affairs and Department of Defense (VA/DoD) guidelines, and the 2017–2018 Florida Best Practice guidelines. Given the wide range of treatment approaches available, patients will benefit most from an individualized approach, because the level of resistance differs from patient to patient.

APA Guidelines (2010). The 2010 APA guidelines include the various treatment strategies outlined in the SACO approach and also provide significant evidence regarding the benefits of switching to alternative medication classes (Table 3). The guidelines recommend psychotherapy (cognitive-behavioral therapy [CBT], interpersonal psychotherapy [IPT]) or monotherapy with a common antidepressant as first-line therapy. Treatment response should be monitored initially, at 4 to 8 weeks, and throughout treatment. If a patient experiences severe or life-threatening symptoms with current therapy, consideration should be given to electroconvulsive therapy (ECT), dose reduction, treatment augmentation, treatment of individual side effects, or an alternative medication (tricyclic antidepressant [TCA], monoamine oxidase inhibitor [MAOI], lithium, thyroid therapy, second-generation antipsychotic).

Canadian Guidelines (2016). The Canadian guidelines (Figure 1) provide suggestions for alternative medications ranked by line of treatment.

Figure 1. Canadian Guidelines (2016). TCA: tricyclic antidepressant. Source: Reference 5.

VA/DoD Guidelines (2016). The VA/DoD guidelines recommend psychotherapy (CBT, IPT, problem-solving therapy) and appropriate monotherapy as initial treatment in patients with MDD. If the patient has an inadequate response to initial treatment, olanzapine plus fluoxetine is suggested. Olanzapine monotherapy is not indicated for TRD treatment; it should be used only in combination. This was the only guideline assessed that specified the olanzapine-plus-fluoxetine combination as a viable option for patients with TRD. If an inadequate response persists after two pharmacotherapy trials, it is appropriate to switch the patient to an MAOI or TCA. Following initiation or any dose change, patients should be monitored at least monthly until remission.

Florida Best Practice Psychotherapeutic Medication Guidelines for Adults (2017–2018). This guideline uses levels to distinguish different treatment stages (Figure 2). If a patient has an inadequate response at one level, treatment moves to the next. The diagnosis should be reevaluated following an inadequate response at levels 1 and 2.

Figure 2. Florida Best Practice Psychotherapeutic Medication Guidelines for Adults (2017–2018). CBT: cognitive-behavioral therapy; IPT: interpersonal therapy; MAOI: monoamine oxidase inhibitor; SNRI: serotonin-norepinephrine reuptake inhibitor; SSRI: selective serotonin reuptake inhibitor. Source: Reference 18.

Alternative Approaches

Additional treatment options listed in the guidelines are ECT and vagus nerve stimulation (VNS). All of the guidelines advise that ECT be reserved for patients who have had an inadequate response (or intolerance) to several trials of antidepressant classes. The APA suggests ECT as a first-line option for patients who prefer it, those with psychotic symptoms, or those with a positive response to ECT in the past.
VNS is suggested as a last-line option, and it is advised against in the VA/DoD guidelines. Transcranial magnetic stimulation, the only FDA-approved somatic therapy, is suggested in all of the guidelines (except the Canadian guidelines) as a viable treatment option for TRD if pharmacotherapy trials fail.[1,5,15]

FDA-approved Drugs

Esketamine (Spravato), a ketamine isomer, is the newest FDA-approved treatment option for TRD and is indicated for use in conjunction with an oral antidepressant. The intranasal formulation allows it to bypass the oral-bioavailability issues seen with ketamine and enables it to reach the brain faster, resulting in a quicker onset of antidepressant effects. Symbyax (olanzapine-fluoxetine) is the only other FDA-approved (2009) pharmacotherapy option for TRD.

The Pharmacist's Role

Pharmacists play a critical role in the treatment of TRD. They can assist with medication selection and appropriate dosing, and they can help identify drug interactions that may warrant alternative medications. Nonadherence due to lack of motivation or excessive side effects is a significant problem in TRD treatment. Pharmacists can help ensure medication adherence by counseling patients on onset of action, possible side effects, and the importance of adherence. It is essential that the pharmacist relay any necessary information to the prescriber so that the patient can receive optimal treatment benefits.

Conclusion

Finding an effective treatment regimen for a patient with TRD is largely a matter of trial and error. No single approach (dose reduction, optimization, switching, or augmentation) has been shown to be superior, which leaves room for individualized regimens. Patients should be monitored closely, and transparency regarding any concerns should be encouraged. Treatment is successful when provider, patient, and pharmacist work cohesively toward the goal of symptom management.

US Pharmacist. 2020;45(5):15-20. © 2020 Jobson Publishing
https://www.medscape.com/viewarticle/934028_2
Video game developers routinely get put in no-win positions by the consumers who play their games; we expect a flawless experience (an admittedly unrealistic expectation), and even when developers deliver a near-flawless game, consumers inevitably latch on to the few flaws they find and take to the internet or other public forums to air their grievances. It’s human nature, I guess, and we’re all guilty of it. But great games have a way of overcoming the little flaws that hold them back from true perfection to deliver a truly memorable experience, and they do so in a variety of ways both big and small, obvious and subtle. In most cases, it’s the big, obvious in-game moments and interactions that players latch on to which form the lasting memories (emerging from the vault for the first time in Fallout 3, playing as the Galactic Empire for the first time in Star Wars: TIE Fighter, experiencing the beauty of Ico’s unconventional game design, etc.), but the small, subtle nuances in a game’s design can go a long way in determining whether the gamer gets an enjoyable or infuriating experience. In the modern age of gaming, as game worlds expand to ever more massive sizes, one of the more subtle aspects of game design has begun to play an increasingly important role: saving.

The Basics

Before we can dive deep into the impact that great or flawed saving systems have on gameplay, we first have to understand the basics. For this discussion, we’ll focus on three important but different tools utilized by developers:

Checkpoints (according to Wikipedia): “Checkpoints are locations in a video game where a player character respawns after death. Characters generally respawn at the last checkpoint that they have reached. A respawn is most often due to the death of the in-game character, but it can also be caused by the failure to meet an objective required to advance in the game. Checkpoints might be temporary, as they stop working when the character loses all of its lives. Most modern games, however, save the game to memory at these points, known as auto-saving. Checkpoints might be visible or invisible to the player. Visible checkpoints might give a player a sense of security when activated, but in turn sacrifice some immersion, as checkpoints are intrinsically “gamey” and might even need an explanation of how they work. Invisible checkpoints don’t break immersion, but make players unsure of where they will respawn.”

Save Points (according to Wikipedia): “Some video games only allow the game to be saved at predetermined points in the game, called save points. Save points are employed either because the game is too complex to allow saving at any given point or to make gaming more engaging by forcing the player to rely on skills instead of on the ability to retry indefinitely. Save points are also far easier to program, so when a game developer has to rush a game, save points are attractive to build in; testing ‘save anywhere’ is also far more difficult.”

Save anywhere (according to Wikipedia): “A video game may allow the user to save at any point of the game, any time. The phrase “Save, save, save!” is a reference to this feature and is often included in guides to these types of games to ensure that the user takes maximum advantage of it. This was chiefly a computer-only save game ability until the introduction of hard drives on console systems. There are modified versions of this, too.
For example, the Nintendo GameCube game Eternal Darkness uses a modified version of save anywhere, where the player can save almost anytime, for an unlimited number of times, but cannot save if an enemy is in the room. To make gaming more engaging, some video games may impose a limit on the number of times a player saves the game. For instance, IGI 2 allows only a handful of saves in each mission, while Max Payne 2 only imposes this restriction on the highest level of difficulty.”

Super Mario World (checkpoints), Tales of Vesperia (save points) and The Elder Scrolls V: Skyrim (save anywhere) provide high-profile examples of each model, respectively. Each system has its place within game design, and some developers mix and match multiple models within a game to provide the best experience. But when choosing a model for a game, developers should consider the following factors: does it match what I’m trying to convey with my game, and will it be seamless for the player? If it does not match and it is not seamless, that is when players will begin to voice their displeasure.

Start at the Beginning … or the Middle

Most early games didn’t utilize saves; often the games weren’t long enough or difficult enough to justify the need for them. Those that did utilize them did so sparingly. The Legend of Zelda utilized an on-cartridge, battery-powered save file because the game’s length meant that few players would sit and beat the game in a single sitting; but even then, saving was cumbersome, requiring the player either to die or to press the A, B, Select, and Start buttons on the controller simultaneously, on top of which players were instructed to hold the reset button on the console when powering the unit down to ensure the data wasn’t corrupted. While it was nice to be able to save, it was hardly an elegant solution.

Other games utilized passwords to save progress instead of actual files. Upon the completion of a level, or a death, the system would generate an alphanumeric password to give to the player. When entered, the password served as a detailed instruction for the game to recreate the exact circumstances from when it was generated. In this way, the game was never required to store a save file, but this too presented challenges: players would have to copy and enter the password exactly, a sometimes tedious process; or, as often happened to me, the player could forget or lose the password, meaning they had to start the game again from scratch (for further reading: a list of classic games which utilized passwords for saves).
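To make the password mechanic concrete, here’s a minimal sketch in Python. The state fields, bit widths, and symbol alphabet are all invented for illustration (real games used their own, far denser encodings), but the pack-checksum-encode pattern is the general idea:

```python
# A toy password save: pack game state into an integer, append a checksum
# so mistyped passwords are rejected, and render it in a small alphabet.
# The fields (level, lives, has_sword) are hypothetical, not from any game.

ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # 32 symbols; no 0/O or 1/I

def encode_password(level: int, lives: int, has_sword: bool) -> str:
    # Pack the state: 5 bits of level, 4 bits of lives, 1 bit for the item.
    state = (level << 5) | (lives << 1) | int(has_sword)
    checksum = state % 31                # cheap typo detection
    value = (state << 5) | checksum
    chars = []
    for _ in range(4):                   # render in base 32, 4 symbols
        chars.append(ALPHABET[value & 31])
        value >>= 5
    return "".join(reversed(chars))

def decode_password(password: str) -> tuple[int, int, bool]:
    value = 0
    for ch in password:
        value = (value << 5) | ALPHABET.index(ch)
    state, checksum = value >> 5, value & 31
    if state % 31 != checksum:
        raise ValueError("invalid password")   # player mistyped it
    return state >> 5, (state >> 1) & 15, bool(state & 1)

pw = encode_password(level=12, lives=3, has_sword=True)
print(pw, decode_password(pw))  # ANHV (12, 3, True)
```

The checksum is the detail that let games reject a garbled password outright instead of dropping the player into a corrupted state, which is exactly why copying the string precisely mattered so much.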
Early games utilized post-level checkpoint systems most frequently; for example, in Super Mario Bros., once a player completed a level by triggering the flag at the castle, the level was considered closed and the player moved on to the next. Interestingly, Super Mario Bros. also utilized checkpoints halfway through levels (though they weren’t visible, meaning players had to hope they’d crossed the threshold); if a player surpassed the halfway point, then died, they would start their next life at the halfway point of the level. By the time Super Mario World appeared on the Super Nintendo Entertainment System, the midway checkpoint was in full swing with some added functionality. From my essay on Super Mario World:

“Similar in nature to the goal posts designating completion at the end of each level, a smaller version appeared at roughly the halfway point. As soon as the player crossed these posts they’d begin the level from this point should they die before reaching the end. This checkpoint also provided another player benefit – should Mario cross the marker in his “small” state, he’d instantly be transformed to his bigger version as if he’d received a Super Mushroom. This checkpoint system became especially invaluable in later levels of the game when difficulty increased – by restarting at the checkpoint rather than at the beginning of the stage, the player could focus on defeating the particular challenges which killed them rather than re-defeating the segments they’d already navigated successfully.”

Super Mario World also introduced limited save point functionality; after a player defeated a select level, such as a Ghost House, Switch Palace or Castle, they could either “save and quit” the game or “save and continue” playing.
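The midway-marker logic itself is tiny; here’s a toy Python sketch, with hypothetical names and numbers rather than Nintendo’s actual implementation, of the two behaviors described above: arming a respawn point when the marker is crossed, and granting the power-up:

```python
# A toy mid-level checkpoint in the Super Mario World style: crossing the
# (possibly invisible) marker moves the respawn point and upgrades a
# "small" player; death restarts at the marker, not the level start.

from dataclasses import dataclass

@dataclass
class Player:
    x: float = 0.0
    powered_up: bool = False

class Level:
    def __init__(self, length: float, checkpoint_x: float):
        self.length = length
        self.checkpoint_x = checkpoint_x
        self.respawn_x = 0.0                 # where the next life begins

    def update(self, player: Player) -> None:
        # Arm the checkpoint the first time the player crosses the marker.
        if player.x >= self.checkpoint_x and self.respawn_x == 0.0:
            self.respawn_x = self.checkpoint_x
            if not player.powered_up:        # the free power-up on crossing
                player.powered_up = True

    def on_death(self, player: Player) -> None:
        player.x = self.respawn_x            # halfway if armed, else start
        player.powered_up = False

level = Level(length=100.0, checkpoint_x=50.0)
mario = Player()
mario.x = 51.0
level.update(mario)
level.on_death(mario)
print(mario.x)  # 50.0: the marker was crossed, so respawn at the midpoint
```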
As games continued to evolve and expand, so too did save functionality. With the PlayStation and Nintendo 64, gamers were introduced to memory cards which were required to save game data, and eventually consoles would start to rely on hard drives, much like their PC gaming counterparts. But no matter what consoles stored game data on, the primary save functions of checkpoints, save points and save anywhere would remain staples of game design.

Making the Right Choice Matters

So how should developers choose the correct saving method for their game, and why does it matter? To answer the second question first: it matters because games are meant to be an immersive experience that draws players into the world for long stretches; if saving the game becomes a frustrating exercise, that sense of immersion is disrupted. Video games already struggle with a host of other disruptions both big and small; from Josh Snyder’s essay on semiotics: “Perhaps, if the developer isn’t so concerned about how the player will navigate their world, they can focus on character development and dialogue. Only when video games have their own fully developed semiotics will they be able to surpass film, music and literature as truly the most immersive, personable art form available.” Other disruptions occur with screen tearing, frame rate drops, invisible walls and objects, clipping, etc. With so many potential disruption points for players, adding to the list with an out-of-place save mechanic should be avoided.

Now the tougher question: how should developers choose the correct saving method for their game? First, they should think carefully about the type of game they’re making – is it a platformer with a series of short, frenetic levels? Is it an open-world Western RPG? With a clear understanding of the type of game, the choice should become easier. Let’s examine the survival horror genre as an example. Bill Henning defined the genre’s key characteristics in his essay “Reviving the Dead: The State of Survival Horror”: “Survival horror games drive players through the narrative through simple tropes: survive the night (Resident Evil), desperate escapes with plot twists and turns (Dead Space), or unraveling the player’s sanity (Silent Hill 2). Usually, health and ammo are limited, emphasizing the survival aspects of the game – players watch their health dwindle with each fight, knowing that every bullet counts, right on down to the last magazine. It’s common for players not to know when they can resupply those precious items, adding a layer of dread on top of the experience.”

With these elements in mind, spaced-out save points likely make the most sense, as the distance between them adds to the established sense of dread and anxiety by amplifying the need for detailed resource management and planning – if the player is unsure when the next chance to save will arise, they’ll need to focus on acquiring and carrying the items necessary to their survival without expending those precious resources unnecessarily.

Remedy utilized an autosave mechanic in Alan Wake, and it matched their game perfectly. As the player navigated levels abundant with darkness, light became the primary weapon for combating the enemies spawned by the “Dark Presence” and served as a safe haven where the player could rest for a precious few moments free from enemies. But the light also served as a natural checkpoint system in which the game saved the player’s progress automatically. When Remedy brought the game to the PC, gamers clamored for save anywhere capability, but Remedy didn’t cave; they offered the following explanation: “The save games will work as they did on Xbox360, with automatic checkpoints. We know some of our PC fans would love a “save anywhere” system, but unfortunately we can’t easily change how the saves work as there are certain restrictions where you can save in the tech so adding a free save anywhere would likely expose a lot of new bugs.” While it’s true that introducing save anywhere functionality may have introduced a bevy of technical problems, it also would have significantly altered the dramatic structure of the game. As Kotaku’s Brian Crecente said in his review of the game: “The emotional impact of seeing a light in the distance as you run through darkened woods, howling shadows at your heels, cannot be overstated.”

Large, open-world games like Bethesda’s The Elder Scrolls V: Skyrim or Fallout: New Vegas, on the other hand, are best served by the save anywhere mechanic, as designated checkpoints at random parts of the map would do little to enhance the story and would create a great deal of frustration for players. To be clear, both of those games utilize automatic save points when players move in and out of individual set pieces, such as caves in Skyrim and vaults in Fallout, but players are free to roam the map as they please and experience random encounters, meaning it could be hours before entering a new setting triggers an automatic save. And as anyone who has played these games can attest, a lot can go wrong in that span of time, from glitches to random encounters with overpowered enemies. In fact, when I play these games I’ve made it common practice to manually save at least every 15 minutes, just to be sure I don’t lose hours of progress should I encounter one of these game-altering enemies or other problems. In these instances, the convenience of saving anywhere overrides the disruption of immersion by helping to avert the greater frustration of lost hours of game progress. Interestingly, following the release of downloadable content for Skyrim, Bethesda recommended turning off the autosave functionality because it caused the game to crash, making frequent use of the save anywhere functionality all the more important.
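My fifteen-minute habit amounts to a poor man’s rotating autosave, and it’s simple to sketch. The following Python toy (the file names and JSON layout are my own invention, not Bethesda’s save format) rotates a few slots and writes each save atomically, so a crash mid-write never corrupts an older save:

```python
# A toy save-anywhere system with rotating slots: each save goes to the
# next slot in a small ring, and a bad write costs at most one slot.

import json
import time
from pathlib import Path

SLOTS = 3
SAVE_DIR = Path("saves")  # hypothetical location

def save_game(state: dict, slot_index: int) -> int:
    SAVE_DIR.mkdir(exist_ok=True)
    state["saved_at"] = time.time()
    path = SAVE_DIR / f"quicksave_{slot_index % SLOTS}.json"
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(state))   # write to a temp file first, then
    tmp.replace(path)                   # atomically swap it into the slot
    return slot_index + 1               # caller keeps the rotating index

def load_latest() -> dict:
    paths = sorted(SAVE_DIR.glob("quicksave_*.json"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    for p in paths:                     # fall back to older slots if needed
        try:
            return json.loads(p.read_text())
        except json.JSONDecodeError:
            continue
    raise FileNotFoundError("no usable save")

slot = 0
slot = save_game({"location": "Whiterun", "level": 12}, slot)
print(load_latest()["location"])  # Whiterun
```

The ring of slots is the point: a single save file combined with a crash during writing is exactly how hours of progress get lost.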
The Value of Mix and Match

Recently, developers have realized that these save mechanics aren’t necessarily mutually exclusive and have begun incorporating more than one into their games. The Borderlands series provides an interesting case of combining the checkpoint and save anywhere mechanics with great success. As players venture throughout the vast maps of Pandora, they’ll often stumble across checkpoints which serve two purposes. First, they trigger an automatic save, writing a complete save file to the player’s designated storage device. Second, they provide a traditional spawn point should the player meet their end in battle. On top of these multifunction checkpoints, Gearbox also incorporated a save anywhere mechanic in which the player can quit the game at any point and save their progress. But this save anywhere mechanic differs from those found in games like Fallout: New Vegas and Skyrim; when players load their saved game, they begin at the nearest checkpoint, rather than at whatever point in the world they may have saved. For example, say a player quits and saves in the middle of a map they just cleared of enemies; when they load their save the next time they play, their story progress, completed missions and inventory will all remain, but they’ll spawn at the checkpoint closest to their location when they quit, with full health and shields.

But why did Gearbox choose this model for their save anywhere, rather than a more traditional one which would have restored the game at the exact moment of the save? Two factors are likely. First, the checkpoint/respawn system was already in place, so why not use it and reduce the amount of coding and development necessary to install an independent save anywhere feature? Second, it fits within the story, as the checkpoints are referred to in-game as “New U stations” and provide a digitally reconstructed clone of the player upon their death. Quitting the game essentially acts as a death, without the associated penalty of subtracting a percentage of the player’s funds as a reconstruction fee.
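Here’s a rough Python sketch of that hybrid, with invented names and map geometry (this is my reading of the design, not Gearbox’s code): the save file persists full progress from anywhere, while loading reuses the checkpoint logic to pick the spawn point:

```python
# A toy Borderlands-style hybrid: quitting saves complete progress from
# anywhere, but loading spawns the player at the nearest checkpoint,
# reusing the respawn logic the checkpoints already provide.

from dataclasses import dataclass, field

@dataclass
class SaveFile:
    missions_done: list[str] = field(default_factory=list)
    inventory: list[str] = field(default_factory=list)
    position: tuple[float, float] = (0.0, 0.0)

# Hypothetical "New U station" locations on one map.
CHECKPOINTS = [(0.0, 0.0), (40.0, 10.0), (90.0, 55.0)]

def nearest_checkpoint(pos: tuple[float, float]) -> tuple[float, float]:
    # Squared distance is enough for picking the closest station.
    return min(CHECKPOINTS,
               key=lambda c: (c[0] - pos[0]) ** 2 + (c[1] - pos[1]) ** 2)

def quit_and_save(save: SaveFile, pos: tuple[float, float]) -> SaveFile:
    save.position = pos        # full progress is recorded from anywhere
    return save

def load(save: SaveFile) -> tuple[float, float]:
    # Missions and inventory survive intact; the exact spot does not.
    return nearest_checkpoint(save.position)

save = quit_and_save(SaveFile(missions_done=["intro"]), pos=(42.0, 12.0))
print(load(save))  # (40.0, 10.0): respawn at the closest station
```

Note how little extra machinery the save anywhere feature needs once the checkpoints exist, which is presumably the first of the two factors above.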
While more games have adopted the mix-and-match strategy to give players the best, and least disruptive, saving experience, many still rely on outdated mechanisms born of an era when hardware limitations dictated function. For example, I recently played through Tales of Vesperia for the first time and was annoyed at the prospect of having to seek out save points, which were often 90 minutes apart. This style of saving has been a staple of Japanese RPGs for years, but the more I played the game, the more it felt like an outdated relic. After a while, I found myself pining for a save anywhere mechanic. Part of the problem in this scenario is my somewhat hectic life. Rarely these days do I have time to spend hours on end sitting and playing a game. The ability to play for 30 or 45 minutes to advance the story between other day-to-day activities is one that I’ve grown accustomed to. Having to venture far and wide to seek out a save point, all the while being interrupted by battles with randomly spawning enemies, only served to disrupt my immersion and displace it with annoyance and frustration. Other than this singular issue, I truly enjoyed the game. A number of Tales games have been released since Tales of Vesperia in 2008, so perhaps the antiquated save mechanic has gone by the wayside or at least been modified for a better experience; but if not, it’s only going to seem more antiquated as other games continue to evolve. Could a simple tweak to a Borderlands-style save anywhere feature benefit the game? Would abandoning save points altogether in favor of a Skyrim-style save anywhere be more appropriate? It likely depends on the story that each Tales game seeks to deliver, but Vesperia certainly would have benefited from the inclusion of either.

Right or Wrong?

Of course, there are always games which raise questions about whether they got a particular mechanic right or wrong; when it comes to saving, the greatest debate likely centers around From Software’s Dark Souls. In a game notorious for its difficulty, with a setting designed to make the player feel weak and isolated, it’s easy to understand why From Software chose to space out its bonfire checkpoints to such an extreme degree. But many players counter that the distance artificially inflates the difficulty by shifting the focus from sections where a player struggles to needlessly repeating sections where they don’t, limiting their ability to practice the skills necessary for defeating what’s kept them from advancing. The following excerpt comes from a discussion on The Escapist and outlines the argument: “I knew it was going to be hard BUT the thing that most people were saying about the diffuculty [sic], was that the game was never cheap or unfair. But how is making me replay the same 10 minutes of gampeplay [sic] over and over and over and over again not just a cheap way of padding the gameplay? I wouldnt [sic] even mind if i was actually gaining some levels out of this constant grinding … but because i die at the boss every time, meaning I NEVER progress. I could actually travel back in time 3 hours and i’d be in the exact same position, which i think IS unfair.”

Of course, reading through the discussion turns up plenty of counterpoints (which I recommend doing), and I am honestly on the fence about this one. It’s clear from the design of the rest of the game that From Software put real thought into the positioning of its save points, but I’ve also felt the same frustration as The Escapist user quoted above. Would having more frequent checkpoints have helped ease my frustration? Yes. Would it have lessened the experience the game was trying to deliver? Also yes. What’s clear is that From Software put a lot of consideration into their checkpoint locations and mechanics, and how they would affect the gameplay, which is what every developer should do.

Time to Evolve

So much of the video game industry has evolved over the years, including the audience playing the games. Just as the developers making the games have grown more sophisticated, so have the players, and in an age of ever-expanding worlds and increasingly intricate missions, every nuance of the gameplay gets hyper-scrutinized. That scrutiny extends to elements both big and small, and how users save their games is no exception. If developers continue to use antiquated mechanics that don’t match the game, the player’s sense of immersion will come to a grinding halt, and players will voice their frustrations. As developers create their games, it’s important that they consider how saving will work within the world they’re creating. Whether they choose checkpoints, save points, save anywhere or a combined approach should depend on the story, the gameplay and what the developers hope to convey to the player. Most importantly, saving should be a seamless experience, to ensure the player’s sense of immersion remains intact.
http://www.theoryofgaming.com/art-saving/