Nicotinic antagonist augmentation of selective serotonin reuptake inhibitor-refractory major depressive disorder: a preliminary study.
BACKGROUND There is evidence for nicotinic hypercholinergic mechanisms in depression. Clinical relationships between tobacco use and depression suggest important effects of nicotine in major depressive disorder (MDD). It has been hypothesized that cigarette smoking may exert antidepressant effects, presumably mediated through stimulation of nicotinic acetylcholine receptor systems. We compared the nicotinic antagonist, mecamylamine hydrochloride (MEC), with placebo as an augmentation strategy for patients with MDD who were refractory to selective serotonin reuptake inhibitor (SSRI) treatment. METHODS Twenty-one SSRI-treated subjects with MDD were randomized to MEC (up to 10 mg/d; n = 11) or placebo (PLO group; n = 10) during an 8-week trial. The primary outcome measure was the change in depressive symptoms assessed using the 17-item Hamilton Depression Rating Scale during the 8-week trial. RESULTS There was a significant reduction in 17-item Hamilton Depression Rating Scale scores in the MEC versus PLO groups, as evidenced by a significant medication × time interaction (F(1,19) = 6.47, P < 0.05). Five (45.5%) of 11 subjects in the active study medication group demonstrated a 50% or more decrease in depressive symptoms from baseline as compared with 1 (10%) of 10 subjects assigned to placebo medication, but this difference was not significant (P = 0.15; Fisher exact test). The primary side effects of MEC were constipation and orthostatic hypotension. CONCLUSIONS These preliminary findings suggest that the nicotinic acetylcholine receptor antagonist, MEC, may have utility as an augmentation strategy for patients with SSRI-refractory MDD.
A predictive model of pathologic response based on tumor cellularity and tumor-infiltrating lymphocytes (CelTIL) in HER2-positive breast cancer treated with chemo-free dual HER2 blockade.
Background The presence of stromal tumor-infiltrating lymphocytes (TILs) is associated with increased pathologic complete response (pCR) and improved outcomes in HER2-positive early breast cancer (BC) treated with anti-HER2-based chemotherapy. In the absence of chemotherapy, the association of TILs with pCR following anti-HER2 therapy-only is largely unknown. Patients and methods The PAMELA neoadjuvant trial treated 151 women with HER2-positive BC with lapatinib and trastuzumab [and hormonal therapy if hormone receptor (HR)-positive] for 18 weeks. Percentage of TILs and tumor cellularity were determined at baseline (N = 148) and at day 15 (D15) of treatment (N = 134). Associations of TILs and tumor cellularity with pCR in the breast were evaluated. A combined score based on tumor cellularity and TILs (CelTIL) measured at D15 was derived in PAMELA, and validated in D15 samples from 65 patients with HER2-positive disease recruited in the LPT109096 neoadjuvant trial, where anti-HER2 therapy-only was administered for 2 weeks, then standard chemotherapy was added for 24 weeks. Results In PAMELA, baseline and D15 TILs were significantly associated with pCR in univariate analysis. In multivariable analysis, D15 TILs, but not baseline TILs, were significantly associated with pCR. At D15, TILs and tumor cellularity were found to be independently associated with pCR. A combined score (CelTIL) taking into account both variables was derived. CelTIL at D15 as a continuous variable was significantly associated with pCR, and patients with CelTIL-low and CelTIL-high scores had a pCR rate of 0% and 33%, respectively. In LPT109096, CelTIL at D15 was found to be associated with pCR both as a continuous variable and as group categories using a pre-defined cut-off (75.0% versus 33.3%). Conclusions On-treatment TILs, but not baseline TILs, are independently associated with response following anti-HER2 therapy-only. A combined score of TILs and tumor cellularity measured at D15 provides independent predictive information upon completion of neoadjuvant anti-HER2-based therapy. Clinical trial number NCT01973660.
Satisfaction with information provided to Danish cancer patients: validation and survey results.
OBJECTIVES To validate five items (CPWQ-inf) regarding satisfaction with information provided to cancer patients from health care staff, assess the prevalence of dissatisfaction with this information, and identify factors predicting dissatisfaction. METHODS The questionnaire was validated by patient-observer agreement and cognitive interviews. The prevalence of dissatisfaction was assessed in a cross-sectional sample of all cancer patients in contact with hospitals during the past year in three Danish counties. RESULTS The validation showed that the CPWQ performed well. Between 3 and 23% of the 1490 participating patients were dissatisfied with each of the measured aspects of information. The highest level of dissatisfaction was reported regarding the guidance, support and help provided when the diagnosis was given. Younger patients were consistently more dissatisfied than older patients. CONCLUSIONS The brief CPWQ performs well for survey purposes. The survey depicts the heterogeneous patient population encountered by hospital staff and showed that younger patients probably had higher expectations or a higher need for information and that those with more severe diagnoses/prognoses require extra care in providing information. PRACTICAL IMPLICATIONS Four brief questions can efficiently assess information needs. With increasing demands for information, a wide range of innovative initiatives is needed.
Large-Scale Sentiment Analysis for News and Blogs (system demonstration)
Newspapers and blogs express opinions about news entities (people, places, things) while reporting on recent events. We present a system that assigns scores indicating positive or negative opinion to each distinct entity in the text corpus. Our system consists of a sentiment identification phase, which associates expressed opinions with each relevant entity, and a sentiment aggregation and scoring phase, which scores each entity relative to others in the same class. Finally, we evaluate the significance of our scoring techniques over a large corpus of news and blogs.
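A minimal sketch of the two-phase pipeline described above: a sentiment identification step that associates lexicon polarity with co-occurring entities, and an aggregation step that scores each entity relative to others in its class. The lexicon, entity list, and z-score normalization are illustrative assumptions, not the authors' actual system.

```python
# Toy two-phase entity sentiment scoring: identify, then aggregate per class.
from collections import defaultdict
import statistics

LEXICON = {"great": 1, "win": 1, "scandal": -1, "fails": -1}     # toy polarity lexicon (assumption)
ENTITIES = {"Acme Corp": "company", "Globex Inc": "company", "Rivertown": "place"}  # toy entity -> class map

def sentiment_counts(sentences):
    """Phase 1: count positive/negative lexicon hits in sentences mentioning each entity."""
    counts = defaultdict(lambda: [0, 0])  # entity -> [positive, negative]
    for sentence in sentences:
        polarity = sum(LEXICON.get(t, 0) for t in sentence.lower().split())
        for entity in ENTITIES:
            if entity.lower() in sentence.lower():
                if polarity > 0:
                    counts[entity][0] += 1
                elif polarity < 0:
                    counts[entity][1] += 1
    return counts

def relative_scores(counts):
    """Phase 2: score each entity against others of the same class (z-score of raw polarity)."""
    raw = {e: (p - n) / max(p + n, 1) for e, (p, n) in counts.items()}
    by_class = defaultdict(list)
    for entity, score in raw.items():
        by_class[ENTITIES[entity]].append(score)
    scores = {}
    for entity, score in raw.items():
        peers = by_class[ENTITIES[entity]]
        sigma = statistics.pstdev(peers) or 1.0
        scores[entity] = (score - statistics.mean(peers)) / sigma
    return scores

sentences = ["Acme Corp fails to deliver", "Acme Corp scandal deepens",
             "Globex Inc posts a great win", "Rivertown is a great place"]
print(relative_scores(sentiment_counts(sentences)))
```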
The HIT study: Hymenoptera Identification Test--how accurate are people at identifying stinging insects?
BACKGROUND Stinging insects in the order Hymenoptera include bees, wasps, yellow jackets, hornets, and ants. Hymenoptera sting injuries range from localized swelling to, rarely, death. Insect identification is helpful in the management of sting injuries. OBJECTIVE To determine the accuracy of adults in identifying stinging insects and 2 insect nests. METHODS This was a cross-sectional, multicenter study using a picture-based survey to evaluate an individual's success at identifying honeybees, wasps, bald-face hornets, and yellow jackets. Bald-face hornet and paper wasp nest identification also was assessed in this study. RESULTS Six hundred forty participants completed the questionnaire. Overall, the mean number of correct responses was 3.2 (SD 1.3) of 6. Twenty participants (3.1%) correctly identified all 6 stinging insects and nests and only 10 (1.6%) were unable to identify any of the pictures correctly. The honeybee was the most accurately identified insect (91.3%) and the paper wasp was the least correctly identified insect (50.9%). For the 6 questions regarding whether the participant had been stung in the past by any of the insects (including an unidentified insect), 91% reported being stung by at least 1. Men were more successful at identifying stinging insects correctly (P = .002), as were participants stung by at least 4 insects (P = .018). CONCLUSION This study supports the general perception that adults are poor discriminators in distinguishing stinging insects and nests, with the exception of the honeybee. Men and those participants who reported being stung by at least 4 of the insects were more accurate overall in insect identification.
A case of precocious puberty in a female.
Sexual precocity is considered to be present when indications of genital maturation become apparent in boys before the age of 10 years and in girls before the age of 8 years (Seckel, 1946). It is customary to divide these cases into two groups. In those with true precocious puberty maturation with spermatogenesis or ovulation has occurred in a normal manner, but at an abnormally early age; in those with pseudoprecocious puberty, premature development of the secondary sex organs, but without spermatogenesis or ovulation, has occurred as a result of an ovarian or adreno-cortical tumour, unusual sensitivity of end-organs to normal hormonal stimulation, or exogenous application of sex hormones or other compounds (Talbot, Sobel, McArthur and Crawford, 1952). In a small proportion of cases true precocious puberty is associated with tumours or cysts in the region of the hypothalamus or with post-meningitic or postencephalitic lesions, but in the majority diligent and repeated search fails to reveal any abnormality in the nervous system or endocrine glands (Wilkins, 1957). Such cases are generally referred to as 'idiopathic' or 'constitutional' (Novak, 1944), and it is suggested that the genetic factor or factors that determine the time of hypothalamic sex maturation must be at fault (Seckel, 1946). In a small percentage of cases there is a heredo-familial tendency (Rush, Bilderback, Slocum and Rogers, 1937; Jacobsen and Macklin, 1952). Precocious puberty of idiopathic origin represents simply the early appearance of normal phenomena, and many of the physical and laboratory examinations reveal findings normal for older children (Lloyd, Lobotsky and Morley, 1950). Except for the hazard of precocious pregnancy and the possibility of subnormal stature, the prognosis of girls with idiopathic precocious puberty is good, and it does not appear that the menopause is accelerated or that premature senility occurs (Talbot et al., 1952; Jolly, 1955). The following case of precocious puberty occurring in a female child appears to present features of sufficient interest to warrant publication. Case History
R-C3D: Region Convolutional 3D Network for Temporal Activity Detection
We address the problem of activity detection in continuous, untrimmed video streams. This is a difficult task that requires extracting meaningful spatio-temporal features to capture activities and accurately localizing the start and end times of each activity. We introduce a new model, Region Convolutional 3D Network (R-C3D), which encodes the video streams using a three-dimensional fully convolutional network, then generates candidate temporal regions containing activities, and finally classifies selected regions into specific activities. Computation is saved due to the sharing of convolutional features between the proposal and the classification pipelines. The entire model is trained end-to-end with jointly optimized localization and classification losses. R-C3D is faster than existing methods (569 frames per second on a single Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS’14. We further demonstrate that our model is a general activity detection framework that does not rely on assumptions about particular dataset properties by evaluating our approach on ActivityNet and Charades. Our code is available at http://ai.bu.edu/r-c3d/
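A toy PyTorch sketch of the overall idea: a shared 3D-convolutional encoder feeds both a temporal-proposal head and an activity-classification head, so convolutional features are computed once. Layer sizes, the anchor count, and the simple 1D heads are illustrative simplifications, not the paper's exact architecture.

```python
# Minimal R-C3D-style skeleton: shared 3D conv features, proposal + classification heads.
import torch
import torch.nn as nn

class TinyRC3D(nn.Module):
    def __init__(self, num_classes=20, anchors_per_step=4):
        super().__init__()
        self.backbone = nn.Sequential(                       # shared spatio-temporal features
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),              # keep temporal axis, pool space away
        )
        # proposal head: per temporal step, objectness + (center, length) offsets per anchor
        self.proposal = nn.Conv1d(32, anchors_per_step * 3, kernel_size=3, padding=1)
        # classification head: per temporal step, activity class scores
        self.classifier = nn.Conv1d(32, num_classes, kernel_size=3, padding=1)

    def forward(self, clip):                                 # clip: (B, 3, T, H, W)
        feat = self.backbone(clip).squeeze(-1).squeeze(-1)   # (B, 32, T)
        return self.proposal(feat), self.classifier(feat)

model = TinyRC3D()
proposals, scores = model(torch.randn(2, 3, 16, 112, 112))
print(proposals.shape, scores.shape)
```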
Reduction of C=O functional groups through H addition reactions: a comparative study between H2CO + H, CH3CH2CHO + H and CH3OCHO + H under interstellar conditions.
H-addition reactions on icy interstellar grains may play an important role in the formation of complex organic molecules. In the present work we propose a comparative study of H2CO + H, CH3CH2CHO + H and CH3OCHO + H solid state reactions at 10 K under interstellar conditions in order to characterize the main reaction pathways involved in the hydrogenation of a CHO functional group. We show that the most probable mechanism for the formation of alcohols under non-energetic conditions through the saturation of the CHO group corresponds to the attachment of the H atom to the CH group, with noticeable variations of the energy barriers for each studied reaction. These energy barriers have been calculated to be 8.3, 14.6 and 32.7 kJ mol-1 for H2CO + H, CH3CH2CHO + H and CH3OCHO + H, respectively. The coupling of the experimental and theoretical analysis proves that while the simplest aldehyde, formaldehyde, is easily reduced to methanol, methylformate and propanal behave differently under H bombardment and cannot be a source of alcohol formation through H-addition reactions. Consequently, for the formation of alcohols larger than CH3OH, other chemical pathways should be taken into account, probably energetic processing such as the photolysis of interstellar ice analogues containing C-, H- and O-bearing compounds or the coupling of the H-addition reaction and photon-irradiation on species with a CHO functional group.
High-risk sex offenders may not be high risk forever.
This study examined the extent to which sexual offenders present an enduring risk for sexual recidivism over a 20-year follow-up period. Using an aggregated sample of 7,740 sexual offenders from 21 samples, the yearly recidivism rates were calculated using survival analysis. Overall, the risk of sexual recidivism was highest during the first few years after release, and decreased substantially the longer individuals remained sex offense-free in the community. This pattern was particularly strong for the high-risk sexual offenders (defined by Static-99R scores). Whereas the 5-year sexual recidivism rate for high-risk sex offenders was 22% from the time of release, this rate decreased to 4.2% for the offenders in the same static risk category who remained offense-free in the community for 10 years. The recidivism rates of the low-risk offenders were consistently low (1%-5%) for all time periods. The results suggest that offense history is a valid, but time-dependent, indicator of the propensity to sexually reoffend. Further research is needed to explain the substantial rate of desistance by high-risk sexual offenders.
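The abstract names survival analysis as the method behind the time-dependent recidivism rates. Below is a minimal Kaplan-Meier style sketch showing how offense-free survival is estimated from right-censored follow-up data; the tiny data set is invented purely for illustration and has no relation to the study's sample.

```python
# Kaplan-Meier style estimate of remaining offense-free over follow-up years.
import numpy as np

def kaplan_meier(time_to_event_years, event_observed):
    """Return (event_times, survival_prob) for right-censored follow-up data."""
    t = np.asarray(time_to_event_years, dtype=float)
    e = np.asarray(event_observed, dtype=bool)
    survival, out_t, out_s = 1.0, [], []
    for ti in np.unique(t[e]):                 # only event times change the estimate
        at_risk = np.sum(t >= ti)              # still being followed at time ti
        events = np.sum((t == ti) & e)         # recidivism events at time ti
        survival *= 1.0 - events / at_risk
        out_t.append(ti); out_s.append(survival)
    return out_t, out_s

# invented follow-up years until recidivism (event=True) or censoring (event=False)
years  = [1.2, 2.5, 3.0, 4.1, 5.0, 6.3, 7.7, 10.0, 12.5, 15.0]
events = [True, True, False, True, False, False, True, False, False, False]
for ti, s in zip(*kaplan_meier(years, events)):
    print(f"year {ti:4.1f}: est. offense-free proportion {s:.2f}")
```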
Simple low-dimensional features approximating NCC-based image matching
This paper proposes new low-dimensional image features that enable images to be very efficiently matched. Image matching is one of the key technologies for many vision-based applications, including template matching, block motion estimation, video compression, stereo vision, image/video near-duplicate detection, similarity join for image/video databases, and so on. Normalized cross correlation (NCC) is a widely used method for image matching with preferable characteristics such as robustness to intensity offsets and contrast changes, but it is computationally expensive. The proposed features, derived by the method of Lagrange multipliers, can provide upper bounds on NCC as a simple dot product between two low-dimensional feature vectors. By using the proposed features, NCC-based image matching can be effectively accelerated. The matching performance with the proposed features is demonstrated using an image database obtained from actual broadcast videos. The new features are shown to outperform other methods: multilevel successive elimination algorithm (MSEA), discrete cosine transform (DCT) coefficients, and histograms. The proposed method can achieve very high precision while only slightly sacrificing recall.
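For reference, the quantity being bounded is the standard NCC between mean-subtracted patches. The sketch below computes exact NCC and shows the generic pruning pattern in which a cheap upper bound skips the expensive exact computation; the trivial `bound_fn` is a placeholder, not the paper's Lagrange-multiplier features.

```python
# Exact NCC plus a generic bound-then-verify matching loop (bound is a placeholder).
import numpy as np

def ncc(a, b):
    """NCC of two equal-size patches; robust to intensity offset and contrast change."""
    a = a.astype(float).ravel(); b = b.astype(float).ravel()
    a -= a.mean(); b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_match(query, candidates, bound_fn, threshold=0.9):
    """Skip the exact NCC whenever the cheap upper bound falls below the threshold."""
    best = (None, -1.0)
    for idx, cand in enumerate(candidates):
        if bound_fn(query, cand) < threshold:   # cheap upper bound rejects this candidate
            continue
        score = ncc(query, cand)                # expensive exact match only for survivors
        if score > best[1]:
            best = (idx, score)
    return best

rng = np.random.default_rng(0)
patches = [rng.random((16, 16)) for _ in range(100)]
query = patches[42] * 1.5 + 0.1                 # same pattern, different contrast and offset
print(best_match(query, patches, bound_fn=lambda q, c: 1.0))  # trivial bound = no pruning
```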
A Compensation Technique for Smooth Transitions in Non-inverting Buck-Boost Converter
With the advent of battery-powered portable devices and the mandatory adoption of power factor correction (PFC), the non-inverting buck-boost converter is attracting considerable attention. Conventional two-switch or four-switch non-inverting buck-boost converters choose their operation modes by measuring input and output voltage magnitudes. This can cause higher output voltage transients when the input and output are close to each other. For mode selection, the comparison of input and output voltage magnitudes is not sufficient because of the voltage drops introduced by the parasitic components. In addition, the difference in the minimum and maximum effective duty cycle between the controller output and the switching device yields a discontinuity at the instant of mode change. Moreover, the different properties of output voltage versus a given duty cycle in the buck and boost operating modes contribute to the output voltage transients. In this paper, the effect of the discontinuity due to the effective duty cycle derived from device switching time at the mode change is analyzed. A technique to compensate the output voltage transient due to this discontinuity is proposed. In order to attain additional mitigation of output transients and a linear input/output voltage characteristic in buck and boost modes, the linearization of the DC-gain of the large signal model in boost operation is analyzed as well. Analytical, simulation, and experimental results are presented to validate the proposed theory.
Fast congruence closure and extensions
Congruence closure algorithms for deduction in ground equational theories are ubiquitous in many (semi-) decision procedures used for verification and automated deduction. In many of these applications one needs an incremental algorithm that is moreover capable of recovering, among the thousands of input equations, the small subset that explains the equivalence of a given pair of terms. In this paper we present an algorithm satisfying all these requirements. First, building on ideas from abstract congruence closure algorithms, we present a very simple and clean incremental congruence closure algorithm and show that it runs in the best known time O(n log n). After that, we introduce a proof-producing union-find data structure that is then used for extending our congruence closure algorithm, without increasing the overall O(n log n) time, in order to produce a k-step explanation for a given equation in almost optimal time (quasi-linear in k). Finally, we show that the previous algorithms can be smoothly extended, while still obtaining the same asymptotic time bounds, in order to support the interpreted function symbols successor and predecessor, which have been shown to be very useful in applications such as microprocessor verification.
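A minimal sketch of the union-find structure at the heart of congruence closure. The paper's proof-producing version additionally records, for each union, which input equation caused it so that a k-step explanation can be extracted later; that bookkeeping is only hinted at here via the `reason` log, and the term strings are purely illustrative.

```python
# Union-find with path compression and union by rank, plus a toy "reason" log.
class UnionFind:
    def __init__(self):
        self.parent, self.rank, self.reason = {}, {}, []

    def find(self, x):
        self.parent.setdefault(x, x)
        self.rank.setdefault(x, 0)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path compression (halving)
            x = self.parent[x]
        return x

    def union(self, a, b, cause=None):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                               # union by rank
        self.rank[ra] += self.rank[ra] == self.rank[rb]
        self.reason.append((a, b, cause))                  # remember why a ~ b was merged

uf = UnionFind()
uf.union("f(a)", "b", cause="eq1: f(a) = b")
uf.union("b", "c", cause="eq2: b = c")
print(uf.find("f(a)") == uf.find("c"), uf.reason)
```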
Soluble Aβ seeds are potent inducers of cerebral β-amyloid deposition.
Cerebral β-amyloidosis and associated pathologies can be exogenously induced by the intracerebral injection of small amounts of pathogenic Aβ-containing brain extract into young β-amyloid precursor protein (APP) transgenic mice. The probable β-amyloid-inducing factor in the brain extract has been identified as a species of aggregated Aβ that is generated in its most effective conformation or composition in vivo. Here we report that Aβ in the brain extract is more proteinase K (PK) resistant than is synthetic fibrillar Aβ, and that this PK-resistant fraction of the brain extract retains the capacity to induce β-amyloid deposition upon intracerebral injection in young, pre-depositing APP23 transgenic mice. After ultracentrifugation of the brain extract, <0.05% of the Aβ remained in the supernatant fraction, and these soluble Aβ species were largely PK sensitive. However, upon intracerebral injection, this soluble fraction accounted for up to 30% of the β-amyloid induction observed with the unfractionated extract. Fragmentation of the Aβ seeds by extended sonication increased the seeding capacity of the brain extract. In summary, these results suggest that multiple Aβ assemblies, with various PK sensitivities, are capable of inducing β-amyloid aggregation in vivo. The finding that small and soluble Aβ seeds are potent inducers of cerebral β-amyloidosis raises the possibility that such seeds may mediate the spread of β-amyloidosis in the brain. If they can be identified in vivo, soluble Aβ seeds in bodily fluids also could serve as early biomarkers for cerebral β-amyloidogenesis and eventually Alzheimer's disease.
Scalability and sparsity issues in recommender datasets: a survey
Recommender systems have been widely used in various domains, including movies, news, and music, with the aim of providing the most relevant suggestions to users from a variety of available options. Recommender systems are designed using techniques from many fields, some of which are: machine learning, information retrieval, data mining, linear algebra and artificial intelligence. Although in-memory nearest-neighbor computation is a typical approach for collaborative filtering due to its high recommendation accuracy, its scalability remains poor given a huge user and item base and the availability of only a few ratings (i.e., data sparsity) in typical merchandising applications. In order to alleviate scalability and sparsity issues in recommender systems, several model-based approaches have been proposed in the past. However, if research in recommender systems is to achieve its potential, there is a need to understand the prominent techniques used directly to build recommender systems or to preprocess recommender datasets, along with their strengths and weaknesses. In this work, we present an overview of some of the prominent traditional as well as advanced techniques that can effectively handle data dimensionality and data sparsity. The focus of this survey is to present an overview of the applicability of some advanced techniques, particularly clustering, biclustering, matrix factorization, graph-theoretic, and fuzzy techniques in recommender systems. In addition, it highlights the applicability and recent research works done using each technique.
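One of the model-based techniques the survey highlights for handling sparsity, matrix factorization, can be illustrated in a few lines: a sparse user-item rating matrix is approximated by low-rank user and item factors fit by stochastic gradient descent. The data, rank, and hyperparameters below are illustrative, not drawn from any system in the survey.

```python
# Minimal SGD matrix factorization over a sparse list of (user, item, rating) triples.
import numpy as np

def factorize(ratings, rank=2, lr=0.01, reg=0.05, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _, _ in ratings)
    n_items = 1 + max(i for _, i, _ in ratings)
    P = 0.1 * rng.standard_normal((n_users, rank))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, rank))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                    # error on this observed rating
            P[u] += lr * (err * Q[i] - reg * P[u])   # regularized gradient steps
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

observed = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 5.0)]
P, Q = factorize(observed)
print("predicted rating for user 1, item 2:", round(float(P[1] @ Q[2]), 2))
```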
Competing for Attention in Social Media under Information Overload Conditions
Modern social media are becoming overloaded with information because of the rapidly-expanding number of information feeds. We analyze the user-generated content in Sina Weibo, and find evidence that the spread of popular messages often follows a mechanism that differs from the spread of disease, in contrast to common belief. In this mechanism, an individual with more friends needs more repeated exposures before spreading the information further. Moreover, our data suggest that for certain messages the chance of an individual to share the message is proportional to the fraction of its neighbours who shared it with him/her, which is a result of competition for attention. We model this process using a fractional susceptible infected recovered (FSIR) model, where the infection probability of a node is proportional to its fraction of infected neighbors. Our findings have dramatic implications for information contagion. For example, using the FSIR model we find that real-world social networks have a finite epidemic threshold, in contrast to the zero threshold in disease epidemic models. This means that when individuals are overloaded with excess information feeds, the information either spreads through the population if it is above the critical epidemic threshold or is never widely received.
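A toy simulation of the FSIR mechanism described above: a node's chance of sharing is proportional to the fraction, rather than the count, of its neighbors that have already shared. The graph model, network size, and parameter values are illustrative assumptions, not those of the paper.

```python
# Fractional-SIR (FSIR) style spreading on a random graph: infection probability ~ fraction of infected neighbors.
import random
import networkx as nx

def fsir_step(G, infected, recovered, beta=0.6, gamma=0.3, rng=random):
    newly_infected, newly_recovered = set(), set()
    for node in G.nodes:
        if node in infected or node in recovered:
            continue
        nbrs = list(G.neighbors(node))
        if not nbrs:
            continue
        frac = sum(n in infected for n in nbrs) / len(nbrs)   # fraction of sharing neighbors
        if rng.random() < beta * frac:                        # FSIR: proportional to fraction
            newly_infected.add(node)
    for node in infected:
        if rng.random() < gamma:                              # sharers lose interest (recover)
            newly_recovered.add(node)
    return (infected | newly_infected) - newly_recovered, recovered | newly_recovered

random.seed(1)
G = nx.erdos_renyi_graph(200, 0.05, seed=1)
infected, recovered = {0, 1, 2}, set()
for _ in range(30):
    infected, recovered = fsir_step(G, infected, recovered)
print("ever reached:", len(infected | recovered), "of", G.number_of_nodes())
```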
The effect of rear-wheel position on seating ergonomics and mobility efficiency in wheelchair users with spinal cord injuries: a pilot study.
This study analyzed the effect of rear-wheel position on seating comfort and mobility efficiency. Twelve randomly selected paraplegic wheelchair users participated in the study. Wheelchairs were tested in two rear-wheel positions while the users operated the wheelchair on a treadmill and while they worked on a computer. Propulsion efficiency, seating comfort, and propulsion qualities were registered at different loads during the treadmill session. During the computer session, pelvic position, estimated seating comfort, and estimated activity performance were measured. The change in rear-wheel position affected wheelchair ergonomics with respect to weight distribution (p < 0.0001) and seat inclination angle (position I = 5 and position II = 12). These changes had a significant effect on push frequency (p < 0.05) and stroke angle (p < 0.05) during wheelchair propulsion. We found no consistent effect on mechanical efficiency, estimated exertion, breathlessness, seating comfort, estimated propulsion qualities, pelvic position, or activity performance.
Learning to Respond with Deep Neural Networks for Retrieval-Based Human-Computer Conversation System
Establishing an automatic conversation system between humans and computers is regarded as one of the most challenging problems in computer science, involving interdisciplinary techniques in information retrieval, natural language processing, artificial intelligence, etc. The challenges lie in how to respond so as to maintain a relevant and continuous conversation with humans. With the prosperity of Web 2.0, we are now able to collect massive amounts of publicly available conversational data, which creates a great opportunity to build automatic conversation systems. Owing to the diversity of Web resources, a retrieval-based conversation system will be able to find at least some responses from the massive repository for any user input. Given a human-issued message, i.e., a query, our system provides a reply after adequate training and learning of how to respond. In this paper, we propose a retrieval-based conversation system with a deep learning-to-respond schema through a deep neural network framework driven by web data. The proposed model is general and unified for different conversation scenarios in the open domain. We incorporate the impact of multiple data inputs, and formulate various features and factors with optimization into the deep learning framework. In the experiments, we investigate the effectiveness of the proposed deep neural network structures with different combinations of all the evidence. We demonstrate significant performance improvement against a series of standard and state-of-the-art baselines in terms of p@1, MAP, nDCG, and MRR for conversational purposes.
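For clarity, the two simplest evaluation metrics quoted above (p@1 and MRR) over ranked candidate replies can be computed as follows; the ranked lists and relevance labels here are invented for illustration only.

```python
# p@1 and MRR over ranked reply lists, one 0/1 relevance list per query.
def precision_at_1(ranked_relevance):
    """1 if the top-ranked reply is appropriate, else 0."""
    return float(ranked_relevance[0]) if ranked_relevance else 0.0

def reciprocal_rank(ranked_relevance):
    """1/rank of the first appropriate reply, or 0 if none is appropriate."""
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

# one label list per query: 1 = appropriate reply, 0 = inappropriate
runs = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]]
print("p@1:", sum(precision_at_1(r) for r in runs) / len(runs))
print("MRR:", sum(reciprocal_rank(r) for r in runs) / len(runs))
```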
Price Competition in Markets with Consumer Variety Seeking
We investigate price competition between firms in markets characterized by consumer variety seeking. While previous research has addressed the effect of consumer inertia on prices, there exists no research on the effects of variety seeking on price competition. Our study fills this gap in the literature. Using a two-period duopoly framework as in Klemperer's analysis of inertial markets, we show that the noncooperative pricing equilibrium in a market with consumer variety seeking may be the same as the collusive outcome in an otherwise identical market without variety seeking. Specifically, our variety-seeking model implies tacit collusion between firms in both periods, unlike the inertia model of Klemperer that implies tacit collusion between firms only in the second period but implies fierce price competition in the first period. When consumers are assumed to have rational expectations about future prices, the implied first-period prices increase further, which is consistent with what Klemperer finds in an inertial market. To summarize, while our variety-seeking analyses support two key results (pertaining to second-period prices and rational expectations) previously derived for inertial markets by Klemperer, they depart from one key result (pertaining to first-period prices).
Data-Driven News Generation for Automated Journalism
Despite increasing amounts of data and ever improving natural language generation techniques, work on automated journalism is still relatively scarce. In this paper, we explore the field and the challenges associated with building a journalistic natural language generation system. We present a set of requirements that should guide system design, including transparency, accuracy, modifiability and transferability. Guided by the requirements, we present a data-driven architecture for automated journalism that is largely domain and language independent. We illustrate its practical application in the production of news articles, upon user request, about the 2017 Finnish municipal elections in three languages, demonstrating the success of the data-driven, modular approach of the design. We then draw some lessons for future automated journalism.
Sentence Reduction for Automatic Text Summarization
We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purposes. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries.
Management of psoriasis and psoriatic arthritis in a combined dermatology and rheumatology clinic
Psoriasis and psoriatic arthritis (PsA) are chronic systemic inflammatory disorders with wide spectrums of cutaneous and musculoskeletal presentations. Management of joint disease in this population can be challenging and often requires the expertise of rheumatology in conjunction with dermatology. A multidisciplinary clinic setting may benefit these patients, and in this study we sought to evaluate the experience of such a model. We performed a retrospective chart review of patients evaluated between October 2003 and October 2009 in the Center for Skin and Related Musculoskeletal Diseases (SARM) at Brigham and Women’s Hospital, Boston, MA, USA, where patients are seen by both an attending rheumatologist and dermatologist. Main outcomes included the presence of comorbidities, accuracy of the initial diagnosis, and escalation of treatment modalities. Over the 6-year period, 510 patients were evaluated. Two hundred sixty-eight patients had psoriasis and/or PsA. The prevalence of comorbidities was high (45% hypertension, 46% hyperlipidemia, 19% diabetes, and 36% past or current smoking). Evaluation in SARM resulted in a revised diagnosis that differed from the previous diagnosis at outside clinics in 46% of cases. Patients were more likely to receive a systemic medication after the evaluation in SARM as compared to before, 25 versus 15%, respectively, with an odds ratio of 5.1. Patients were also more likely to be treated with a biologic agent after the evaluation in SARM as compared to before, 37 versus 16%, respectively. Multidisciplinary care may facilitate the diagnosis of joint disease and offers a more comprehensive treatment approach for patients with both psoriasis and PsA. Our data can be used to support the efforts to provide integrated rheumatologic and dermatologic care for this population.
Enhanced membrane protein topology prediction using a hierarchical classification method and a new scoring function.
The prediction of transmembrane (TM) helix and topology provides important information about the structure and function of a membrane protein. Due to the experimental difficulties in obtaining a high-resolution model, computational methods are highly desirable. In this paper, we present a hierarchical classification method using support vector machines (SVMs) that integrates selected features by capturing the sequence-to-structure relationship and developing a new scoring function based on membrane protein folding. The proposed approach is evaluated on low- and high-resolution data sets with cross-validation, and the topology (sidedness) prediction accuracy reaches as high as 90%. Our method is also found to correctly predict both the location of TM helices and the topology for 69% of the low-resolution benchmark set. We also test our method for discrimination between soluble and membrane proteins and achieve very low overall false positive (0.5%) and false negative rates (0 to approximately 1.2%). Lastly, the analysis of the scoring function suggests that the topogeneses of single-spanning and multispanning TM proteins have different levels of complexity, and the consideration of interloop topogenic interactions for the latter is the key to achieving better predictions. This method can facilitate the annotation of membrane proteomes to extract useful structural and functional information. It is publicly available at http://bio-cluster.iis.sinica.edu.tw/~bioapp/SVMtop.
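A schematic of the hierarchical-classification idea: one SVM first discriminates membrane from soluble proteins, and a second SVM predicts topology (sidedness) for the proteins routed down the membrane branch. The random feature vectors and toy labels below are placeholders standing in for the sequence-derived features used in the paper, and the two-stage split is a coarse illustration rather than the authors' exact hierarchy.

```python
# Two-stage (hierarchical) SVM classification: membrane vs. soluble, then topology.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))                 # placeholder per-protein feature vectors
is_membrane = (X[:, 0] > 0).astype(int)            # toy stage-1 labels
topology = (X[:, 1] > 0).astype(int)               # toy stage-2 labels (N-terminus in/out)

stage1 = SVC(kernel="rbf").fit(X, is_membrane)     # membrane vs. soluble
mem_idx = np.where(is_membrane == 1)[0]
stage2 = SVC(kernel="rbf").fit(X[mem_idx], topology[mem_idx])  # sidedness, membrane branch only

def predict(x):
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return "soluble"
    return "membrane, N-terminus " + ("inside" if stage2.predict(x)[0] else "outside")

print(predict(X[0]))
```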
GraphP: Reducing Communication for PIM-Based Graph Processing with Efficient Data Partition
Processing-In-Memory (PIM) is an effective technique that reduces data movements by integrating processing units within memory. The recent advances in “big data” and 3D stacking technology make PIM a practical and viable solution for modern data processing workloads. This is exemplified by recent research interest in PIM-based acceleration. Among them, TESSERACT is a PIM-enabled parallel graph processing architecture based on Micron’s Hybrid Memory Cube (HMC), one of the most prominent 3D-stacked memory technologies. It implements a Pregel-like vertex-centric programming model, so that users could develop programs in the familiar interface while taking advantage of PIM. Despite the orders of magnitude speedup compared to DRAM-based systems, TESSERACT generates excessive cross-cube communication through SerDes links, whose bandwidth is much less than the aggregated local bandwidth of HMCs. Our investigation indicates that this is because of the restricted data organization required by the vertex programming model. In this paper, we argue that a PIM-based graph processing system should take data organization as a first-order design consideration. Following this principle, we propose GraphP, a novel HMC-based software/hardware co-designed graph processing system that drastically reduces communication and energy consumption compared to TESSERACT. GraphP features three key techniques. 1) “Source-cut” partitioning, which fundamentally changes the cross-cube communication from one remote put per cross-cube edge to one update per replica. 2) “Two-phase Vertex Program”, a programming model designed for the “source-cut” partitioning with two operations: GenUpdate and ApplyUpdate. 3) Hierarchical communication and overlapping, which further improves performance with unique opportunities offered by the proposed partitioning and programming model. We evaluate GraphP using a cycle-accurate simulator with 5 real-world graphs and 4 algorithms. The results show that it provides on average 1.7× speedup and 89% energy saving compared to TESSERACT.
Clustering and Resource Allocation for Dense Femtocells in a Two-Tier Cellular OFDMA Network
Small cells such as femtocells overlaying the macrocells can enhance the coverage and capacity of cellular wireless networks and increase the spectrum efficiency by reusing the frequency spectrum assigned to the macrocells in a universal frequency reuse fashion. However, management of both the cross-tier and co-tier interferences is one of the most critical issues for such a two-tier cellular network. Centralized solutions for interference management in a two-tier cellular network with orthogonal frequency-division multiple access (OFDMA), which yield optimal/near-optimal performance, are impractical due to the computational complexity. Distributed solutions, on the other hand, lack the superiority of centralized schemes. In this paper, we propose a semi-distributed (hierarchical) interference management scheme based on joint clustering and resource allocation for femtocells. The problem is formulated as a mixed integer non-linear program (MINLP). The solution is obtained by dividing the problem into two sub-problems, where the related tasks are shared between the femto gateway (FGW) and femtocells. The FGW is responsible for clustering, where correlation clustering is used as a method for femtocell grouping. In this context, a low-complexity approach for solving the clustering problem is used based on semi-definite programming (SDP). In addition, an algorithm is proposed to reduce the search range for the best cluster configuration. For a given cluster configuration, within each cluster, one femto access point (FAP) is elected as a cluster head (CH) that is responsible for resource allocation among the femtocells in that cluster. The CH performs sub-channel and power allocation in two steps iteratively, where a low-complexity heuristic is proposed for the sub-channel allocation phase. Numerical results show the performance gains due to clustering in comparison to other related schemes. Also, the proposed correlation clustering scheme offers performance, which is close to that of the optimal clustering, with a lower complexity.
Amount, type, and sources of carbohydrates in relation to ischemic heart disease mortality in a Chinese population: a prospective cohort study.
BACKGROUND The relation between carbohydrate intake and risk of ischemic heart disease (IHD) has not been fully explored in Asian populations known to have high-carbohydrate diets. OBJECTIVE We assessed whether intakes of total carbohydrates, different types of carbohydrates, and their food sources were associated with IHD mortality in a Chinese population. DESIGN We prospectively examined the association of carbohydrate intake and IHD mortality in 53,469 participants in the Singapore Chinese Health Study with an average follow-up of 15 y. Diet was assessed by using a semiquantitative food-frequency questionnaire. HRs and 95% CIs were calculated by using a Cox proportional hazards analysis. RESULTS We documented 1660 IHD deaths during 804,433 person-years of follow-up. Total carbohydrate intake was not associated with IHD mortality risk [men: HR per 5% of energy, 0.97 (95% CI: 0.92, 1.03); women: 1.06 (95% CI: 0.99, 1.14)]. When types of carbohydrates were analyzed individually, starch intake was associated with higher risk [men: 1.03 (95% CI: 0.99, 1.08); women: 1.08, (95% CI: 1.02, 1.14)] and fiber intake with lower risk of IHD mortality [men: 0.94 (95% CI: 0.82, 1.08); women: 0.71 (95% CI: 0.60, 0.84)], with stronger associations in women than men (both P-interaction < 0.01). In substitution analyses, the replacement of one daily serving of rice with one daily serving of noodles was associated with higher risk (difference in HR: 26.11%; 95% CI: 10.98%, 43.30%). In contrast, replacing one daily serving of rice with one of vegetables (-23.81%; 95% CI: -33.12%, -13.20%), fruit (-11.94%; 95% CI: -17.49%, -6.00%), or whole-wheat bread (-19.46%; 95% CI: -34.28%, -1.29%) was associated with lower risk of IHD death. CONCLUSIONS In this Asian population with high carbohydrate intake, the total amount of carbohydrates consumed was not substantially associated with IHD mortality. In contrast, the shifting of food sources of carbohydrates toward a higher consumption of fruit, vegetables, and whole grains was associated with lower risk of IHD death.
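The hazard ratios above come from Cox proportional-hazards models. Below is a minimal sketch of how such a model is fit with the lifelines package; the toy data frame (follow-up years, IHD death indicator, one dietary covariate) is invented for illustration, and a real analysis would adjust for many covariates as the study did.

```python
# Cox proportional-hazards sketch on an invented toy data set (not study data).
import pandas as pd
from lifelines import CoxPHFitter

toy = pd.DataFrame({
    "follow_up_years":  [15, 12, 9, 15, 7, 14, 11, 6, 15, 10],
    "ihd_death":        [0,  0,  1, 0,  1, 0,  0,  1, 0,  1],   # 1 = IHD death, 0 = censored
    "fiber_g_per_day":  [22, 10, 8, 25, 20, 18, 17, 7, 24, 12], # single illustrative covariate
})

cph = CoxPHFitter()
cph.fit(toy, duration_col="follow_up_years", event_col="ihd_death")
cph.print_summary()   # reports the hazard ratio (exp(coef)) per unit of the covariate
```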
Design of a Wheelchair with Legs for People with Motor Disabilities
A proof-of-concept prototype wheelchair with legs for people with motor disabilities is proposed, with the objective of demonstrating the feasibility of a completely new approach to mobility. Our prototype system consists of a chair equipped with wheels and legs, and is capable of traversing uneven terrain and circumventing obstacles. The important design considerations, the system design and analysis, and an experimental prototype of a chair are discussed. The results from the analysis and experimentation show the feasibility of the proposed concept and its advantages.
Corporate Social Responsibility, Ethical Leadership, and Trust Propensity: A Multi-Experience Model of Perceived Ethical Climate
Existing research on the formation of employee ethical climate perceptions focuses mainly on organization characteristics as antecedents, and although other constructs have been considered, these constructs have typically been studied in isolation. Thus, our understanding of the context in which ethical climate perceptions develop is incomplete. To address this limitation, we build upon the work of Rupp (Organ Psychol Rev 1:72–94, 2011) to develop and test a multi-experience model of ethical climate which links aspects of the corporate social responsibility (CSR), ethics, justice, and trust literatures and helps to explain how employees’ ethical climate perceptions form. We argue that in forming ethical climate perceptions, employees consider the actions or characteristics of a complex web of actors. Specifically, we propose that employees look (1) outward at how communities are impacted by their organization’s actions (e.g., CSR), (2) upward to make inferences about the ethicality of leaders in their organizations (e.g., ethical leadership), and (3) inward at their own propensity to trust others as they form their perceptions. Using a multiple-wave field study (N = 201) conducted at a privately held US corporation, we find substantial evidence in support of our model.
Endothelial cell activation and proliferation in ovarian tumors: two distinct steps as potential markers for antiangiogenic therapy response.
Ovarian cancer is the second leading cause of cancer-related death in women worldwide. Since most patients are diagnosed in advanced disease stages, the starting point, early steps and molecular mechanisms of ovarian cancer angiogenesis are still incompletely characterized. Most immunohistochemical studies for assessment of microvessel density (MVD) in ovarian tumors are based on CD31, CD34 and CD105 immunostaining of tumor blood vessels. Yet, the proliferative status of tumor blood vessel endothelial cells has not as yet been used in the assessment of tumor blood vessels. The present study investigated the Ki67 proliferative index of tumor blood vessel endothelial cells highlighted with the CD34 panendothelial marker and the CD105 endothelial marker in ovarian cancers by antigen co-localization with a double-staining immunohistochemical method. Lack of co-localization of CD105 and Ki67 in all types of ovarian tumors, together with the presence of CD34+/Ki67-positive endothelial cells, suggests that endothelial cell activation and proliferation are distinct steps in ovarian tumors. Differences in the proliferation index were observed in endothelial cells from blood vessels of the tumor core and those of the tumor peripheral zones. Potential specific targeting of activated and proliferating tumor blood vessels may provide clues for improving antiangiogenic therapy efficiency.
Caregiving and volunteering: are private and public helping behaviors linked?
OBJECTIVE The purpose of this study was to examine the relationship between two forms of helping behavior among older adults--informal caregiving and formal volunteer activity. METHODS To evaluate our hypotheses, we employed Tobit regression models to analyze panel data from the first two waves of the Americans' Changing Lives survey. RESULTS We found that older adult caregivers were more likely to be volunteers than noncaregivers. Caregivers who provided a relatively high number of caregiving hours annually reported a greater number of volunteer hours than did noncaregivers. Caregivers who provided care to nonrelatives were more likely than noncaregivers to be a volunteer and to volunteer more hours. Finally, caregivers were more likely than noncaregivers to be asked to volunteer. DISCUSSION Our results provide support for the hypothesis that caregivers are embedded in networks that provide them with more opportunities for volunteering. Additional research on the motivations for volunteering and greater attention to the context and hierarchy of caregiving and volunteering are needed.
Controlled pilot study of piracetam for pediatric opsoclonus-myoclonus.
Piracetam is an effective symptomatic treatment for some types of myoclonus in adults. To survey the efficacy and safety of piracetam in pediatric opsoclonus-myoclonus, we conducted an open, randomized, two-period, dose-ranging, double-blind, crossover, clinical trial of five children comparing the antimyoclonic properties of oral piracetam to placebo. We devised and validated a new rating scale, specifically for pediatric opsoclonus-myoclonus. Two parents while blinded were able to identify the active phase by improvement in behavior, but another thought the behavior was worse. None of the patients showed improvement in myoclonus. The adult-equivalent dose of piracetam used in this study, which is threefold higher than that used in previous pediatric studies, was well tolerated and safe. We found our rating scale to be a reliable and useful tool for future studies of opsoclonus-myoclonus in children.
Identities in-between: the impact of satellite broadcasting on Greek Orthodox minority (Rum Polites) women's perception of their identities in Turkey
This study aims to shed light on women belonging to the Greek Orthodox Christian (Rum Polites) community in Istanbul, Turkey, and their perception of their identity with the help of satellite broadcasting (ERT World). This research is the first attempt to analyse Rum women's viewing attitudes and their correlations with a number of variables such as education, age, family structure, religion, occupation, and their perceptions of themselves as part of a distinctive religious and cultural entity. Since the female members of the community are heavy television viewers, television is a powerful tool to construct a social reality and a sense of self. By conducting in-depth interviews and interpretative phenomenological analysis (IPA), this study aims to reveal how this unique community makes sense of their identities and social worlds through television.
Surface modification study of low energy electron beam irradiated polycarbonate film
The effect of low energy electron beam irradiation on polycarbonate (PC) film has been studied here. A PC film of 20 μm thickness was exposed to a 10 keV electron beam with a current density of 100 nA/cm2. The irradiated film was characterized by means of X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM) and a residual gas analyzer (RGA). Formation of unsaturated bonds and partial graphitization of the surface layer are measured by XPS. AFM imaging shows that electron implantation induces changes in the surface morphology of the polymer film. The residual gas analyzer (RGA) spectrum of PC is recorded in situ during irradiation. The results show a change in the cross-linking density of the polymer at the top surface.
Listener-Aware Music Recommendation from Sensor and Social Media Data
Music recommender systems are lately seeing a sharp increase in popularity due to many novel commercial music streaming services. Most systems, however, do not adequately take their listeners into account when recommending music items. In this note, we summarize our recent work and report our latest findings on the topics of tailoring music recommendations to individual listeners and to groups of listeners sharing certain characteristics. We focus on two tasks: context-aware automatic playlist generation (also known as serial recommendation) using sensor data and music artist recommendation using social media data.
Anatomic Distribution of Nerves and Microvascular Density in the Human Anterior Vaginal Wall: Prospective Study
BACKGROUND The presence of the G-spot (an assumed erotic sensitive area in the anterior wall of the vagina) remains controversial. We explored the histomorphological basis of the G-spot. METHODS Biopsies were drawn from a 12 o'clock direction in the distal- and proximal-third areas of the anterior vagina of 32 Chinese subjects. The total number of protein gene product 9.5-immunoreactive nerves and smooth muscle actin-immunoreactive blood vessels in each specimen was quantified using the avidin-biotin-peroxidase assay. RESULTS Vaginal innervation was observed in the lamina propria and muscle layer of the anterior vaginal wall. The distal-third of the anterior vaginal wall had significantly richer small-nerve-fiber innervation in the lamina propria than the proximal-third (p = 0.000) and in the vaginal muscle layer (p = 0.006). There were abundant microvessels in the lamina propria and muscle layer, but no small vessels in the lamina propria and few in the muscle layer. Significant differences were noted in the number of microvessels when comparing the distal- with proximal-third parts in the lamina propria (p = 0.046) and muscle layer (p = 0.002). CONCLUSIONS Significantly increased density of nerves and microvessels in the distal-third of the anterior vaginal wall could be the histomorphological basis of the G-spot. Distal anterior vaginal repair could disrupt the normal anatomy, neurovascular supply and function of the G-spot, and cause sexual dysfunction.
Efficacy and safety of olanzapine combined with aprepitant, palonosetron, and dexamethasone for preventing nausea and vomiting induced by cisplatin-based chemotherapy in gynecological cancer: KCOG-G1301 phase II trial
Olanzapine is effective in chemotherapy-induced nausea and vomiting (CINV). In patients receiving highly emetogenic chemotherapy (HEC), its efficacy was reported as rescue therapy for breakthrough emesis refractory to triplet therapy (palonosetron, aprepitant, and dexamethasone). However, its preventive effects with triplet therapy for CINV are unknown. This study aimed to investigate efficacy and safety of preventive use of olanzapine with triplet therapy for CINV of HEC. This study is a prospective multicenter study conducted by Kansai Clinical Oncology Group. Forty chemo-naïve gynecological cancer patients receiving HEC with cisplatin (≥50 mg/m2) were enrolled. Oral olanzapine (5 mg) was administered with triplet therapy a day prior to cisplatin administration and on days 1–5. The primary endpoint was complete response (no vomiting and no rescue) rate for the overall phase (0–120 h post-chemotherapy). Secondary endpoints were complete response rate for acute phase (0–24 h post-chemotherapy) and delayed phase (24–120 h post-chemotherapy) and complete control (no vomiting, no rescue, and no significant nausea) rate and total control (no vomiting, no rescue, and no nausea) rate for each phase. These endpoints were evaluated during the first cycle of chemotherapy. Complete response rates for acute, delayed, and overall phases were 97.5, 95.0, and 92.5 %, respectively. Complete control rates were 92.5, 87.5, and 82.5 %, respectively. Total control rates were 87.5, 67.5, and 67.5 %, respectively. There were no grade 3 or 4 adverse events. Preventive use of olanzapine combined with triplet therapy gives better results than those from previously reported studies of triplet therapy.
Population-Scale Sequencing Data Enable Precise Estimates of Y-STR Mutation Rates.
Short tandem repeats (STRs) are mutation-prone loci that span nearly 1% of the human genome. Previous studies have estimated the mutation rates of highly polymorphic STRs by using capillary electrophoresis and pedigree-based designs. Although this work has provided insights into the mutational dynamics of highly mutable STRs, the mutation rates of most others remain unknown. Here, we harnessed whole-genome sequencing data to estimate the mutation rates of Y chromosome STRs (Y-STRs) with 2-6 bp repeat units that are accessible to Illumina sequencing. We genotyped 4,500 Y-STRs by using data from the 1000 Genomes Project and the Simons Genome Diversity Project. Next, we developed MUTEA, an algorithm that infers STR mutation rates from population-scale data by using a high-resolution SNP-based phylogeny. After extensive intrinsic and extrinsic validations, we harnessed MUTEA to derive mutation-rate estimates for 702 polymorphic STRs by tracing each locus over 222,000 meioses, resulting in the largest collection of Y-STR mutation rates to date. Using our estimates, we identified determinants of STR mutation rates and built a model to predict rates for STRs across the genome. These predictions indicate that the load of de novo STR mutations is at least 75 mutations per generation, rivaling the load of all other known variant types. Finally, we identified Y-STRs with potential applications in forensics and genetic genealogy, assessed the ability to differentiate between the Y chromosomes of father-son pairs, and imputed Y-STR genotypes.
HTS DC Cable Line Project: On-Going Activities in Russia
Previous research has shown that implementing high-temperature superconducting dc power cables in electrical grids of large metropolitan areas will have major positive impacts on power system operation and control. Current activities in Russia comprise developing a 2.5-km high-temperature superconducting dc cable and its installation in St. Petersburg electrical grid. This work includes five major parts: installation site selection, cable calculation, development and manufacturing, cryogenic equipment development, ac/dc converter development, and testing of all dc line elements. As of today, the list of subcontractors has been approved. The purpose of this report is to summarize current results and future work.
Using psychophysiological techniques to measure user experience with entertainment technologies
Emerging technologies offer exciting new ways of using entertainment technology to create fantastic play experiences and foster interactions between players. Evaluating entertainment technology is challenging because success isn’t defined in terms of productivity and performance, but in terms of enjoyment and interaction. Current subjective methods of evaluating entertainment technology aren’t sufficiently robust. This paper describes two experiments designed to test the efficacy of physiological measures as evaluators of user experience with entertainment technologies. We found evidence that there is a different physiological response in the body when playing against a computer versus playing against a friend. These physiological results are mirrored in the subjective reports provided by the participants. In addition, we provide guidelines for collecting physiological data for user experience analysis, which were informed by our empirical investigations. This research provides an initial step towards using physiological responses to objectively evaluate a user’s experience with entertainment technology.
Multifunctional Chitosan Inverse Opal Particles for Wound Healing.
Wound healing is one of the most important and basic issues faced by the medical community. In this paper, we present biomass-composited inverse opal particles with a series of advanced features for drug delivery and wound healing. The particles were derived by using chitosan biomass to negatively replicate spherical colloid crystal templates. Because of the interconnected porous structures, various forms of active drugs, including fibroblast growth factor could be loaded into the void spaces of the inverse opal particles and encapsulated by temperature-responsive hydrogel. This endowed the composited particles with the capability of intelligent drug release through the relatively high temperature caused by the inflammation reaction at wound sites. Because the structural colors and characteristic reflection peaks of the composited inverse opal particles are blue-shifted during the release process, the drug delivery can be monitored in real time. It was demonstrated that the biomass-composited microcarriers were able to promote angiogenesis, collagen deposition, and granulation-tissue formation as well as reduce inflammation and thus significantly contributed to wound healing. These features point to the potential value of multifunctional biomass inverse opal particles in biomedicine.
Nile River sediment fluctuations over the past 7000 yr and their key role in sapropel development
The provenance pattern of Nile River sediments can be used as a proxy for paleoclimatic changes in East Africa. The 87Sr/86Sr ratios are particularly appropriate for such provenance investigations, because the White Nile drains predominantly crystalline basement rocks, whereas the Blue Nile and Atbara flow off the Ethiopian Highlands, which consist of Tertiary volcanic rocks. A high-resolution profile of 87Sr/86Sr and Ti/Al ratios from a well-dated core in the Nile Delta shows a close correspondence with known changes in Nile flow over the past 7000 yr. At times of higher river flow there was markedly decreased input of Blue Nile–derived and total sediment. This change was caused by northward movement of the Inter Tropical Convergence Zone, resulting in increased vegetative cover in the Ethiopian Highlands due to higher rainfall and a longer wet season. This inverse relationship between Nile River flow and sediment flux may have had important implications in the development of agricultural technology in ancient Egypt. The marked minimum in 87Sr/86Sr at 4200–4500 yr B.P. is coincident with the end of the Old Kingdom in Egypt and provides independent evidence that demise of the Old Kingdom might have been associated with an extended period of catastrophic low floods. During the Quaternary and late Neogene, there was periodic deposition of organic-rich sediments (sapropels) in the eastern Mediterranean that represent important indicators of major environmental change. Evidence from the Ti/Al ratio suggests that the pattern of erosion and sediment supply from the Nile catchment observed in this study also occurred throughout much of the Neogene and Quaternary. The reduced inputs of Blue Nile sediment during times of sapropel formation contributed to the increased primary productivity by reducing the amount of phosphate removed on particles and to the observed change to N limitation in the eastern Mediterranean, which are important characteristics of sapropel deposition.
Compressive deformation of wood impregnated with low molecular weight phenol formaldehyde (PF) resin I: effects of pressing pressure and pressure holding
The deformation behavior of low molecular weight phenol formaldehyde (PF) resin-impregnated wood under compression in the radial direction was investigated for obtaining high-strength wood at low pressing pressures. Flat-sawn grain Japanese cedar (Cryptomeria japonica) blocks with a density of 0.34 g/cm3 were treated with an aqueous solution of 20% low molecular weight PF resin, resulting in a weight gain of 60.8%. Oven-dried specimens were compressed using hot plates fixed to a testing machine. The temperature was 150°C and the pressing speed was 5 mm/min. The impregnation of PF resin caused significant softening of the cell walls, resulting in collapse at low pressures. The cell wall collapse was strain-dependent and occurred at a strain of 0.05–0.06 mm/mm regardless of whether the wood was treated with PF resin. Thus, pressure holding, which causes creep deformation of the cell walls, was also effective in initiating cell wall collapse at low pressure. Utilizing a combination of low molecular weight PF resin impregnation and pressure holding at 2 MPa resulted in a density increase of PF resin-treated wood from 0.45 to 1.1 g/cm3. At the same time, the Young’s modulus and bending strength increased from 10 GPa to 22 GPa and from 80 MPa to 250 MPa, respectively. It can be concluded that effective utilization of the collapse region of the cell wall is a desirable method for obtaining high-strength PF resin-impregnated wood at low pressing pressures.
Arousal increases social transmission of information.
Social transmission is everywhere. Friends talk about restaurants, policy wonks rant about legislation, analysts trade stock tips, neighbors gossip, and teens chitchat. Further, such interpersonal communication affects everything from decision making to well-being. But although it is clear that social transmission is both frequent and important, what drives people to share, and why are some stories and information shared more than others? Traditionally, researchers have argued that rumors spread in the "3 Cs" (times of conflict, crisis, and catastrophe, e.g., wars or natural disasters; Koenig, 1985), and the major explanation for this phenomenon has been generalized anxiety (i.e., apprehension about negative outcomes). Such theories can explain why rumors flourish in times of panic, but they are less useful in explaining the prevalence of rumors in positive situations, such as the Cannes Film Festival or the dot-com boom. Further, although recent work on the social sharing of emotion suggests that positive emotion may also increase transmission, why emotions drive sharing and why some emotions boost sharing more than others remains unclear. I suggest that transmission is driven in part by arousal. Physiological arousal is characterized by activation of the autonomic nervous system (Heilman, 1997), and the mobilization provided by this excitatory state may boost sharing. This hypothesis not only suggests why content that evokes certain emotions (e.g., disgust) may be shared more than other content, but also suggests a more precise prediction, namely, that emotions characterized by high arousal, such as anxiety or amusement (Gross & Levenson, 1995), will boost sharing more than emotions characterized by low arousal, such as sadness or contentment. This idea was tested in two experiments. They examined how manipulations that increase general arousal (i.e., watching emotional videos or jogging in place) affect the social transmission of unrelated content (e.g., a neutral news article). If arousal increases transmission, even incidental arousal (i.e., arousal outside the focal content being shared) should spill over and boost sharing. In the first experiment, 93 students completed what they were told were two unrelated studies. The first evoked specific emotions by using film clips validated in prior research (Christie & Friedman, 2004; Gross & Levenson, 1995). Participants in the control condition watched a neutral clip; those in the experimental conditions watched an emotional clip. Emotional arousal and valence were manipulated independently so that high- and low-arousal emotions of both a positive (amusement vs. contentment) and a negative (anxiety vs. …
Understanding Poor Seismic Performance of Concrete Walls and Design Implications
This preprint is a PDF of a manuscript that has been accepted for publication in Earthquake Spectra. It is the final version that was uploaded and approved by the author(s). While the paper has been through the usual rigorous peer review process for the Journal, it has not been copyedited, nor have the figures and tables been modified for final publication. Please also note that the paper may refer to online Appendices that are not yet available.
Examining mediators of housing mobility on adolescent asthma: results from a housing voucher experiment.
Literature on neighborhood effects on health largely employs non-experimental study designs and does not typically test specific neighborhood mediators that influence health. We address these gaps using the Moving to Opportunity (MTO) housing voucher experiment. Research has documented both beneficial and adverse effects on health in MTO, but mediating mechanisms have not been tested explicitly. We tested mediation of MTO effects on youth asthma (n = 2829). MTO randomized families living in public housing to an experimental group receiving a voucher to subsidize rental housing, or a control group receiving no voucher, and measured outcomes 4-7 years following randomization. MTO had a harmful main effect vs. controls for self-reported asthma diagnosis (b = 0.24, p = 0.06), past-year asthma attack (b = 0.44, p = 0.02), and past-year wheezing (b = 0.17, p = 0.17). Using Inverse Odds Weighting mediation we tested mental health, smoking, and four housing dimensions as potential mediators of the MTO-asthma relationship. We found no significant mediation overall, but mediation may be gender-specific. Gender-stratified models displayed countervailing mediation effects among girls for asthma diagnosis by smoking (p = 0.05) and adult-reported housing quality (p = 0.06), which reduced total effects by 35% and 42% respectively. MTO treatment worsened boys' mental health and mental health reduced treatment effects on asthma diagnosis by 27%. Future research should explore other potential mediators and gender-specific mediators of MTO effects on asthma. Improving measurement of housing conditions and other potential mediators may help elucidate the "black box" of neighborhood effects.
pi-ADL: an Architecture Description Language based on the higher-order typed pi-calculus for specifying dynamic and mobile software architectures
A key aspect of the design of any software system is its architecture. An architecture description, from a runtime perspective, should provide a formal specification of the architecture in terms of components and connectors and how they are composed together. Further, a dynamic or mobile architecture description must provide a specification of how the architecture of the software system can change at runtime. Enabling specification of dynamic and mobile architectures is a major challenge for an Architecture Description Language (ADL). This article describes π-ADL, a novel ADL that has been designed in the ArchWare European Project to address specification of dynamic and mobile architectures. It is a formal, theoretically well-founded language based on the higher-order typed π-calculus. While most ADLs focus on describing software architectures from a structural viewpoint, π-ADL focuses on formally describing architectures encompassing both the structural and behavioural viewpoints. The π-ADL design principles, concepts and notation are presented. How π-ADL can be used for specifying static, dynamic and mobile architectures is illustrated through case studies. The π-ADL toolset is outlined.
Spectral–Spatial Classification of Hyperspectral Imagery Based on Partitional Clustering Techniques
A new spectral-spatial classification scheme for hyperspectral images is proposed. The method combines, using majority voting, the results of a pixelwise support vector machine classification and the segmentation map obtained by partitional clustering. The ISODATA algorithm and Gaussian mixture resolving techniques are used for image clustering. Experimental results are presented for two hyperspectral airborne images. The developed classification scheme improves the classification accuracies and provides classification maps with more homogeneous regions, when compared to pixelwise classification. The proposed method performs particularly well for classification of images with large spatial structures and when different classes have dissimilar spectral responses and a comparable number of pixels.
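A minimal sketch of the majority-voting combination step described above, assuming a pixelwise classification map and a clustering-based segmentation map are already available (array names, shapes and the toy data are illustrative, not taken from the paper):

import numpy as np

def majority_vote(pixel_labels, segment_ids):
    """Assign to every pixel of a segment the most frequent pixelwise class.

    pixel_labels: 2-D array of class labels from a pixelwise classifier (e.g., an SVM).
    segment_ids:  2-D array of region ids from partitional clustering (e.g., ISODATA).
    """
    fused = pixel_labels.copy()
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        classes, counts = np.unique(pixel_labels[mask], return_counts=True)
        fused[mask] = classes[np.argmax(counts)]  # the majority class wins in this region
    return fused

# toy 4x4 image with two segments
pixel_labels = np.array([[0, 0, 1, 1],
                         [0, 1, 1, 1],
                         [2, 2, 1, 1],
                         [2, 2, 2, 1]])
segment_ids = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]])
print(majority_vote(pixel_labels, segment_ids))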
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
This paper presents a simple end-to-end model for speech recognition, combining a convolutional-network-based acoustic model with graph decoding. It is trained to output letters directly from transcribed speech, without the need for forced alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC [6] while being simpler. We show competitive results in word error rate on the Librispeech corpus [18] with MFCC features, and promising results from the raw waveform.
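A toy sketch of a 1-D convolutional acoustic model that maps a sequence of MFCC frames to per-frame letter scores, in the spirit of the description above; the layer sizes, strides and alphabet are illustrative assumptions, not the paper's architecture, and in practice the scores would be fed to a sequence criterion (e.g., CTC or the paper's segmentation criterion) and a graph decoder:

import torch
import torch.nn as nn

ALPHABET = "abcdefghijklmnopqrstuvwxyz '"   # illustrative letter set
N_MFCC = 13

class TinyLetterConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_MFCC, 64, kernel_size=5, stride=2, padding=2),  # downsample in time
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, len(ALPHABET) + 1, kernel_size=1),  # +1 for a blank/separator score
        )

    def forward(self, mfcc):           # mfcc: (batch, N_MFCC, time)
        return self.net(mfcc)          # per-frame letter scores: (batch, letters + 1, ~time/2)

model = TinyLetterConvNet()
scores = model(torch.randn(1, N_MFCC, 200))   # 200 frames of MFCC features
print(scores.shape)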
Understanding latent interactions in online social networks
Popular online social networks (OSNs) like Facebook and Twitter are changing the way users communicate and interact with the Internet. A deep understanding of user interactions in OSNs can provide important insights into questions of human social behavior and into the design of social platforms and applications. However, recent studies have shown that a majority of user interactions on OSNs are latent interactions, that is, passive actions, such as profile browsing, that cannot be observed by traditional measurement techniques. In this article, we seek a deeper understanding of both active and latent user interactions in OSNs. For quantifiable data on latent user interactions, we perform a detailed measurement study on Renren, the largest OSN in China with more than 220 million users to date. All friendship links in Renren are public, allowing us to exhaustively crawl a connected graph component of 42 million users and 1.66 billion social links in 2009. Renren also keeps detailed, publicly viewable visitor logs for each user profile. We capture detailed histories of profile visits over a period of 90 days for users in the Peking University Renren network and use statistics of profile visits to study issues of user profile popularity, reciprocity of profile visits, and the impact of content updates on user popularity. We find that latent interactions are much more prevalent and frequent than active events, are nonreciprocal in nature, and that profile popularity is correlated with page views of content rather than with quantity of content updates. Finally, we construct latent interaction graphs as models of user browsing behavior and compare their structural properties, evolution, community structure, and mixing times against those of both active interaction graphs and social graphs.
The Mixmaster Universe in Five Dimensions
We consider a five dimensional vacuum cosmology with Bianchi type-IX spatial geometry and an extra non-compact coordinate. Finding a new class of solutions, we examine and rule out the possibility of deterministic chaos. We interpret this result within the context of induced matter theory.
Types of Social Capital and Mental Disorder in Deprived Urban Areas: A Multilevel Study of 40 Disadvantaged London Neighbourhoods
OBJECTIVES To examine the extent to which individual and ecological-level cognitive and structural social capital are associated with common mental disorder (CMD), the role played by physical characteristics of the neighbourhood in moderating this association, and the longitudinal change of the association between ecological level cognitive and structural social capital and CMD. DESIGN Cross-sectional and longitudinal study of 40 disadvantaged London neighbourhoods. We used a contextual measure of the physical characteristics of each neighbourhood to examine how the neighbourhood moderates the association between types of social capital and mental disorder. We analysed the association between ecological-level measures of social capital and CMD longitudinally. PARTICIPANTS 4,214 adults aged 16-97 (44.4% men) were randomly selected from 40 disadvantaged London neighbourhoods. MAIN OUTCOME MEASURES General Health Questionnaire (GHQ-12). RESULTS Structural rather than cognitive social capital was significantly associated with CMD after controlling for socio-demographic variables. However, the two measures of structural social capital used, social networks and civic participation, were negatively and positively associated with CMD respectively. 'Social networks' was negatively associated with CMD at both the individual and ecological levels. This result was maintained when contextual aspects of the physical environment (neighbourhood incivilities) were introduced into the model, suggesting that 'social networks' was independent from characteristics of the physical environment. When ecological-level longitudinal analysis was conducted, 'social networks' was not statistically significant after controlling for individual-level social capital at follow up. CONCLUSIONS If we conceptually distinguish between cognitive and structural components as the quality and quantity of social capital respectively, the conclusion of this study is that the quantity rather than quality of social capital is important in relation to CMD at both the individual and ecological levels in disadvantaged urban areas. Thus, policy should support interventions that create and sustain social networks. One of these is explored in this article. TRIAL REGISTRATION Controlled-Trials.com ISRCTN68175121 http://www.controlled-trials.com/ISRCTN68175121.
Algorithm 457: finding all cliques of an undirected graph
Description. Introduction. A maximal complete subgraph (clique) is a complete subgraph that is not contained in any other complete subgraph. A recent paper [1] describes a number of techniques to find maximal complete subgraphs of a given undirected graph. In this paper, we present two backtracking algorithms, using a branch-and-bound technique [4] to cut off branches that cannot lead to a clique. The first version is a straightforward implementation of the basic algorithm. It is mainly presented to illustrate the method used. This version generates cliques in alphabetic (lexicographic) order. The second version is derived from the first and generates cliques in a rather unpredictable order in an attempt to minimize the number of branches to be traversed. This version tends to produce the larger cliques first and to generate sequentially cliques having a large common intersection. The detailed algorithm for version 2 is presented here. Description of the algorithm (Version 1). Three sets play an important role in the algorithm. (1) The set compsub is the set to be extended by a new point or shrunk by one point on traveling along a branch of the backtracking tree. The points that are eligible to extend compsub, i.e. that are connected to all points in compsub, are collected recursively in the remaining two sets. (2) The set candidates is the set of all points that will in due time serve as an extension to the present configuration of compsub. (3) The set not is the set of all points that have at an earlier stage already served as an extension of the present configuration of compsub and are now explicitly excluded. The reason for maintaining this set not will soon be made clear. The core of the algorithm consists of a recursively defined extension operator that will be applied to the three sets just described. It has the duty to generate all extensions of the given configuration of compsub that it can make with the given set of candidates and that do not contain any of the points in not. To put it differently: all extensions of compsub containing any point in not have already been generated. The basic mechanism now consists of the following five steps:
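A minimal Python sketch of the backtracking scheme described above, using the three sets compsub, candidates and not; this follows the basic Version 1 idea (the candidate-ordering heuristic of Version 2 is omitted), and the toy graph is purely illustrative:

def extend(compsub, candidates, not_set, adj, cliques):
    """Report every maximal clique that extends compsub (basic version)."""
    if not candidates and not not_set:
        cliques.append(set(compsub))      # no extension possible: compsub is maximal
        return
    for v in list(candidates):
        compsub.append(v)                 # extend compsub by v
        new_cand = {u for u in candidates if u in adj[v]}
        new_not = {u for u in not_set if u in adj[v]}
        extend(compsub, new_cand, new_not, adj, cliques)
        compsub.pop()                     # backtrack
        candidates.remove(v)
        not_set.add(v)                    # v has already served as an extension

# toy graph as an adjacency dict
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cliques = []
extend([], set(adj), set(), adj, cliques)
print(cliques)   # the maximal cliques, e.g. [{1, 2, 3}, {3, 4}]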
Maximum margin planning
Imitation learning of sequential, goal-directed behavior by standard supervised techniques is often difficult. We frame learning such behaviors as a maximum margin structured prediction problem over a space of policies. In this approach, we learn mappings from features to costs so that an optimal policy in an MDP with these costs mimics the expert's behavior. Further, we demonstrate a simple, provably efficient approach to structured maximum margin learning, based on the subgradient method, that leverages existing fast algorithms for inference. Although the technique is general, it is particularly relevant in problems where A* and dynamic programming approaches make learning policies tractable beyond the limitations of a QP formulation. We demonstrate our approach applied to route planning for outdoor mobile robots, where the behavior a designer wishes a planner to execute is often clear, while specifying cost functions that engender this behavior is a much more difficult task.
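For illustration, a schematic sketch of one subgradient step of this kind of learning: the cost weights are nudged so that the expert's path becomes cheaper than the planner's current best path. The planner, feature map, regularization and step size are placeholder assumptions, not the authors' implementation:

import numpy as np

def path_features(path, feature_map):
    """Sum the feature vectors of the states visited along a path."""
    return sum(feature_map[s] for s in path)

def mmp_subgradient_step(w, expert_path, planner, feature_map, lr=0.1, lam=0.01):
    """One subgradient update for a maximum-margin planning style objective.

    planner(w) is assumed to return the minimum-cost (ideally loss-augmented)
    path when each state s costs w . feature_map[s].
    """
    best_path = planner(w)
    # subgradient of: w.F(expert) - w.F(best) + (lam/2)||w||^2
    g = path_features(expert_path, feature_map) - path_features(best_path, feature_map) + lam * w
    w = w - lr * g
    return np.maximum(w, 0.0)   # optionally keep costs non-negative so the planner stays well-posed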
Efficacy and safety of mildronate for acute ischemic stroke: a randomized, double-blind, active-controlled phase II multicenter trial.
BACKGROUND AND OBJECTIVE Mildronate, an inhibitor of carnitine-dependent metabolism, is considered to be an anti-ischemic drug. This study is designed to evaluate the efficacy and safety of mildronate injection in treating acute ischemic stroke. METHODS We performed a randomized, double-blind, multicenter clinical study of mildronate injection for treating acute cerebral infarction. 113 patients in the experimental group received mildronate injection, and 114 patients in the active-control group received cinepazide injection. In addition, both groups were given aspirin as a basic treatment. Modified Rankin Scale (mRS) score was performed at 2 weeks and 3 months after treatment. National Institutes of Health Stroke Scale (NIHSS) score and Barthel Index (BI) score were performed at 2 weeks after treatment, and then vital signs and adverse events were evaluated. RESULTS A total of 227 patients were randomized to treatment (n = 113, mildronate; n = 114, active-control). After 3 months, there was no significant difference for the primary endpoint between groups categorized in terms of mRS scores of 0-1 and 0-2 (p = 0.52 and p = 0.07, respectively). There were also no significant differences for the secondary endpoint between groups categorized in terms of NIHSS scores of >5 and >8 (p = 0.98 and p = 0.97, respectively) or BI scores of >75 and >95 (p = 0.49 and p = 0.47, respectively) at 15 days. The incidence of serious adverse events was similar between the two groups. CONCLUSION Mildronate injection is as effective and safe as cinepazide injection in treating acute cerebral infarction.
Erectile dysfunction in men receiving methadone and buprenorphine maintenance treatment.
INTRODUCTION Use of opiates/opioids is associated with hypoactive sexual desire, erectile and orgasmic dysfunction. AIM To determine prevalence and investigate etiology of sexual dysfunction in men on methadone or buprenorphine maintenance treatment (MMT, BMT). MAIN OUTCOME MEASURES International Index of Erectile Function (IIEF), hormone assays, Beck Depression Inventory. METHODS A total of 103 men (mean age 37.6 +/- 7.9) on MMT (N = 84) or BMT (N = 19) were evaluated using the IIEF, hormone assays, Beck Depression Inventory, body mass index (BMI), demographic, and other substance use measures. RESULTS Mean total IIEF scores for partnered men were lower for MMT (50.4 +/- 18.2; N = 53) than reference groups (61.4 +/- 16.8; N = 415; P < 0.0001) or BMT (61.4 +/- 7.0; N = 14; P = 0.048). Among partnered men on MMT, 53% had erectile dysfunction (ED) compared with 24% of reference groups; 26% had moderate to severe ED, 12.1% in under 40s and 40.0% among those 40+ years. On multiple regression, depression, older age, and lower total testosterone were associated with lower IIEF and EF domain; on multivariate analysis, there were no significant associations between IIEF or EF and free testosterone, opioid dose, cannabis or other substance use, viral hepatitis, or BMI. Total testosterone accounted for 16% of IIEF and 15% of EF variance. Men without sexual partners had lower Desire and Erection Confidence scores and less recent sexual activity, suggesting potentially higher prevalence of sexual dysfunction in this group. CONCLUSION Men on MMT, but not BMT, have high prevalence of ED, related to hypogonadism and depression. Practitioners should screen for sexual dysfunction in men receiving opioid replacement treatment. Future studies of sexual dysfunction in opioid-treated men should examine the potential benefits of dose reduction, androgen replacement, treatment of depression, and choice of opioid.
Self-Compassion , Self-Esteem , and Well-Being
This article focuses on the construct of self-compassion and how it differs from self-esteem. First, it discusses the fact that while self-esteem is related to psychological well-being, the pursuit of high self-esteem can be problematic. Next it presents another way to feel good about oneself: self-compassion. Self-compassion entails treating oneself with kindness, recognizing one’s shared humanity, and being mindful when considering negative aspects of oneself. Finally, this article suggests that self-compassion may offer similar mental health benefits as self-esteem, but with fewer downsides. Research is presented which shows that self-compassion provides greater emotional resilience and stability than self-esteem, but involves less self-evaluation, ego-defensiveness, and self-enhancement than self-esteem. Whereas self-esteem entails evaluating oneself positively and often involves the need to be special and above average, self-compassion does not entail self-evaluation or comparisons with others. Rather, it is a kind, connected, and clear-sighted way of relating to ourselves even in instances of failure, perceived inadequacy, and imperfection. Imagine that you’re an amateur singer-songwriter, and you invite your friends and family to see you perform at a nearby coffeehouse that showcases local talent. After the big night you ask everyone how they thought it went. ‘You were average’ is the reply. How would you feel in this scenario? Ashamed, humiliated, like you were a failure? In our incredibly competitive society, being average is unacceptable. We have to be special and above average to feel we have any worth at all. The problem, of course, is that it is impossible for everyone to be above average at the same time. This means that we tend to inflate our self-evaluations (Alicke & Sedikides, 2009) and put others down so that we can feel superior in comparison (Tesser, 1999) – all in the name of maintaining our self-esteem. For instance, research has shown that fully 90% of drivers think they’re more skilled than their road mates (Preston & Harris, 1965) – even people who’ve recently caused a car accident think they’re superior drivers! This paper will argue that striving for high self-esteem can sometimes be counterproductive, and that self-compassion may offer a healthier and more sustainable way to feel good about oneself. First, however, I will consider some of the problems with seeing self-esteem as the ultimate marker of psychological health. Psychology and Self-Esteem: A Love Affair Self-esteem is an evaluation of our worthiness as individuals, a judgment that we are good, valuable people. William James, one of the founding fathers of Western psychology, argued that self-esteem was an important aspect of mental health. According to James, self-esteem is a product of ‘perceived competence in domains of importance’ (James, 1890). This means that self-esteem is derived from thinking we’re good at things that have significance to us, but not those we don’t personally value (e.g., one teen male may invest his self-esteem in being a good football player but not a high-achieving student, whereas the opposite may be true for another teen). 
Charles Horton Cooley, an early sociologist, proposed that feelings of self-worth also stem from the ‘looking glass self’ – our perceptions of how we appear in the eyes of others (Cooley, 1902). Interestingly, self-esteem is often impacted more powerfully by the opinions of acquaintances than close others (Harter, 1999), meaning that the foundations of self-esteem can be vague and ill-formed. (After all, how well do we really know acquaintances’ opinions and how well do they really know us?) Psychologists’ interest in self-esteem has grown exponentially over the years, with more than 15,000 journal articles written about the topic (Baumeister, 1998). The vast majority of articles argue that self-esteem is positively associated with adaptive outcomes (see Pyszczynski, Greenberg, Solomon, Arndt, & Schimel, 2004, for a review). There have been several large-scale programs to promote self-esteem in the schools (Mecca, Smelser, & Vasconcellos, 1989). Many self-esteem programs for school kids, however, tend to emphasize indiscriminate praise. Elementary schools in particular assume that their mission is to raise the self-esteem of their pupils, to prepare children for success and happiness later on in life. For this reason, they discourage teachers from making critical remarks to young children because of the damage it might do to their self-esteem (Twenge, 2006). The desire to raise children’s self-esteem has led to some serious grade inflation: 48% of high school students received an A average in 2004, as compared to 18% in 1968 (Sax et al., 2004). The question is, however, is all this emphasis on raising self-esteem necessarily a good thing? Pitfalls Along the Way Recent reviews of the research literature suggest that self-esteem may not be the panacea it’s made out to be (Baumeister, Campbell, Krueger, & Vohs, 2003; Crocker & Park, 2004). First, it should be noted that self-esteem is often highly resistant to change, and that most programs designed to raise self-esteem fail (Swann, 1996). It also appears that self-esteem is largely the outcome of doing well, not the cause of doing well. For instance, self-esteem appears to be the result rather than the cause of improved academic performance (Baumeister et al., 2003). The problem does not necessarily rest with self-esteem itself. It is certainly better to feel worthy and proud than worthless and ashamed. More problematic is what people do to get and keep a sense of high self-esteem (Crocker & Park, 2004). For instance, the desire to have high self-esteem is associated with self-enhancement bias (Sedikides & Gregg, 2008), meaning that people see themselves more positively than they actually are. While Taylor and Brown (1988) have argued that positive illusions enhance psychological well-being, it is also the case that such biases can obscure needed areas of improvement (e.g., maybe those drivers who recently caused a car accident should learn to drive more carefully!) Self-esteem is also associated with the better-than-average effect, otherwise known as ‘The Lake Woebegone Effect’ (Maxwell & Lopus, 1994). (Garrison Keillor famously describes the fictional town of Lake Wobegon as a place where ‘all the women are strong, all the men are good-looking, and all the children are above average.’) Research shows that most people think they are funnier, more logical, more popular, better looking, nicer, more trustworthy, wiser and more intelligent than others (Alicke & Govorun, 2005). 
The need to feel superior in order to feel okay about oneself means that the pursuit of high self-esteem may involve puffing the self up while putting others down. Not surprisingly, perhaps, people who are prejudiced often have a positive self-concept. The reason they feel so good about themselves is precisely because they believe their own group is superior to others (Crocker, Thompson, McGraw, & Ingerman, 1987; Fein & Spencer, 1997). Those with high self-esteem may sometimes get angry and aggressive towards others – especially if they aren’t given the respect they think they deserve (Baumeister, Smart, & Boden, 1996). Moreover, they may dismiss negative feedback as unreliable or biased, or else blame poor performance on others. As a result, they may take less personal responsibility for actions and develop an inaccurate self-concept, hindering potential growth (Sedikides, 1993). The emphasis placed on self-esteem in our society has also led to a worrying trend: The narcissism scores of college students have climbed steeply since 1987, with 65% of modern-day students scoring higher in narcissism than previous generations (Twenge, Konrath, Foster, Campbell, & Bushman, 2008). Not coincidentally, students’ average self-esteem level rose by an even greater margin over the same period. Although narcissists have extremely high self-esteem and are quite happy much of the time, they also have inflated, unrealistic conceptions of their own attractiveness, competence, and intelligence, feeling entitled to special treatment (Twenge & Campbell, 2009). Their inflated egos are easily pricked, causing them to retaliate against perceived offenders (Bushman & Baumeister, 1998). Narcissists also tend to drive people away over time due to their egocentric tendencies (Campbell & Buffardi, 2008). While narcissism is an extreme form of self-esteem, it should be remembered that the problems associated with the need to get and keep self-esteem apply to non-narcissists as well (Crocker & Park, 2004). Not only is the pursuit of self-esteem potentially linked to bloated self-views, it may pose problems when it is contingent on particular outcomes (Crocker, Luhtanen, Cooper, & Bouvrette, 2003). Global self-esteem often rests on evaluations of self-worth in domains such as appearance, academic/work performance, or social approval (Harter, 1999). This means that skills important for life success are sometimes neglected in order to maintain high self-esteem (like the teen male who spends most of his time playing football rather than studying or doing his homework because his self-esteem is more invested in being a good football player). This type of contingent self-esteem can be unstable, fluctuating according to our latest success or failure. Contingent self-esteem drives people to obsess about the implications of negative events for self-worth, making them more vulner
Neural Attention for Learning to Rank Questions in Community Question Answering
In real-world data, e.g., from Web forums, text is often contaminated with redundant or irrelevant content, which introduces noise into machine learning algorithms. In this paper, we apply Long Short-Term Memory networks with an attention mechanism, which can select important parts of text for the task of similar question retrieval from community Question Answering (cQA) forums. In particular, we use the attention weights both for selecting entire sentences and for selecting their subparts, i.e., words/chunks, from shallow syntactic trees. More interestingly, we apply tree kernels to the filtered text representations, thus exploiting the implicit features of the subtree space for learning question reranking. Our results show that the attention-based pruning allows for achieving the top position in the cQA challenge of SemEval 2016, with a relatively large margin over the other participants, while greatly decreasing the running time.
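A bare-bones sketch of the pruning idea: score each sentence of a forum thread against the question, turn the scores into attention-style weights, and keep only the sentences that carry most of the weight before reranking. The scoring model, the 0.8 mass threshold and the random vectors are illustrative assumptions; the paper itself uses LSTMs over shallow syntactic trees:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def select_sentences(sentence_vecs, question_vec, keep=0.8):
    """Attention-style pruning: keep the sentences carrying `keep` of the weight mass."""
    scores = sentence_vecs @ question_vec          # similarity of each sentence to the question
    weights = softmax(scores)
    order = np.argsort(weights)[::-1]
    kept, mass = [], 0.0
    for i in order:                                # greedily keep the highest-weight sentences
        kept.append(i)
        mass += weights[i]
        if mass >= keep:
            break
    return sorted(kept)

rng = np.random.default_rng(0)
print(select_sentences(rng.normal(size=(6, 16)), rng.normal(size=16)))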
Caroline: An Autonomously Driving Vehicle for Urban Environments
The 2007 DARPA Urban Challenge afforded the golden opportunity for the Technische Universität Braunschweig to demonstrate its ability to develop an autonomously driving vehicle to compete with the world’s best competitors. After several stages of qualification, our team CarOLO qualified early for the DARPA Urban Challenge Final Event and was among only eleven teams from initially 89 competitors to compete in the final. We had the ability to work together in a large group of experts, each contributing his expertise in his discipline, and significant organisational, financial and technical support from local sponsors who helped us to become the best non-US team. In this report, we describe the 2007 DARPA Urban Challenge, our contribution "Caroline", the technology and algorithms, along with her performance in the DARPA Urban Challenge Final Event on November 3, 2007. Focused research is often centered around interesting challenges and awards. The airplane industry started off with awards for the first flight over the British Channel as well as the Atlantic Ocean. The Human Genome Project, the RoboCups and the series of DARPA Grand Challenges for autonomous vehicles serve this very same purpose: to foster research and development in a particular direction. The 2007 DARPA Urban Challenge is taking place to boost development of unmanned vehicles for urban areas. Although there is an obvious direct benefit for DARPA and the U.S. government, there will also be a large number of spin-offs in technologies, tools and engineering techniques, not only for autonomous vehicles, but also for intelligent driver assistance. An intelligent driver assistance function needs to be able to understand the surroundings of the car, evaluate potential risks and help the driver to behave correctly, safely and, in case it is desired, also efficiently. These topics affect not only ordinary cars, but also buses, trucks, convoys, taxis, special-purpose vehicles in factories, airports and more. It will take a number of years before we will have a mass market for cars that actively and safely protect the passenger and the surroundings, like pedestrians, from accidents in any situation. Intelligent functions in vehicles are obviously complex systems. Major issues in this project were primarily the methods, techniques and tools for the development of such a highly critical, reliable and complex system. Adapting and combining methods from different engineering disciplines were an important prerequisite for our success. For a stringent deadline-oriented development of such a system it is necessary to rely on a clear structure of the project, a dedicated development process and an efficient engineering that fits the project’s needs. Thus, we did not only concentrate on the single software modules of our autonomously driving vehicle named Caroline, but also on the process itself. We furthermore needed an appropriate tool suite that allowed us to run the development and in particular the testing process as efficiently as possible. This includes a simulator allowing us to simulate traffic situations and therefore achieve a sufficient coverage of test situations that would have been hard to conduct in reality.
Only a good collaboration between the participating disciplines allowed us to develop Caroline in time to achieve such a good result in the 2007 DARPA Urban Challenge. In the long term, our goal was not only to participate in a competition but also to establish a sound basis for further research on how to enhance vehicle safety by implementing new technologies to provide vehicle users with reliable and robust driver assistance systems, e.g. by giving special attention to technology for sensor data fusion and robust and reliable system architectures, including new methods for simulation and testing. Therefore, the 2007 DARPA Urban Challenge provided a golden opportunity to combine expertise from several fields of science and engineering. For this purpose, the interdisciplinary team CarOLO had been founded, which drew its members from five different institutes. In addition, the team received support from a consortium of national and international companies. In this paper, we first introduce the 2007 DARPA Urban Challenge and derive the basic requirements for the car from its rules in section 2. Section 3 describes the overall architecture of the system, which is detailed in section 4, describing sensor fusion, vision, artificial intelligence and vehicle control, along with safety concepts. Section 5 describes the overall development process, discusses quality assurance and the simulator used to achieve sufficient testing coverage in detail. Section 6 finally describes the evaluation of Caroline, namely the performance during the National Qualification Event and the DARPA Urban Challenge Final Event in Victorville, California, the results we found and the conclusions to draw from our performance. The 2007 DARPA Urban Challenge is the continuation of the well-known Grand Challenge events of 2004 and 2005, which were entitled "Barstow to Primm" and "Desert Classic". To continue the tradition of having names reflect the actual task, DARPA named the 2007 event "Urban Challenge", announcing with it the nature of the mission to be accomplished. The 2004 course, as shown in Fig. 1, led from Barstow, California (A) to Primm, Nevada (B) and had a total length of about 142 miles. Prior to the main event, DARPA held a qualification, inspection and demonstration for each robot. Nevertheless, none of the original fifteen vehicles managed to come even close to the goal of successfully completing the course. With 7.4 miles as the farthest distance travelled, the challenge ended very disappointingly and no one won the $1 million cash prize. Thereafter, the DARPA program managers heightened the barriers for entering the 2005 challenge significantly. They also modified the entire quality inspection process to one involving a step-by-step application process, including a video of the car in action and the holding of so-called Site Visits, which involved the visit of DARPA officials to team-chosen test sites. The rules for these Site Visits were very strict, e.g. determining exactly how the courses had to be equipped and what obstacles had to be available. From the initial 195 teams, 118 were selected for site visits and 43 had finally made it into the National Qualification Event at the California Speedway in Ontario, California.
The NQE consisted of several tasks to be completed and obstacles to overcome autonomously by the participating vehicles, including tank traps, a tunnel, speed bumps, stationary cars to pass and many more. On October 5, 2005, DARPA announced the 23 teams that would participate in the final event. The course started in Primm, Nevada, where the 2004 challenge should have ended. With a total distance of 131.6 miles and several natural obstacles, the course was by no means easier than the one from the year before. At the end, five teams completed it and the rest did significantly better than the teams the year before. The Stanford Racing Team was awarded the $2 million first prize. In 2007, DARPA wanted to increase the difficulty of the requirements, in order to meet the goal set by Congress and the Department of Defense that by 2015 a third of the Army’s ground combat vehicles would operate unmanned. Having already proved the feasibility of crossing a desert and overcoming natural obstacles without human intervention, a tougher task now had to be mastered. As the United States Armed Forces are currently facing serious challenges in urban environments, the choice of such an environment seemed logical. DARPA used the good experience and knowledge gained from the first and second Grand Challenge events to define the tasks for the autonomous vehicles. The 2007 DARPA Urban Challenge took place in Victorville, CA as depicted in Fig. 2. The Technische Universität Braunschweig started in June 2006 as a newcomer in the 2007 DARPA Urban Challenge. Significantly supported by industrial partners, five institutes from the faculties of computer science and mechanical and electrical engineering equipped a 2006 Volkswagen Passat station wagon named "Caroline" to participate in the DARPA Urban Challenge as a "Track B" competitor. Track B competitors did not receive any financial support from DARPA, in contrast to "Track A" competitors. Track A teams had to submit technical proposals to get technology development funding awards of up to $1,000,000 in fall 2006. Track B teams had to provide a 5-minute video demonstrating the vehicle's capabilities in April 2007. Using these videos, DARPA selected 53 of the initial 89 teams to advance to the next stage in the qualification process, the "Site Visit", as already conducted in the 2005 Grand Challenge. Team CarOLO got an invitation for a Site Visit that had to take place in the United States. Therefore, team CarOLO gratefully accepted an offer from the Southwest Research Institute in San Antonio, Texas, providing a location for the Site Visit. On June 20, Caroline proved that she was ready for the National Qualification Event in fall 2007. Against great odds, she showed her abilities to the DARPA officials when a huge thunderstorm hit San Antonio during the Site Visit. The tasks to complete included the correct handling of intersection precedence, passing of vehicles, lane keeping and general safe behaviour. Afte
Human , Social , and Now Positive Psychological Capital Management : INVESTING IN PEOPLE FOR COMPETITIVE ADVANTAGE
Despite the substantial support for the importance of the human factor in achieving sustainable performance, human resources continue to primarily be given lip service, partly owing to the commonly perceived difficulties of quantifying their impact on bottom-line performance and competitive advantage. This article uses well-established business criteria to compare the contribution of human resources to sustainable competitive advantage to that of traditionally recognized resources such as financial, structural, and technological capital. The article presents support for three dimensions of human resources that warrant their inimitability and sustainability for competitive advantage. Two of these dimensions are human capital (explicit and tacit knowledge) and social capital (networks, norms/values, and trust). Then, a new dimension of human resources, positive psychological capital, which involves measurable, developable psychological capacities that can be readily enhanced and managed for performance improvement, is introduced. These capacities include self-efficacy/confidence, hope, optimism, and resiliency. There is growing evidence that human resources are crucial to organizational success, and may offer the best return on investment for sustainable competitive advantage. Jeffrey Pfeffer's extensive work, summarized in his book The Human Equation, discusses substantially supported but unfortunate findings that only about half of today's organizations and their managers believe that human resources really do matter. Moreover, only half of those organizations act upon their beliefs beyond paying lip service to this vital resource. Few organizations have adopted high performance work practices, such as 360-degree feedback, pay-for-performance, self-managed teams, employee empowerment, and other human-oriented initiatives. Furthermore, Pfeffer shows that about half of those who "believe" that human resources are their most important asset and "do something about it" actually "stick to" their beliefs and commit to these high performance work practices over time. Pfeffer carefully documents that the resulting "one-eighth" are the world-class organizations that may vary in size and technology, and across industries and countries, but share worldwide superiority in profitability, productivity, innovation, quality, and customer satisfaction. As Carly Fiorina of Hewlett-Packard Co. recently observed: "the most magical and tangible and ultimately the most important ingredient in the transformed landscape is people." Other well-known business leaders such as Intel Corp.'s Andy Grove and Microsoft's Bill Gates further support this claim in their observation that "our most important asset walks out the door every night." In other words, there finally seems to be the realization that human …
A 600MS/s 30mW 0.13µm CMOS ADC array achieving over 60dB SFDR with adaptive digital equalization
At high conversion speed, time interleaving provides a viable way of achieving analog-to-digital conversion with low power consumption, especially when combined with the successive-approximation-register (SAR) architecture that is known to scale well in CMOS technology. In this work, we showcase a digital background-equalization technique to treat the path-mismatch problem as well as individual ADC nonlinearities in time-interleaved SAR ADC arrays. This approach was first introduced to calibrate the linearity errors in pipelined ADCs [1], and subsequently extended to treat SAR ADCs [2]. In this prototype, we demonstrate the effectiveness of this technique in a compact SAR ADC array, which achieves 7.5 ENOB and a 65dB SFDR at 600MS/s while dissipating 23.6mW excluding the on-chip DLL, and exhibiting one of the best conversion FOMs among ADCs with similar sample rates and resolutions [3].
A 40-Gb/s Optical Transceiver Front-End in 45 nm SOI CMOS
A low-power, 40-Gb/s optical transceiver front-end is demonstrated in a 45-nm silicon-on-insulator (SOI) CMOS process. Both single-ended and differential optical modulators are demonstrated with floating-body transistors to reach output swings of more than 2 VPP and 4 VPP, respectively. A single-ended gain of 7.6 dB is measured over 33 GHz. The optical receiver consists of a transimpedance amplifier (TIA) and post-amplifier with 55 dB ·Ω of transimpedance over 30 GHz. The group-delay variation is ±3.9 ps over the 3-dB bandwidth and the average input-referred noise density is 20.5 pA/(√Hz) . The TIA consumes 9 mW from a 1-V supply for a transimpedance figure of merit of 1875 Ω /pJ. This represents the lowest power consumption for a transmitter and receiver operating at 40 Gb/s in a CMOS process.
Powertrace: Network-level Power Profiling for Low-power Wireless Networks
Low-power wireless networks are quickly becoming a critical part of our everyday infrastructure. Power consumption is a critical concern, but power measurement and estimation is a challenge. We present Powertrace, which to the best of our knowledge is the first system for network-level power profiling of low-power wireless systems. Powertrace uses power state tracking to estimate system power consumption and a structure called energy capsules to attribute energy consumption to activities such as packet transmissions and receptions. With Powertrace, the power consumption of a system can be broken down into individual activities, which allows us to answer questions such as “How much energy is spent forwarding packets for node X?”, “How much energy is spent on control traffic and how much on critical data?”, and “How much energy does application X account for?”. Experiments show that Powertrace is accurate to 94% of the energy consumption of a device. To demonstrate the usefulness of Powertrace, we use it to experimentally analyze the power behavior of the proposed IETF standard IPv6 RPL routing protocol and a sensor network data collection protocol. Through using Powertrace, we find the highest power consumers and are able to reduce the power consumption of data collection by 24%. It is our hope that Powertrace will help the community to make empirical energy evaluation a widely used tool in the low-power wireless research community toolbox.
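A small sketch of the power-state-tracking idea: accumulate the time a device spends in each hardware state and convert those times into energy with a per-state current figure. The state names, current draws and capsule contents are made-up placeholders, not values from the paper:

# current draw (mA) in each hardware state -- placeholder values, not real hardware figures
CURRENT_MA = {"cpu_active": 1.8, "cpu_sleep": 0.005, "radio_tx": 17.4, "radio_rx": 18.8}
VOLTAGE = 3.0  # volts

def energy_mj(state_times_s):
    """Convert per-state on-times (seconds) into energy in millijoules (mA * V * s = mJ)."""
    return {s: CURRENT_MA[s] * VOLTAGE * t for s, t in state_times_s.items()}

# e.g. times attributed to one "energy capsule", such as forwarding packets for a neighbor
capsule = {"cpu_active": 0.12, "radio_tx": 0.03, "radio_rx": 0.05, "cpu_sleep": 9.8}
per_state = energy_mj(capsule)
print(per_state, "total mJ:", round(sum(per_state.values()), 2))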
Measurement-Based Harmonic Modeling of an Electric Vehicle Charging Station Using a Three-Phase Uncontrolled Rectifier
The harmonic pollution of electric vehicle chargers is increasing rapidly as the scale of electric vehicle charging stations enlarges to meet the rising demand for electric vehicles. This paper investigates the operating characteristics of a three-phase uncontrolled rectification electric vehicle charger with passive power factor correction. A method for estimating the equivalent circuit parameters of chargers is proposed based on the measured feature data of voltage and current at the ac side and on the circuit constraint during the conduction charging process. A harmonic analytical model of the charging station is then established by dividing the same charger types into groups. The parameter estimation method and the harmonic model of the charging station are verified through simulation, experiment, and field test. The parameter sensitivities of the equivalent circuit to the charging current harmonic are also studied.
Designing the process design process
We suggest that designing design processes is an ill-posed problem which must be tackled with great care and in an evolutionary fashion. We argue it is an important activity, however, as companies today use a small percentage of the intellectual capital they own when designing, suggesting there is room for significant improvement. We discuss who in industry and academia is currently involved with designing design processes. Based on empirical studies we and others have carried out, we have based our approach to studying and supporting design processes on managing the information they generate and use. We are learning how to carry out studies more effectively with industrial partners and what features we need for managing information in order to study and improve design processes. We are even drawing some general observations about how a group's behavior affects its success at designing.
Perceived effectiveness of text vs. multimedia Location-Based Advertising messaging
The emergence of mobile communication and positioning technologies has presented advertisers and marketers with a radically innovative advertising channel: Location-Based Advertising (LBA). Despite the growing attention given to LBA, little is understood about the differential effects of text and multimedia advertising formats on mobile consumers' perceptions and behaviours. This exploratory study empirically examines the effects of multimedia advertisements vis-à-vis text-based advertisements on consumer perceptions and behaviours in a simulated LBA environment. A structural model was formulated to test their effects on consumer perceptions of entertainment, informativeness and irritation. Results show that multimedia LBA messages lead to a more favourable attitude, increase the intention to use the LBA application, and have a significant impact on purchase intention. Furthermore, this study indicates the role of multimedia as a double-edged sword: on the one hand, it suggests that multimedia impose a higher level of irritation; on the other hand, it suggests that multimedia enhance the informativeness and entertainment value of LBA. Implications for theory and practice are discussed.
The role of infant sleep in intergenerational transmission of trauma.
INTRODUCTION Children of parents who experienced trauma often present emotional and behavioral problems, a phenomenon named inter-generational transmission of trauma (IGTT). Combined with antenatal factors, parenting and the home environment contribute to the development and maintenance of sleep problems in children. In turn, infant sleep difficulty predicts behavioral and emotional problems later in life. The aim of this study was to investigate whether infant sleep problems predict early behavioral problems indicative of IGTT. METHODS 184 first-time mothers (ages 18-47) participated. N=83 had a history of childhood abuse and posttraumatic stress disorder (PTSD+); 38 women reported childhood abuse but did not meet diagnostic criteria for PTSD (PTSD-); and the control group (N=63) had neither a history of abuse nor psychopathology (CON). Depression, anxiety, and sleep difficulty were assessed in the mothers at 4 months postpartum. Infant sleep was assessed using the Child Behavior Sleep Questionnaire (CSHQ). Outcome measures included the Parent Bonding Questionnaire (PBQ) at 4 months and the Child Behavior Check List (CBCL) at 18 months. RESULTS Infants of PTSD+ mothers scored higher on the CSHQ and had more separation anxiety around bedtime than PTSD- and CON, and the severity of their symptoms was correlated with the degree of sleep disturbance. Maternal postpartum depression symptoms mediated impaired mother-infant bonding, while infant sleep disturbance contributed independently to impaired bonding. Mother-infant bonding at 4 months predicted more behavioral problems at 18 months. CONCLUSIONS Infant sleep difficulties and maternal mood play independent roles in infant-mother bonding disturbance, which in turn predicts behavioral problems at 18 months.
Learning from mistakes: towards a correctable learning algorithm
Many learning algorithms generate complex models that are difficult for a human to interpret, debug, and extend. In this paper, we address this challenge by proposing a new learning paradigm called correctable learning, where the learning algorithm receives external feedback about which data examples are incorrectly learned. We define a set of metrics which measure the correctability of a learning algorithm. We then propose a simple and efficient correctable learning algorithm which learns local models for different regions of the data space. Given an incorrect example, our method samples data in the neighborhood of that example and learns a new, more correct local model over that region. Experiments over multiple classification and ranking datasets show that our correctable learning algorithm offers significant improvements over the state-of-the-art techniques.
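A simple sketch of the local-correction idea described above: when an example is reported as mispredicted, learn a local model on its neighborhood and let that local model override the global one for nearby queries. The model choice (a majority vote over neighbors), the fixed radius and the data layout are illustrative assumptions, not the paper's algorithm:

import numpy as np

class CorrectableModel:
    """A global predictor plus local patches learned from correction feedback."""

    def __init__(self, global_predict, X, y, radius=1.0):
        self.global_predict = global_predict   # callable: x -> label
        self.X, self.y = X, y                  # reference data (labels are non-negative ints)
        self.radius = radius
        self.patches = []                      # list of (center, local label)

    def correct(self, x_wrong):
        """Feedback: x_wrong was mispredicted; fit a local model around it."""
        d = np.linalg.norm(self.X - x_wrong, axis=1)
        neighbors = self.y[d <= self.radius]
        if neighbors.size == 0:                # nothing nearby to learn from
            return
        local_label = np.bincount(neighbors).argmax()   # here the "local model" is a majority vote
        self.patches.append((np.asarray(x_wrong, dtype=float), local_label))

    def predict(self, x):
        for center, label in self.patches:     # local patches take precedence over the global model
            if np.linalg.norm(x - center) <= self.radius:
                return label
        return self.global_predict(x)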
Common genetic influences underlie comorbidity of migraine and endometriosis.
We examined the co-occurrence of migraine and endometriosis within the largest known collection of families containing multiple women with surgically confirmed endometriosis and in an independent sample of 815 monozygotic and 457 dizygotic female twin pairs. Within the endometriosis families, a significantly increased risk of migrainous headache was observed in women with endometriosis compared to women without endometriosis (odds ratio [OR] 1.57, 95% confidence interval [CI]: 1.12-2.21, P=0.009). Bivariate heritability analyses indicated no evidence for common environmental factors influencing either migraine or endometriosis but significant genetic components for both traits, with heritability estimates of 69 and 49%, respectively. Importantly, a significant additive genetic correlation (r(G) = 0.27, 95% CI: 0.06-0.47) and bivariate heritability (h(2)=0.17, 95% CI: 0.08-0.27) was observed between migraine and endometriosis. Controlling for the personality trait neuroticism made little impact on this association. These results confirm the previously reported comorbidity between migraine and endometriosis and indicate common genetic influences completely explain their co-occurrence within individuals. Given pharmacological treatments for endometriosis typically target hormonal pathways and a number of findings provide support for a relationship between hormonal variations and migraine, hormone-related genes and pathways are highly plausible candidates for both migraine and endometriosis. Therefore, taking into account the status of both migraine and endometriosis may provide a novel opportunity to identify the genes underlying them. Finally, we propose that the analysis of such genetically correlated comorbid traits can increase power to detect genetic risk loci through the use of more specific, homogenous and heritable phenotypes.
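For reference, the two bivariate quantities reported above are conventionally defined as follows (a sketch of the standard twin-model definitions, not necessarily the authors' exact parameterization):

\[
r_G = \frac{\mathrm{Cov}_G(M, E)}{\sqrt{\mathrm{Var}_G(M)\,\mathrm{Var}_G(E)}},
\qquad
h^2_{\mathrm{biv}} = \frac{\mathrm{Cov}_G(M, E)}{\mathrm{Cov}_P(M, E)},
\]

where M and E denote liability to migraine and endometriosis, \(\mathrm{Cov}_G\) and \(\mathrm{Var}_G\) are genetic (co)variances, and \(\mathrm{Cov}_P\) is the phenotypic covariance between the two traits.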
L2 Regularization versus Batch and Weight Normalization
Batch Normalization is a commonly used trick to improve the training of deep neural networks. Such networks are typically also trained with L2 regularization, also called weight decay, ostensibly to prevent overfitting. However, we show that L2 regularization has no regularizing effect when combined with normalization. Instead, regularization influences the scale of the weights, and thereby the effective learning rate. We investigate this dependence, both in theory and experimentally. We show that popular optimization methods such as ADAM only partially eliminate the influence of normalization on the learning rate. This leads to a discussion on other ways to mitigate this issue.
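A quick numerical check of the central claim (toy shapes assumed): a batch-normalized layer is invariant to rescaling of its incoming weights, so an L2 penalty cannot change the learned function; it only shrinks the weight norm, which in turn changes the effective learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 32))     # a batch of inputs
W = rng.normal(size=(32, 64))      # pre-activation weights

def bn(z, eps=1e-5):
    # batch normalization without learned scale/shift, for clarity
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

out_full = bn(x @ W)
out_decayed = bn(x @ (0.1 * W))    # weights shrunk, as weight decay would do
print(np.abs(out_full - out_decayed).max())   # ~0 (up to the eps term): same function
# Gradients w.r.t. the smaller weights are ~10x larger, so the effective step size grows.
```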
Portfolio Optimization with Mental Accounts
We integrate appealing features of Markowitz’s mean-variance portfolio theory (MVT) and Shefrin and Statman’s behavioral portfolio theory (BPT) into a new mental accounting (MA) framework. Features of the MA framework include an MA structure of portfolios, a definition of risk as the probability of failing to reach the threshold level in each mental account, and attitudes toward risk that vary by account. We demonstrate a mathematical equivalence between MVT, MA, and risk management using value at risk (VaR). The aggregate allocation across MA subportfolios is mean-variance efficient with short selling. Short-selling constraints on mental accounts impose very minor reductions in certainty equivalents, only if binding for the aggregate portfolio, offsetting utility losses from errors in specifying risk-aversion coefficients in MVT applications. These generalizations of MVT and BPT via a unified MA framework result in a fruitful connection between investor consumption goals and portfolio production.
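To make the risk definition concrete: with normally distributed returns, "probability of failing to reach the threshold level H is at most alpha" becomes the linear mean/standard-deviation restriction used below, which is exactly a VaR-type constraint. The asset inputs, thresholds, and solver are assumptions for illustration, not the authors' calibration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

mu = np.array([0.05, 0.10, 0.25])            # assumed expected returns
cov = np.diag([0.05, 0.20, 0.80]) ** 2       # assumed covariance (uncorrelated for simplicity)

def mental_account(threshold, alpha):
    """Maximize expected return s.t. P(return < threshold) <= alpha, weights summing to 1."""
    def neg_ret(w):
        return -w @ mu
    def prob_constraint(w):
        # P(R < H) <= alpha  <=>  mu_p - H >= Phi^{-1}(1 - alpha) * sigma_p
        return w @ mu - threshold - norm.ppf(1 - alpha) * np.sqrt(w @ cov @ w)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": prob_constraint}]
    return minimize(neg_ret, np.ones(len(mu)) / len(mu), constraints=cons).x

print(mental_account(threshold=-0.05, alpha=0.05))   # retirement-style account: low failure tolerance
print(mental_account(threshold=0.10, alpha=0.25))    # aspirational account: higher failure tolerance
```

Aggregating such subportfolio weights then gives the overall allocation whose mean-variance efficiency the paper analyzes.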
A compiler for throughput optimization of graph algorithms on GPUs
Writing high-performance GPU implementations of graph algorithms can be challenging. In this paper, we argue that three optimizations called throughput optimizations are key to high performance for this application class. These optimizations describe a large implementation space, making it unrealistic for programmers to implement them by hand. To address this problem, we have implemented these optimizations in a compiler that produces CUDA code from an intermediate-level program representation called IrGL. Compared to state-of-the-art handwritten CUDA implementations of eight graph applications, code generated by the IrGL compiler is up to 5.95x faster (median 1.4x) for five applications and never more than 30% slower for the others. Throughput optimizations contribute an improvement of up to 4.16x (median 1.4x) to the performance of unoptimized IrGL code.
Gender differences in seasonal affective disorder (SAD)
146 women and 44 men (out- and inpatients; treatment sample) with Seasonal Affective Disorder (SAD; winter type) were tested for gender differences in demographic, clinical and seasonal characteristics. The sex ratio in prevalence (women : men) was 3.6 : 1 in unipolar depressives and 2.4 : 1 in bipolars (I and II). Sex ratios also varied between different birth cohorts, and men seemed to underreport symptoms. There was no significant difference in symptom profiles between the genders, although a trend-level preponderance of increased eating and altered food selection occurred in women. The female group suffered significantly more often from thyroid disorders and from greater mood variations due to dark and cloudy weather. Women referred themselves to our clinic significantly more frequently than men. In summary, gender differences in SAD were similar to those in non-seasonal depression: the extent of gender differences in the prevalence of affective disorders appears to depend on case criteria such as diagnosis (unipolar vs. bipolar), birth cohort and the number of symptoms required as a minimum threshold for diagnosis. On the basis of our data and of the literature, we support the idea of applying sex-specific diagnostic criteria for diagnosing depression.
A Survey on Mobile Anchor Node Assisted Localization in Wireless Sensor Networks
Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints on cost and power consumption make it infeasible to equip each sensor node in the network with a global positioning system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use mobile anchor nodes (MANs), which are equipped with GPS units, move among unknown nodes, and periodically broadcast their current locations to help nearby unknown nodes with localization. A considerable body of research has addressed the mobile anchor node assisted localization (MANAL) problem. However, to the best of our knowledge, no updated surveys on MANAL reflecting recent advances in the field have been presented in the past few years. This survey presents a review of the most successful MANAL algorithms, focusing on the achievements made in the past decade, and aims to become a starting point for researchers who are initiating their endeavors in the MANAL research field. In addition, we seek to present a comprehensive review of the recent breakthroughs in the field, providing links to the most interesting and successful advances in this research area.
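For readers new to the area, the basic geometric step that most MANAL schemes build on can be written in a few lines: an unknown node collects several (anchor position, estimated range) pairs from a mobile anchor's broadcasts and solves a linearized least-squares problem. The measurements below are made up.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position estimate from >= 3 anchor positions and range estimates.

    Subtracting the first range equation from the others linearizes the system:
        2 (a_i - a_0)^T x = (|a_i|^2 - |a_0|^2) - (r_i^2 - r_0^2)
    """
    a = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (a[1:] - a[0])
    b = (np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2)) - (r[1:] ** 2 - r[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# A mobile anchor broadcasting from three waypoints; ranges are slightly noisy (assumed values).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([4.0, 3.0])
ranges = [np.linalg.norm(true_pos - np.array(p)) + 0.1 for p in anchors]
print(trilaterate(anchors, ranges))   # close to (4, 3)
```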
Above the glass ceiling? A comparison of matched samples of female and male executives.
In this study the authors compare career and work experiences of executive women and men. Female (n = 51) and male (n = 56) financial services executives in comparable jobs were studied through archival information on organizational outcomes and career histories, and survey measures of work experiences. Similarities were found in several organizational outcomes, such as compensation, and many work attitudes. Important differences were found, however, with women having less authority, receiving fewer stock options, and having less international mobility than men. Women at the highest executive levels reported more obstacles than lower level women. The gender differences coupled with women's lower satisfaction with future career opportunities raise questions about whether women are truly above the glass ceiling or have come up against a 2nd, higher ceiling.
Improving Gender Classification of Blog Authors
The problem of automatically classifying the gender of a blog author has important applications in many commercial domains. Existing systems mainly use features such as words, word classes, and POS (part-of-speech) n-grams for classification learning. In this paper, we propose two new techniques to improve the current result. The first technique introduces a new class of features which are variable length POS sequence patterns mined from the training data using a sequence pattern mining algorithm. The second technique is a new feature selection method which is based on an ensemble of several feature selection criteria and approaches. Empirical evaluation using a real-life blog data set shows that these two techniques improve the classification accuracy of the current state-of-the-art methods significantly.
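A rough, simplified illustration of the first technique: contiguous POS n-grams (a simplification of the gapped, variable-length sequence patterns the paper actually mines) fed to a standard classifier, assuming posts are already POS-tagged. The tag sequences and labels are toy data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each blog post is represented by its POS tag sequence (toy, pre-tagged data).
posts_pos = ["PRP VBP RB JJ NN", "DT NN VBZ JJ", "PRP VBD DT JJ NN NN", "NN IN DT NN VBZ"]
labels    = ["female", "male", "female", "male"]

model = make_pipeline(
    # token n-grams of length 1..4 over the tag stream = contiguous POS sequence patterns
    CountVectorizer(ngram_range=(1, 4), token_pattern=r"\S+"),
    LogisticRegression(max_iter=1000),
)
model.fit(posts_pos, labels)
print(model.predict(["PRP VBP DT JJ NN"]))
```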
A Low-Bandwidth Network File System
Users rarely consider running network file systems over slow or wide-area networks, as the performance would be unacceptable and the bandwidth consumption too high. Nonetheless, efficient remote file access would often be desirable over such networks---particularly when high latency makes remote login sessions unresponsive. Rather than run interactive programs such as editors remotely, users could run the programs locally and manipulate remote files through the file system. To do so, however, would require a network file system that consumes less bandwidth than most current file systems. This paper presents LBFS, a network file system designed for low-bandwidth networks. LBFS exploits similarities between files or versions of the same file to save bandwidth. It avoids sending data over the network when the same data can already be found in the server's file system or the client's cache. Using this technique in conjunction with conventional compression and caching, LBFS consumes over an order of magnitude less bandwidth than traditional network file systems on common workloads.
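A minimal sketch of the underlying idea of finding data the receiver already has: split files at content-defined boundaries and transfer only chunks whose hash is unknown to the other side. LBFS itself uses Rabin fingerprints and SHA-1 chunk digests; the toy rolling hash and sizes below are assumptions.

```python
import hashlib
import random

def chunks(data, mask=0x1FFF):
    """Split data at content-defined boundaries (average chunk ~8 KiB with a 13-bit mask)."""
    out, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF    # toy rolling hash over roughly the last 32 bytes
        if (h & mask) == mask:             # boundary depends only on nearby content
            out.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out

rng = random.Random(0)
old = bytes(rng.randrange(256) for _ in range(200_000))
new = old[:150_000] + b"a small edit" + old[150_000:]

server_has = {hashlib.sha1(c).hexdigest() for c in chunks(old)}
to_send = [c for c in chunks(new) if hashlib.sha1(c).hexdigest() not in server_has]
print(sum(map(len, to_send)), "of", len(new), "bytes actually need to cross the network")
```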
A survey of techniques for internet traffic classification using machine learning
The research community has begun looking for IP traffic classification techniques that do not rely on 'well known' TCP or UDP port numbers, or interpreting the contents of packet payloads. New work is emerging on the use of statistical traffic characteristics to assist in the identification and classification process. This survey paper looks at emerging research into the application of Machine Learning (ML) techniques to IP traffic classification - an inter-disciplinary blend of IP networking and data mining techniques. We provide context and motivation for the application of ML techniques to IP traffic classification, and review 18 significant works that cover the dominant period from 2004 to early 2007. These works are categorized and reviewed according to their choice of ML strategies and primary contributions to the literature. We also discuss a number of key requirements for the employment of ML-based traffic classifiers in operational IP networks, and qualitatively critique the extent to which the reviewed works meet these requirements. Open issues and challenges in the field are also discussed.
Cognitive Effects of a Structural Overview in a Hypertext
Disorientation and navigation inefficiency are the consequences of the fragmented and incoherent structure of most hypertexts. To avoid these negative effects, researchers recommend—among other things—an interface with a structural overview of the relations between sections. Some authors have found that with such an overview, information is looked up faster and remembered better. This study examined whether a structural overview also leads to a deeper understanding. Forty students read a hypertext about the effects of ultraviolet radiation in one of two presentation conditions (structural overview and list). In the list condition, the same topics were mentioned as in the overview condition, but just in the format of a list. After reading, they answered textbase questions which measured their recognition and also inference questions supposed to measure their situation model constructed from the information read. The last type of questions indicated the readers’ understanding of the text. On textbase questions, subjects with low as well as high prior knowledge scored equally well in both conditions. In contrast to our expectation, the overview did not improve the recognition of main points. However, on situation model questions low prior knowledge subjects scored significantly lower in the overview condition than in the list condition. These results supported our hypothesis that a structural overview may hinder the understanding of less knowledgeable readers, because it draws their attention to the textual macrostructure at the expense of attention to the microstructure of the text. Introduction Hypertext is emerging as an increasingly prevalent form of information presentation. The links in this medium make it possible to flexibly combine linear and hierarchical structures of information. Links are connections between sections on different levels (pages) in the hierarchy of the text. However, as the number of links goes up, the structure of a hypertext becomes more fragmentary. The consequences of a fragmented structure are: cognitive overload, navigation inefficiency and disorientation (Conklin, 1987; Gärling and Golledge, 1989; Oliver and Oliver, 1996; Van Nimwegen et al., in press). To avoid these negative effects, researchers recommend—among other things— an interface with a structural overview of the links between sections (Jonassen, 1986; Halasz and Conklin, 1989; Heller, 1990; Thüring et al., 1996). With the help of overviews that make clear which sections on which pages are present, users have less problems with looking up task-relevant information (Nielsen, 1995). In view of the educational possibilities of hypertext, Dee-Lucas (1996) investigated the effects of a structural overview. She found that with such an overview as browser, information is looked up faster. Besides this, she reported that subjects who used the overview recalled more main points of the text. However, it remains the question of whether a structural overview also leads to better comprehension. This question formed the focus of our study. Theoretical background According to Van Dijk and Kintsch (1983) there exist three levels whereby readers mentally represent textual information (see also Van Oostendorp and Goldman, 1998). The first level concerns the surface structure of sentences. On this level, readers remember, for instance, the sequence of words in a sentence. A text is remembered at the textbase level. The representation at this level can be seen as a network of related propositions. 
However, what readers really understand of a text and integrate with prior knowledge is called the situation model (Van Dijk and Kintsch, 1983; Van Oostendorp, 1996). Because of the importance of comprehension and application of knowledge in education, the construction of situation models is the most important objective of instructional texts (Perrig and Kintsch, 1985). For this reason, the question of what effect the structure of a text has on the construction of situation models is highly relevant. McNamara et al. (1996) examined the role of structure in the comprehension of science texts and based their research on the Construction-Integration (CI) model developed by W. Kintsch (1988). Structure is defined by the extent of connectiveness between propositions in the textbase. In constructing a textbase, the reader must infer the semantic relations between propositions underlying sentences. The process of drawing these inferences is relatively simple when the surface level of the text has a well developed or explicit structure. In this situation, the semantic relations between propositions are almost explicitly present. A text has a locally coherent structure if propositions refer clearly to each other, for example, by the presence of referents. A text has a globally coherent structure if the semantic relations across parts, such as paragraphs and larger sections, are fully expressed. A writer may explicitly link each section to other sections and to the overall topic with topic headers and sentences which explain these links, providing the text with a globally coherent structure. The crux of the CI-model is that readers can only integrate the textbase and prior knowledge successfully into a situation model if the textbase is sufficiently interrelated. Thus, a well developed structure is a necessary condition for the integration and situation model construction. If the text makes this integration process too difficult by lack of structure, the reader must try to (re)construct the relevant semantic relations on the basis of prior knowledge. Results reported by McNamara et al. (1996) are in line with this notion. They measured the textbase by free recall and recognition questions and the situation model by inference and problem solving questions (see also Van Oostendorp and Bonebakker (1998) for the plausibility of this distinction). McNamara et al. found interactions between level of prior knowledge and the structure of a text on comprehension, that is, on the construction of the situation model. In their experiment, a well developed structure improved the comprehension of readers with low prior knowledge, but impaired comprehension of high prior knowledge readers. This result fits the CI-model of Kintsch (1988); for answering inference and problem solving questions, readers must have developed a coherent textbase representation during the reading. The coherence of the textbase depends on the structure of the text: a well developed structure leads to a coherent textbase, a weak structure to an incoherent textbase. If the text has a weak structure, then the textbase is not coherent and, consequently, a reader must derive the required semantic relations between propositions from background knowledge. Low knowledgeable readers encounter more problems here than high knowledgeable readers.
Thus, low knowledgeable readers can only develop an adequate situation model if the textbase is coherent. In contrast, subjects with high prior knowledge scored better on situational questions if they had read the incoherent version of the experimental text. McNamara et al. explain this result by supposing that a strong textual structure hinders the understanding of knowledgeable readers, because it reduces the amount of active processing during reading. Easing the reader’s burden of figuring out the meaning of the text could result in less effective learning (see also Kintsch, 1990). McNamara et al. further established that a strong textual structure improved textbase recall. This result was found earlier by Beyer (1990) and Britton and Gulgoz (1991). In correspondence with the theory of Van Dijk and Kintsch (1983), McNamara et al. (1996) distinguished macropropositions and micropropositions. Macropropositions concern the central, summarizing idea units of a text, micropropositions concern details. They reported that both a strong local and a strong global structure significantly supported the recall of macropropositions, but structure manipulation did not have a significant effect on the recall of micropropositions. They also found that a strong global structure improved the recall of macropropositions better than a strong local structure. The effect that a strong textual structure particularly supported the recall of macropropositions corresponds to the result found by Dee-Lucas (1996) in her study of hypertext. Hypotheses In our study we presume that the effect of a structural overview on the situation model that learners construct of a hypertext deviates in one respect from the study of McNamara et al. (1996). We agree that the local structure of a text is highly relevant in constructing a situation model in the case of low prior knowledge. Indeed, particularly less knowledgeable readers must attentively keep track of the local semantic relations to attain comprehension, because they lack the prior knowledge to infer these relations themselves. The CI-model of Kintsch (1988) and the study of McNamara et al. (1996) provide evidence of this assumption. However, in the case of a structural overview adjoined to a (hyper)text that has a strong local structure, we suppose that during reading learners do not use the local structure that is in principle available, because of the negative attentional effect of the global structure added by the overview. This effect particularly hinders the comprehension of readers with low prior knowledge: their construction of a situation model depends more on the local structure of the text than the model construction of readers with sufficient prior knowledge does. Thus, in contrast to McNamara et al., we expect that a well developed global structure provided by the structural overview hinders
Semi-supervised auto-encoder based on manifold learning
The auto-encoder is a popular representation learning technique which can capture the generative model of data via an encoding and decoding procedure, typically driven by reconstruction errors in an unsupervised way. In this paper, we propose a semi-supervised manifold learning based auto-encoder (named semAE). semAE is based on a regularized auto-encoder framework which leverages semi-supervised manifold learning to impose regularization based on the encoded representation. Our proposed approach suits the more practical scenario in which a small number of labeled examples are available in addition to a large number of unlabeled examples. Experiments are conducted on several well-known benchmarking datasets to validate the efficacy of semAE from the aspects of both representation and classification. The comparisons to state-of-the-art representation learning methods on classification performance in semi-supervised settings demonstrate the superiority of our approach.
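A minimal sketch of one way such a regularized auto-encoder can be written (my own formulation with PyTorch and a k-NN graph; the paper's exact loss and architecture may differ): reconstruction on all points, a graph penalty that keeps encodings of neighboring points close, and a classification term on the few labeled points.

```python
import torch
import torch.nn as nn

class SemAE(nn.Module):
    def __init__(self, d_in, d_hid=32, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(), nn.Linear(d_hid, d_hid))
        self.dec = nn.Sequential(nn.Linear(d_hid, d_hid), nn.ReLU(), nn.Linear(d_hid, d_in))
        self.clf = nn.Linear(d_hid, n_classes)

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z), self.clf(z)

def knn_edges(x, k=5):
    # symmetric-ish k-NN graph over the inputs (unsupervised manifold structure)
    d = torch.cdist(x, x)
    idx = d.topk(k + 1, largest=False).indices[:, 1:]
    src = torch.arange(x.size(0)).repeat_interleave(k)
    return src, idx.reshape(-1)

def semae_loss(model, x, edges, y_lab=None, lab_idx=None, lam=0.1, gamma=1.0):
    z, x_hat, logits = model(x)
    loss = nn.functional.mse_loss(x_hat, x)                       # reconstruction (all data)
    src, dst = edges
    loss = loss + lam * ((z[src] - z[dst]) ** 2).sum(1).mean()    # manifold regularizer on encodings
    if lab_idx is not None:                                       # few labeled points
        loss = loss + gamma * nn.functional.cross_entropy(logits[lab_idx], y_lab)
    return loss

x = torch.randn(200, 10)
y_lab, lab_idx = torch.randint(0, 2, (20,)), torch.arange(20)
model, edges = SemAE(10), knn_edges(x)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    semae_loss(model, x, edges, y_lab, lab_idx).backward()
    opt.step()
```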
One-year follow-up evaluation of Project Towards No Drug Abuse (TND-4).
OBJECTIVES This paper describes the one-year outcomes of the fourth experimental trial of Project Towards No Drug Abuse. Two theoretical content components of the program were examined to increase our understanding of the relative contribution of each to the effectiveness of the program. METHODS High schools in Southern California (n=18) were randomly assigned to one of three conditions: cognitive perception information curriculum, cognitive perception information+behavioral skills curriculum, or standard care (control). The curricula were delivered to high school students (n=2734) by project health educators and regular classroom teachers. Program effectiveness was assessed with both dichotomous and continuous measures of 30-day substance use at baseline and one-year follow-up. RESULTS Across all program schools, the two different curricula failed to significantly reduce dichotomous measures of substance use (cigarette, alcohol, marijuana, and hard drugs) at one-year follow-up. Both curricula exerted an effect only on the continuous measure of hard drug use, indicating a 42% (p=0.02) reduction in the number of times hard drugs were used in the last 30 days in the program groups relative to the control. CONCLUSIONS The lack of main effects of the program on dichotomous outcomes was contrary to previous studies. An effect on an ordinal count measure of hard drug use among both intervention conditions replicates previous work and suggests that this program effect may have been due to changes in cognitive misperception of drug use rather than behavioral skill.
Learning to Discriminate Noises for Incorporating External Information in Neural Machine Translation
Previous studies show that incorporating external information could improve the translation quality of Neural Machine Translation (NMT) systems. However, there are inevitably noises in the external information, severely reducing the benefit that the existing methods could receive from the incorporation. To tackle the problem, this study pays special attention to the discrimination of the noises during the incorporation. We argue that there exist two kinds of noise in this external information, i.e. global noise and local noise, which affect the translations for the whole sentence and for some specific words, respectively. Accordingly, we propose a general framework that learns to jointly discriminate both the global and local noises, so that the external information could be better leveraged. Our model is trained on the dataset derived from the original parallel corpus without any external labeled data or annotation. Experimental results in various real-world scenarios, language pairs, and neural architectures indicate that discriminating noises contributes to significant improvements in translation quality by being able to better incorporate the external information, even in very noisy conditions.
The division of the world, 1941-1955
This book should be of interest to students of twentieth century history from introductory to research level. The text argues that neither "Soviet Expansionism" nor "American Imperialism" can be seen as the central factor of the Cold War.
Employing Program Semantics for Malware Detection
In recent years, malware has emerged as a critical security threat. In addition, malware authors continue to embed numerous anti-detection features to evade the existing malware detection approaches. Against this advanced class of malicious programs, dynamic behavior-based malware detection approaches outperform the traditional signature-based approaches by neutralizing the effects of obfuscation and morphing techniques. The majority of dynamic behavior detectors rely on system calls to model the infection and propagation dynamics of malware. However, these approaches do not account for an important anti-detection feature of modern malware, i.e., the system-call injection attack. This attack allows malicious binaries to inject irrelevant and independent system calls during program execution, thus modifying the execution sequences and defeating existing system-call-based detection. To address this problem, we propose an evasion-proof solution that is not vulnerable to system-call injection attacks. Our proposed approach characterizes program semantics using the asymptotic equipartition property (AEP), mainly applied in the information-theoretic domain. The AEP allows us to extract information-rich call sequences that are further quantified to detect the malicious binaries. Furthermore, the proposed detection model is less vulnerable to call-injection attacks as the discriminating components are not directly visible to malware authors. We run a thorough set of experiments to evaluate our solution and compare it with the existing system-call-based malware detection techniques. The results demonstrate that the proposed solution is effective in identifying real malware instances.
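A toy illustration of the AEP intuition, not the paper's pipeline: fit a simple first-order Markov model of system-call transitions on benign traces and score new traces by their per-transition negative log-likelihood (an empirical entropy rate); atypical, information-rich traces stand out. Call names and traces are invented.

```python
import numpy as np

benign = [["open", "read", "read", "close"] * 20, ["open", "mmap", "read", "close"] * 20]
suspect = ["open", "read", "close"] * 5 + ["socket", "connect", "write", "fork"] * 10

def markov_model(traces, alphabet):
    """First-order transition probabilities with add-one (Laplace) smoothing."""
    idx = {c: i for i, c in enumerate(sorted(alphabet))}
    T = np.ones((len(idx), len(idx)))
    for t in traces:
        for a, b in zip(t, t[1:]):
            T[idx[a], idx[b]] += 1
    return idx, T / T.sum(axis=1, keepdims=True)

def entropy_rate(trace, idx, P):
    """Per-transition negative log-likelihood of a trace under the model, in bits per call."""
    logs = [np.log2(P[idx[a], idx[b]]) for a, b in zip(trace, trace[1:])]
    return -float(np.mean(logs))

alphabet = {c for t in benign for c in t} | set(suspect)
idx, P = markov_model(benign, alphabet)
print("benign :", round(entropy_rate(benign[0], idx, P), 2), "bits/call")
print("suspect:", round(entropy_rate(suspect, idx, P), 2), "bits/call (much higher: atypical)")
```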
Querying semantic web data with SPARQL
The Semantic Web is the initiative of the W3C to make information on the Web readable not only by humans but also by machines. RDF is the data model for Semantic Web data, and SPARQL is the standard query language for this data model. In the last ten years, we have witnessed a constant growth in the amount of RDF data available on the Web, which have motivated the theoretical study of some fundamental aspects of SPARQL and the development of efficient mechanisms for implementing this query language. Some of the distinctive features of RDF have made the study and implementation of SPARQL challenging. First, as opposed to usual database applications, the semantics of RDF is open world, making RDF databases inherently incomplete. Thus, one usually obtains partial answers when querying RDF with SPARQL, and the possibility of adding optional information if present is a crucial feature of SPARQL. Second, RDF databases have a graph structure and are interlinked, thus making graph navigational capabilities a necessary component of SPARQL. Last, but not least, SPARQL has to work at Web scale! RDF and SPARQL have attracted interest from the database community. However, we think that this community has much more to say about these technologies, and, in particular, about the fundamental database problems that need to be solved in order to provide solid foundations for the development of these technologies. In this paper, we survey some of the main results about the theory of RDF and SPARQL putting emphasis on some research opportunities for the database community.
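The "optional information" point is easy to see in a runnable example: a tiny made-up graph queried with the rdflib Python library, where the OPTIONAL pattern returns a partial answer for the resource that lacks an email instead of dropping it.

```python
from rdflib import Graph

g = Graph()
g.parse(format="turtle", data="""
@prefix ex: <http://example.org/> .
ex:alice ex:name "Alice" ; ex:email "alice@example.org" .
ex:bob   ex:name "Bob" .
""")

q = """
PREFIX ex: <http://example.org/>
SELECT ?name ?email WHERE {
    ?person ex:name ?name .
    OPTIONAL { ?person ex:email ?email }   # open-world: email may simply be absent
}
"""
for name, email in g.query(q):
    print(name, email)   # Bob is still returned, with email = None
```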
Immediate and delayed effects of risperidone on cerebral metabolism in neuroleptic naïve schizophrenic patients: correlations with symptom change.
OBJECTIVE Different symptom patterns have been shown to be associated with specific patterns of cerebral metabolic activity in schizophrenia. Treatment with various neuroleptic drugs results in decreased metabolism in frontal cortical regions. The temporal and regional relation between changes in metabolism and symptom improvement after treatment with risperidone was studied in eight previously unmedicated schizophrenic patients. METHOD Cerebral metabolic activity was measured using PET before neuroleptic exposure, after the first dose of risperidone, and after 6 weeks of treatment. Pearson correlations were calculated for regions of significant change in metabolism and symptom change. RESULTS After 6 weeks of treatment significant deactivations were seen in the left lateral cortical frontal region and medial frontal cortex. Significant changes were detectable in the medial frontal region 90 minutes after the first dose of risperidone. Patients with higher baseline activity in the identified medial frontal cluster had higher baseline positive symptom scores and reduction in medial frontal metabolism was correlated with reduction in positive symptom score. CONCLUSION The evidence suggests that the reduction in medial-frontal activity after treatment with risperidone is a direct effect of risperidone and not a consequence of symptom improvement. Reduction of medial frontal metabolism may be one of the physiological mechanisms by which risperidone alleviates symptoms of psychosis in schizophrenia.
The fetal three-vessel and tracheal view revisited.
The routine use of four-chamber screening of the fetal heart was pioneered in the early 1980s and has been shown to detect reliably mainly univentricular hearts in the fetus. Many conotruncal anomalies and ductal-dependent lesions may, however, not be detected with the four-chamber view alone and additional planes are needed. The three-vessel and tracheal (3VT) view is a transverse plane in the upper mediastinum demonstrating simultaneously the course and the connection of both the aortic and ductal arches, their relationship to the trachea and the visualization of the superior vena cava. The purpose of the article is to review the two-dimensional anatomy of this plane and the contribution of colour Doppler and to present a checklist to be achieved on screening ultrasound. Typical suspicions include the detection of abnormal vessel number, abnormal vessel size, abnormal course and alignment and abnormal colour Doppler pattern. Anomalies such as pulmonary and aortic stenosis and atresia, aortic coarctation, interrupted arch, tetralogy of Fallot, common arterial trunk, transposition of the great arteries, right aortic arch, double aortic arch, aberrant right subclavian artery, left superior vena cava are some of the anomalies showing an abnormal 3VT image. Recent studies on the comprehensive evaluation of the 3VT view and adjacent planes have shown the potential of visualizing the thymus and the left brachiocephalic vein during fetal echocardiography and in detecting additional rare conditions. National and international societies are increasingly recommending the use of this plane during routine ultrasound in order to improve prenatal detection rates of critical cardiac defects.
Insights into layout patterns of mobile user interfaces by an automatic analysis of android apps
Mobile phones recently evolved into smartphones that provide a wide range of services. One aspect that differentiates smartphones from their predecessors is the app model: users can easily install third-party applications from central mobile application stores. In this paper we present a process to gain insights into mobile user interfaces on a large scale. Using the developed process we automatically disassemble and analyze the 400 most popular free Android applications. The results suggest that the complexity of the user interface differs between application categories. Further, we analyze interface layouts to determine the most frequent interface elements and identify combinations of interface widgets. The most common combination, which consists of three nested elements, covers 5.43% of all interface elements and is more frequent than progress bars and checkboxes. The ten most frequent patterns together cover 21.13% of all interface elements; they are all more frequent than common widgets such as radio buttons and spinners. We argue that the combinations identified not only provide insights about current mobile interfaces, but also enable the development of new optimized widgets.
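The counting step generalizes to any collection of decompiled layout XML files; the parent/child pattern definition and directory layout below are assumptions for illustration, not the authors' exact methodology.

```python
import glob
import xml.etree.ElementTree as ET
from collections import Counter

def strip_ns(tag):
    return tag.split("}")[-1]          # drop XML namespaces for readability

def patterns_in(root):
    """Yield (parent, child) and (parent, child, grandchild) widget combinations."""
    for parent in root.iter():
        for child in parent:
            yield (strip_ns(parent.tag), strip_ns(child.tag))
            for grandchild in child:
                yield (strip_ns(parent.tag), strip_ns(child.tag), strip_ns(grandchild.tag))

counts = Counter()
for path in glob.glob("apk_dumps/*/res/layout/*.xml"):   # assumed directory of decompiled apps
    counts.update(patterns_in(ET.parse(path).getroot()))

total = sum(counts.values()) or 1
for pattern, n in counts.most_common(10):
    print(f"{' > '.join(pattern):50s} {100 * n / total:5.2f}%")
```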
Toward crowdsourcing micro-level behavior annotations: the challenges of interface, training, and generalization
Research that involves human behavior analysis usually requires laborious and costly efforts for obtaining micro-level behavior annotations on a large video corpus. With the emerging paradigm of crowdsourcing however, these efforts can be considerably reduced. We first present OCTAB (Online Crowdsourcing Tool for Annotations of Behaviors), a web-based annotation tool that allows precise and convenient behavior annotations in videos, directly portable to popular crowdsourcing platforms. As part of OCTAB, we introduce a training module with specialized visualizations. The training module's design was inspired by an observational study of local experienced coders, and it enables an iterative procedure for effectively training crowd workers online. Finally, we present an extensive set of experiments that evaluates the feasibility of our crowdsourcing approach for obtaining micro-level behavior annotations in videos, showing the reliability improvement in annotation accuracy when properly training online crowd workers. We also show the generalization of our training approach to a new independent video corpus.
Adrenergic activity and peripheral hemodynamics in relation to sodium sensitivity in patients with essential hypertension.
In 25 outpatients with essential hypertension, sodium sensitivity, defined as the difference in mean arterial pressure (delta MAP) between 2 weeks of high-sodium (HS) intake (300 mmol per day) and 2 weeks of low-sodium (LS) intake (50-100 mmol per day), was studied in relation to the plasma norepinephrine (NE) level, NE release, and pressor response to intravenous NE. In addition, forearm blood flow (FBF) was measured by plethysmography. There were two control periods of regular sodium intake, one of 4 weeks' duration at the beginning of the study and one of 2 weeks' duration at the end. The delta MAP ranged from +18 to -8 mm Hg. The eight patients in whom delta MAP was greater than 10 mm Hg were regarded as salt-sensitive. When compared with salt-insensitive subjects, salt-sensitive patients had higher plasma NE levels in the control period (p less than 0.05) and after 2 weeks of HS intake (p less than 0.01). Sodium sensitivity was directly related to the change in plasma NE between the HS and LS periods (p less than 0.001). The NE release decreased in salt-insensitive subjects whereas it increased in salt-sensitive patients between the LS and HS periods. Changes in NE release were directly related to sodium sensitivity (p less than 0.05). The pressor response to NE was not significantly influenced by changes in sodium intake. The FBF fell in salt-sensitive patients and increased in salt-insensitive subjects between the LS and HS periods. Sodium sensitivity was directly related to the change in forearm vascular resistance (p less than 0.01). (ABSTRACT TRUNCATED AT 250 WORDS)
TOPOMON: A Monitoring Tool for Grid Network Topology
In Grid environments, high-performance applications have to take into account the available network performance between the individual sites. Existing monitoring tools like the Network Weather Service (NWS) measure bandwidth and latency of end-to-end network paths. This information is necessary but not sufficient. With more than two participating sites, simultaneous transmissions may collide with each other on shared links of the wide-area network. If this occurs, applications may obtain lower network performance than predicted by NWS. In this paper, we describe TopoMon, a monitoring tool for Grid networks that augments NWS with additional sensors for the routes between the sites of a Grid environment. Our tool conforms to the Grid Monitoring Architecture (GMA) defined by the Global Grid Forum. It unites NWS performance and topology discovery in a single monitoring architecture. Our topology consumer process collects route information between the sites of a Grid environment and derives the overall topology for utilization by application programs and communication libraries. The topology can also be visualized for Grid application developers.
BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights
We propose BinaryRelax, a simple two-phase algorithm for training deep neural networks with quantized weights. The set constraint that characterizes the quantization of weights is not imposed until the late stage of training, and a sequence of pseudo quantized weights is maintained. Specifically, we relax the hard constraint into a continuous regularizer via the Moreau envelope, which turns out to be the squared Euclidean distance to the set of quantized weights. The pseudo quantized weights are obtained by linearly interpolating between the float weights and their quantizations. A continuation strategy is adopted to push the weights towards the quantized state by gradually increasing the regularization parameter. In the second phase, an exact quantization scheme with a small learning rate is invoked to guarantee fully quantized weights. We test BinaryRelax on the benchmark CIFAR and ImageNet color image datasets to demonstrate the superiority of the relaxed quantization approach and the improved accuracy over the state-of-the-art training methods. Finally, we prove the convergence of BinaryRelax under an approximate orthogonality condition.
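A minimal sketch of the relaxation on a toy regression problem (the binary quantizer, schedule, and learning rate are assumptions; this is not the authors' reference implementation). The key step is that the proximal map of the Moreau-envelope regularizer linearly interpolates between the float weights and their quantization, and the interpolation is pushed toward the quantized point as the regularization parameter grows.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(512, 64)), rng.normal(size=512)   # a toy regression task
w = rng.normal(size=64)

def quantize(v):
    """Binary quantization: sign(v) scaled by the mean magnitude (one scale per tensor)."""
    return np.sign(v) * np.abs(v).mean()

def relaxed(v, lam):
    """Prox of (lam/2)*dist(v, quantized set)^2: interpolate float weights and their quantization."""
    return (v + lam * quantize(v)) / (1.0 + lam)

# Phase 1: gradient steps on the loss evaluated at the pseudo-quantized weights,
# with the regularization parameter lam grown by a continuation schedule.
lam, lr = 1.0, 1e-3
for step in range(2000):
    w_hat = relaxed(w, lam)
    grad = 2 * X.T @ (X @ w_hat - y) / len(y)   # gradient at the pseudo weights, applied to the float weights
    w -= lr * grad
    lam *= 1.01

# Phase 2: exact quantization (a small learning rate would be used to settle the quantized weights).
w = quantize(w)
print("loss with fully quantized weights:", np.mean((X @ w - y) ** 2))
```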
Copper-Graphene-Based Photonic Crystal Fiber Plasmonic Biosensor
We propose a photonic crystal fiber surface plasmon resonance biosensor where the plasmonic metal layer and the sensing layer are placed outside the fiber structure, which makes the sensor configuration practically simpler and the sensing process more straightforward. Considering the long-term stability of the plasmonic performance, copper (Cu) is used as the plasmonic material, and graphene is used to prevent Cu oxidation and enhance sensing performance. Numerical investigation of guiding properties and sensing performance is performed by using a finite-element method. The proposed sensor shows an average wavelength interrogation sensitivity of 2000 nm/refractive index unit (RIU) over analyte refractive indices ranging from 1.33 to 1.37, which leads to a sensor resolution of 5 × 10⁻⁵ RIU. Due to the simple structure and promising results, the proposed sensor could be a potential candidate for detecting biomolecules, organic chemicals, and other analytes.
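For reference, the quoted resolution follows from the quoted sensitivity via the standard relation, assuming the conventional minimum detectable wavelength shift of 0.1 nm:

\[
R \;=\; \frac{\Delta\lambda_{\min}}{S_\lambda} \;=\; \frac{0.1\ \text{nm}}{2000\ \text{nm/RIU}} \;=\; 5 \times 10^{-5}\ \text{RIU}.
\]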
Replacing a Swiss ball for an exercise bench causes variable changes in trunk muscle activity during upper limb strength exercises
BACKGROUND The addition of Swiss balls to conventional exercise programs has recently been adopted. Swiss balls are an unstable surface, which may result in an increased need for force output from trunk muscles to provide adequate spinal stability or balance. The aim of the study was to determine whether the addition of a Swiss ball to upper body strength exercises results in consistent increases in trunk muscle activation levels. METHODS The myoelectric activity of four trunk muscles was quantified during the performance of upper body resistance exercises while seated on both a stable (exercise bench) and labile (Swiss ball) surface. Participants performed the supine chest press, shoulder press, lateral raise, biceps curl and overhead triceps extension. A repeated measures ANOVA with post-hoc Tukey test was used to determine the influence of seated surface type on muscle activity for each muscle. RESULTS & DISCUSSION There was no statistically significant (p < .05) difference in muscle activity between surface conditions. However, there was a large degree of variability across subjects, suggesting that some individuals respond differently to surface stability. These findings suggest that the incorporation of Swiss balls instead of an exercise bench into upper body strength training regimes may not be justified based only on the belief that an increase in spinal stabilizing musculature activity is inherent. Biomechanically justified ground-based exercises have been researched and should form the basis for spinal stability training in preventative and therapeutic exercise training regimes. CONCLUSION Selected trunk muscle activity during certain upper limb strength training exercises is not consistently influenced by the replacement of an exercise bench with a Swiss ball.
Stereo Vision in Structured Environments by Consistent Semi-Global Matching
This paper considers the use of stereo vision in structured environments. Sharp discontinuities and large untextured areas must be anticipated, but complex or natural shapes of objects and fine structures should be handled as well. Additionally, radiometric differences of input images often occur in practice. Finally, computation time is an issue for handling large or many images in acceptable time. The Semi-Global Matching method is chosen as it fulfills already many of the requirements. Remaining problems in structured environments are carefully analyzed and two novel extensions suggested. Firstly, intensity consistent disparity selection is proposed for handling untextured areas. Secondly, discontinuity preserving interpolation is suggested for filling holes in the disparity images that are caused by some filters. It is shown that the performance of the new method on test images with ground truth is comparable to the currently best stereo methods, but the complexity and runtime is much lower.
Fast Marginal Likelihood Maximisation for Sparse Bayesian Models
The ‘sparse Bayesian’ modelling approach, as exemplified by the ‘relevance vector machine’, enables sparse classification and regression functions to be obtained by linearly-weighting a small number of fixed basis functions from a large dictionary of potential candidates. Such a model conveys a number of advantages over the related and very popular ‘support vector machine’, but the necessary ‘training’ procedure — optimisation of the marginal likelihood function — is typically much slower. We describe a new and highly accelerated algorithm which exploits recently-elucidated properties of the marginal likelihood function to enable maximisation via a principled and efficient sequential addition and deletion of candidate basis functions.
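As I understand the "recently-elucidated properties" being exploited, the marginal likelihood can be maximised with respect to a single hyperparameter α_i in closed form using per-basis "sparsity" and "quality" factors, which yields the sequential add / re-estimate / delete rule (hedged summary; notation may differ slightly from the paper):

\[
s_i = \boldsymbol{\phi}_i^{\top} \mathbf{C}_{-i}^{-1} \boldsymbol{\phi}_i,
\qquad
q_i = \boldsymbol{\phi}_i^{\top} \mathbf{C}_{-i}^{-1} \mathbf{t},
\qquad
\alpha_i^{\mathrm{new}} =
\begin{cases}
\dfrac{s_i^{2}}{q_i^{2} - s_i} & \text{if } q_i^{2} > s_i \quad (\text{add or re-estimate } \boldsymbol{\phi}_i),\\[1.5ex]
\infty & \text{otherwise} \quad (\text{delete } \boldsymbol{\phi}_i),
\end{cases}
\]

where \(\mathbf{C}_{-i}\) is the covariance of the marginal likelihood with basis function \(\boldsymbol{\phi}_i\) removed and \(\mathbf{t}\) is the target vector.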
BigStation: enabling scalable real-time signal processing in large MU-MIMO systems
Multi-user multiple-input multiple-output (MU-MIMO) is the latest communication technology that promises to linearly increase the wireless capacity by deploying more antennas on access points (APs). However, the large number of MIMO antennas will generate a huge amount of digital signal samples in real time. This imposes a grand challenge on the AP design by multiplying the computation and the I/O requirements to process the digital samples. This paper presents BigStation, a scalable architecture that enables realtime signal processing in large-scale MIMO systems which may have tens or hundreds of antennas. Our strategy to scale is to extensively parallelize the MU-MIMO processing on many simple and low-cost commodity computing devices. Our design can incrementally support more antennas by proportionally adding more computing devices. To reduce the overall processing latency, which is a critical constraint for wireless communication, we parallelize the MU-MIMO processing with a distributed pipeline based on its computation and communication patterns. At each stage of the pipeline, we further use data partitioning and computation partitioning to increase the processing speed. As a proof of concept, we have built a BigStation prototype based on commodity PC servers and standard Ethernet switches. Our prototype employs 15 PC servers and can support real-time processing of 12 software radio antennas. Our results show that the BigStation architecture is able to scale to tens to hundreds of antennas. With 12 antennas, our BigStation prototype can increase wireless capacity by 6.8x with a low mean processing delay of 860μs. While this latency is not yet low enough for the 802.11 MAC, it already satisfies the real-time requirements of many existing wireless standards, e.g., LTE and WCDMA.
Meta Comments for Summarizing Meeting Speech
This paper is about the extractive summarization of meeting speech, using the ICSI and AMI corpora. In the first set of experiments we use prosodic, lexical, structural and speaker-related features to select the most informative dialogue acts from each meeting, with the hypothesis being that such a rich mixture of features will yield the best results. In the second part, we present an approach in which the identification of “meta-comments” is used to create more informative summaries that provide an increased level of abstraction. We find that the inclusion of these meta comments improves summarization performance according to several evaluation metrics.
Ending Violence against Women and Girls – Protecting Human
Anette Funk is a political scientist specialising in gender and women's rights issues. She worked for several years for a German NGO which provides support to female victims of trafficking and violence. Anette currently works with the project "Strengthening Women's Rights" at GTZ. Her work focuses mainly on issues concerning gender-based violence and women's political participation. James L. Lang is a consultant and trainer working on issues of poverty, gender equality and ending gender-based violence. James has worked with numerous development organisations including UNDP, Oxfam GB, United Nations INSTRAW and GTZ. He is the author of a number of publications on gender-based violence and men as partners in gender equality. He is currently working for UNDP in Southeast Asia. Juliane Osterhaus has a degree in sociology and completed her postgraduate studies at the German Development Institute. Since 1990, she has been working in various positions at headquarters and in Africa for GTZ, focusing on gender and human rights issues, participation and socioculture. At present she is directing the two supra-regional projects "Strengthening Women's Rights" and "Realizing Human Rights in German Development Cooperation".
Selective involvement of the goldfish lateral pallium in spatial memory
The involvement of the main pallial subdivisions of the teleost telencephalic pallium in spatial cognition was evaluated in a series of three experiments. The first two compared the effects of lesions selective to the lateral (LP), medial (MP) and dorsal (DP) telencephalic pallium of goldfish on the retention and the reversal learning of a spatial constancy task which requires the use of allocentric or relational strategies. The results showed that LP lesions produced a selective impairment of the capability of goldfish to solve the spatial task previously learned and of the reversal learning of the same procedure, whereas MP and DP lesions did not produce observable deficits. The third experiment evaluated, by means of the AgNOR stain, learning-dependent changes of the neuronal transcription activity in the pallium of goldfish trained in the spatial constancy task or in a cue version of the same procedure, which differed only in their spatial cognition demands. The results revealed that training in the spatial task produced an increment in transcription activity which was selective to the neurons of the ventral lateral pallium, as indicated by increases in the size of the nucleolar organizing region (NOR), the nucleolar organelles associated with the synthesis of ribosomal proteins. In contrast, training in the cue version did not produce observable changes. These data, revealing a striking functional similarity between the lateral telencephalic pallium of the teleost fish and the amniote hippocampus, provide additional evidence regarding the homology of both structures.
Detecting fraudulent users using behaviour analysis
With the increased global use of online media platforms, there are more opportunities than ever to misuse those platforms or perpetrate fraud. One such fraud occurs within the music industry, where perpetrators create automated programs that stream songs to generate revenue or increase the popularity of an artist. With the growing annual revenue of the digital music industry, there are significant financial incentives for perpetrators with fraud in mind. The focus of the study is extracting user behavioral patterns and utilising them to train and compare multiple supervised classification methods to detect fraud. The machine learning algorithms examined are Logistic Regression, Support Vector Machines, Random Forest and Artificial Neural Networks. The study compares the performance of these algorithms trained on imbalanced datasets carrying different fractions of fraud. The trained models are evaluated using the Precision Recall Area Under the Curve (PR AUC) and an F1-score. Results show that the algorithms achieve similar performance when trained on balanced and imbalanced datasets. They also show that Random Forest outperforms the other methods for all datasets tested in this experiment.
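A minimal sketch of the evaluation protocol described above, with synthetic imbalanced data standing in for the behavioral features and default hyperparameters assumed:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic "user behaviour" features with ~2% fraud, standing in for the real streaming logs.
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.98], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(probability=True),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "mlp": MLPClassifier(max_iter=500),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    scores = m.predict_proba(X_te)[:, 1]
    print(f"{name:15s} PR-AUC={average_precision_score(y_te, scores):.3f} "
          f"F1={f1_score(y_te, m.predict(X_te)):.3f}")
```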
BaRC: Backward Reachability Curriculum for Robotic Reinforcement Learning
Model-free Reinforcement Learning (RL) offers an attractive approach to learn control policies for high-dimensional systems, but its relatively poor sample complexity often necessitates training in simulated environments. Even in simulation, goal-directed tasks whose natural reward function is sparse remain intractable for state-of-the-art model-free algorithms for continuous control. The bottleneck in these tasks is the prohibitive amount of exploration required to obtain a learning signal from the initial state of the system. In this work, we leverage physical priors in the form of an approximate system dynamics model to design a curriculum for a model-free policy optimization algorithm. Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance. BaRC is general, in that it can accelerate training of any model-free RL algorithm on a broad class of goal-directed continuous control MDPs. Its curriculum strategy is physically intuitive, easy to tune, and allows incorporating physical priors to accelerate training without hindering the performance, flexibility, and applicability of the model-free RL algorithm. We evaluate our approach on two representative dynamic robotic learning problems and find substantial performance improvement relative to previous curriculum generation techniques and naïve exploration strategies.
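A minimal sketch of the curriculum mechanics on a toy double-integrator task (my own simplification; the start-state sampling, success test, and expansion rule are assumptions rather than the authors' exact procedure): train from start states one backward step away from the goal, and push the start distribution further backward, consistently with the approximate dynamics, whenever the current policy is good enough.

```python
import numpy as np

dt, goal = 0.1, np.zeros(2)                      # state = (position, velocity)

def step(s, a):
    """Approximate double-integrator dynamics used as the physical prior."""
    pos, vel = s
    return np.array([pos + dt * vel, vel + dt * a])

def backward_sample(starts, n=50):
    """Grow the start set backwards: states that reach the current set in one action."""
    new = []
    for _ in range(n):
        s, a = starts[np.random.randint(len(starts))], np.random.uniform(-1, 1)
        vel_prev = s[1] - dt * a                 # invert the known dynamics one step
        new.append(np.array([s[0] - dt * vel_prev, vel_prev]))
    return starts + new

def success_rate(policy, starts, horizon=100):
    ok = 0
    for s in starts:
        for _ in range(horizon):
            s = step(s, policy(s))
        ok += np.linalg.norm(s - goal) < 0.1
    return ok / len(starts)

policy = lambda s: float(np.clip(-2.0 * s[0] - 2.5 * s[1], -1, 1))   # stand-in for the learned policy
starts = backward_sample([goal])
for stage in range(10):
    # (a model-free RL update of `policy` on episodes drawn from `starts` would go here)
    if success_rate(policy, starts) > 0.8:
        starts = backward_sample(starts)         # expand the curriculum backwards
print("curriculum start states:", len(starts))
```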