Machine Learned Sentence Selection Strategies for Query-Biased Summarization
It has become standard for search engines to augment result lists with document summaries. Each document summary consists of a title, abstract, and a URL. In this work, we focus on the task of selecting relevant sentences for inclusion in the abstract. In particular, we investigate how machine learning-based approaches can effectively be applied to the problem. We analyze and evaluate several learning to rank approaches, such as ranking support vector machines (SVMs), support vector regression (SVR), and gradient boosted decision trees (GBDTs). Our work is the first to evaluate SVR and GBDTs for the sentence selection task. Using standard TREC test collections, we rigorously evaluate various aspects of the sentence selection problem. Our results show that the effectiveness of the machine learning approaches varies across collections with different characteristics. Furthermore, the results show that GBDTs provide a robust and powerful framework for the sentence selection task and significantly outperform SVR and ranking SVMs on several data sets.
New measurement scales for evaluating perceptions of the technology-mediated customer service experience
Service organizations are increasingly utilizing advanced information and communication technologies, such as the Internet, in hopes of improving the efficiency, cost-effectiveness, and/or quality of their customer-facing operations. More of the contact a customer has with the firm is likely to be with the back-office and, therefore, mediated by technology. While previous operations management research has been important for its contributions to our understanding of customer contact in face-to-face settings, considerably less work has been done to improve our understanding of customer contact in what we refer to as technology-mediated settings (e.g., via telephone, instant messaging (IM), or email). This paper builds upon the service operations management (SOM) literature on customer contact by theoretically defining and empirically developing new multi-item measurement scales specifically designed for assessing technology-mediated customer contact. Seminal works on customer contact theory and its empirical measurement are employed to provide a foundation for extending these concepts to technology-mediated contexts. We also draw upon other important frameworks, including the Service Profit Chain, the Theory of Planned Behavior, and the concept of media/information richness, in order to identify and define our constructs. We follow a rigorous empirical scale development process to create parsimonious sets of survey items that exhibit satisfactory levels of reliability and validity to be useful in advancing SOM empirical research in the emerging Internet-enabled back-office. © 2004 Elsevier B.V. All rights reserved.
Untangling Blockchain: A Data Processing View of Blockchain Systems
Blockchain technologies have gained massive momentum in recent years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core technologies have to offer, especially with respect to their data processing capabilities. In this paper, we first survey the state of the art, focusing on private blockchains (in which parties are authenticated). We analyze both in-production and research systems in four dimensions: distributed ledger, cryptography, consensus protocol, and smart contract. We then present BLOCKBENCH, a benchmarking framework for understanding the performance of private blockchains against data processing workloads. We conduct a comprehensive evaluation of three major blockchain systems based on BLOCKBENCH, namely Ethereum, Parity, and Hyperledger Fabric. The results demonstrate several trade-offs in the design space, as well as big performance gaps between blockchain and database systems. Drawing from design principles of database systems, we discuss several research directions for bringing blockchain performance closer to the realm of databases.
Addressing Information Security Risks by Adopting Standards
Modern society depends on information technology in nearly every facet of human activity, including finance, transportation, education, government, and defense. Organizations are exposed to various and increasing kinds of risks, including information technology risks. Several standards, best practices, and frameworks have been created to help organizations manage these risks. The purpose of this research work is to highlight the challenges facing enterprises in their efforts to properly manage information security risks when adopting international standards and frameworks. To assist in selecting the best framework to use in risk management, the article presents an overview of the most popular and widely used standards and identifies selection criteria. It suggests an approach to proper implementation as well. A set of recommendations is put forward, with further research opportunities on the subject. Keywords: Information security; risk management; security frameworks; security standards; security management.
A preliminary study of emotional intelligence, empathy and exam performance in first year medical students
A group of 156 first year medical students completed measures of emotional intelligence (EI) and physician empathy, and a scale assessing their feelings about a communications skills course component. Females scored significantly higher than males on EI. Exam performance in the autumn term on a course component (Health and Society) covering general issues in medicine was positively and significantly related to EI score, but there was no association between EI and exam performance later in the year. High EI students reported more positive feelings about the communication skills exercise. Females scored higher than males on the Health and Society component in autumn, spring and summer exams. Structural equation modelling showed direct effects of gender and EI on autumn term exam performance, but no direct effects other than previous exam performance on spring and summer term performance. EI also partially mediated the effect of gender on autumn term exam performance. These findings provide limited evidence for a link between EI and academic performance for this student group. More extensive work on associations between EI, academic success and adjustment throughout medical training would clearly be of interest. © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2005.04.014
Sentential Paraphrasing as Black-Box Machine Translation
We present a simple, prepackaged solution to generating paraphrases of English sentences. We use the Paraphrase Database (PPDB) for monolingual sentence rewriting and provide machine translation language packs: prepackaged, tuned models that can be downloaded and used to generate paraphrases on a standard Unix environment. The language packs can be treated as a black box or customized to specific tasks. In this demonstration, we will explain how to use the included interactive web-based tool to generate sentential paraphrases.
Phase field modeling of heterogeneous microcrystalline ceramics
Diffuse interface models and simulations capture deformation and failure of polycrystalline ceramics with multiple phases. Two heterogeneous ceramic solids are investigated. The first consists of a boron carbide matrix phase embedded with titanium diboride grains. Boron carbide may undergo cleavage fracture, twinning, and amorphization under sufficient mechanical loading. Titanium diboride demonstrates cleavage fracture and limited plastic slip. The second ceramic composite consists of diamond crystals with a smaller fraction of silicon carbide grains, where the latter may encapsulate the former in a micro- or nano-crystalline matrix and/or may be interspersed as larger micro-crystals among the diamond grains. Diamond may undergo cleavage fracture while the cubic phase of silicon carbide may fracture and twin. A general constitutive framework suitable for representing behaviors of all phases of each material system is reported. This phase field framework is implemented in three-dimensional (3D) finite element (FE) simulations of polycrystalline aggregates under compressive loading. Numerical results demonstrate effects of grain and phase morphology and activation or suppression of slip or twinning mechanisms on the overall strength and ductility of each material system. Efforts to inhibit localization and cleavage in boron carbide crystals, including addition of titanium diboride, promote increases in strength of the first composite, though residual stresses may be necessary to realize toughness improvements observed experimentally. In the second composite, layers of softer silicon carbide nanocrystals along grain boundaries (GBs) improve overall strength and ductility relative to addition of larger bulk silicon carbide grains.
The problems and solutions of network update in SDN: A survey
Networks are dynamic and require updates during operation. However, carelessly scheduled updates can cause many inconsistencies and problems. Although the problem has long been recognized in traditional networks, software defined networking (SDN) brings new opportunities and solutions through the separation of the control and data planes, as well as centralized control. This paper surveys the problems caused by network updates, including forwarding loops, forwarding black holes, link congestion, and network policy violations, as well as the state-of-the-art solutions to these problems in the SDN paradigm.
5G Millimeter-Wave Antenna Array: Design and Challenges
As there has been an explosive increase in wireless data traffic, millimeter-wave (mmWave) communication has become one of the most attractive techniques in 5G mobile communications systems. Although mmWave communication systems have been successfully applied to indoor scenarios, various external factors in an outdoor environment limit the applications of mobile communication systems working at the mmWave bands. In this article, we discuss the issues involved in the design of antenna array architecture for future 5G mmWave systems, in which the antenna elements can be deployed in the shapes of a cross, circle, or hexagon, in addition to the conventional rectangle. The simulation results indicate that while there always exists a non-trivial gain fluctuation in other regular antenna arrays, the circular antenna array has a flat gain in the main lobe of the radiation pattern with varying angles. This makes the circular antenna array more robust to angle variations that frequently occur due to antenna vibration in an outdoor environment. In addition, in order to guarantee effective coverage of mmWave communication systems, possible solutions such as distributed antenna systems and cooperative multi-hop relaying are discussed, together with the design of mmWave antenna arrays. Furthermore, other challenges for the implementation of mmWave cellular networks, for example, blockage, communication security, and hardware development, are discussed, as are potential solutions.
An Effectiveness Study on Trajectory Similarity Measures
The last decade has witnessed the prevalence of sensor and GPS technologies that produce a sheer volume of trajectory data representing the motion history of moving objects. Measuring similarity between trajectories is undoubtedly one of the most important tasks in trajectory data management since it serves as the foundation of many advanced analyses such as similarity search, clustering, and classification. In this light, tremendous efforts have been spent on this topic, resulting in a large number of trajectory similarity measures. Generally, each individual work introducing a new distance measure has made specific claims about the superiority of its proposal. However, for most works, the experimental study was focused on demonstrating the efficiency of the search algorithms, leaving the effectiveness aspect unverified empirically. In this paper, we conduct a comparative experimental study on the effectiveness of six widely used trajectory similarity measures based on a real taxi trajectory dataset. By applying a variety of transformations that we designed to each original trajectory, our experimental observations demonstrate the advantages and drawbacks of these similarity measures in different circumstances.
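As a concrete illustration of the kind of measure such an effectiveness study compares, here is a minimal pure-Python sketch of dynamic time warping (DTW), one of the classic trajectory similarity measures. The function name and the (x, y)-tuple trajectory format are our own choices for the example, not taken from the abstract; real evaluations would use optimized implementations.

```python
# Minimal DTW sketch for 2-D trajectories given as lists of (x, y) points.
# Pure Python, O(n*m) time and space.

import math

def dtw(traj_a, traj_b):
    """DTW distance between two trajectories (lists of (x, y) tuples)."""
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    # dp[i][j]: cost of the best warping path aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(traj_a[i - 1], traj_b[j - 1])
            # A point may match, or either trajectory may "wait" (repeat).
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]
```

Because DTW aligns points non-linearly in time, it returns zero for a trajectory and a copy with duplicated samples, which is exactly the kind of robustness-to-transformation property an effectiveness study of this sort probes.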
Sum-product networks: A new deep architecture
The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.
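To make the tractability claim above concrete, here is a minimal sketch of an SPN over two binary variables; the structure and weights are invented for illustration, not taken from the abstract. Because the network is complete and decomposable, the partition function and any marginal are computed in a single bottom-up pass, with marginalized-out variables simply left unassigned so their leaves evaluate to 1.

```python
# Toy sum-product network: sum nodes mix weighted children, product nodes
# multiply children over disjoint variable scopes, leaves are indicators.

class Leaf:
    """Indicator leaf over a binary variable."""
    def __init__(self, var, value):
        self.var, self.value = var, value
    def eval(self, assignment):
        # 1.0 if the variable matches, or is absent (marginalized out).
        v = assignment.get(self.var)
        return 1.0 if v is None or v == self.value else 0.0

class Product:
    def __init__(self, children):
        self.children = children
    def eval(self, assignment):
        p = 1.0
        for c in self.children:
            p *= c.eval(assignment)
        return p

class Sum:
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, node)
    def eval(self, assignment):
        return sum(w * c.eval(assignment) for w, c in self.weighted_children)

# A two-component mixture over binary variables x1, x2 (weights sum to 1):
x1t, x1f = Leaf("x1", 1), Leaf("x1", 0)
x2t, x2f = Leaf("x2", 1), Leaf("x2", 0)
spn = Sum([
    (0.7, Product([Sum([(0.8, x1t), (0.2, x1f)]),
                   Sum([(0.6, x2t), (0.4, x2f)])])),
    (0.3, Product([Sum([(0.1, x1t), (0.9, x1f)]),
                   Sum([(0.5, x2t), (0.5, x2f)])])),
])

# Marginal P(x1=1): one bottom-up pass, x2 left out of the assignment.
p_x1 = spn.eval({"x1": 1})
```

The same pass that evaluates a full joint assignment evaluates any marginal, which is what makes inference linear in the network size rather than exponential in the number of variables.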
The Impact of Language Anxiety and Language Proficiency on WTC in EFL Context
Spontaneous and sustained use of the L2 inside and outside the classroom varies according to a number of linguistic, communicative, social, and psychological factors. Authentic communication in the L2, as the result of a complex system of interrelated variables, occurs through a variety of communicative acts, such as speaking up in class or reading a newspaper, and changes over time and across situations. By helping students to decrease language anxiety and increase their willingness to use the L2 inside and outside the classroom, we shift the focus of language teaching away from merely linguistic and structural competence toward authentic communication. The willingness to communicate (WTC) model integrates these variables to predict L2 communication, and few studies have tested the model with EFL students. To this end, the current study examines Iranian EFL university students' WTC and its interaction with their language anxiety and language proficiency. Forty-nine university students participated in this study; they first took the TOEFL and then completed two questionnaires: the WTC scale of MacIntyre, Baker, Clement, and Conrod (2001) and the language anxiety scale of Horwitz, Horwitz, and Cope (1986). For data analysis, repeated measures ANOVA and Spearman correlation were run. The results revealed that Iranian university students' WTC is directly related to their language proficiency, but, surprisingly, more proficient learners were less communicative than less proficient ones outside the classroom, supporting the state-like nature of WTC in the present sample. Moreover, the interaction between WTC and anxiety was not significant, indicating that anxiety did not affect the learners' participation in communication (WTC).
Finally, anxiety and language proficiency are negatively correlated, confirming the association between language learning experience and L2 anxiety in the results of this study. Linguistic variables therefore appear to be more predictive of WTC for Iranian learners, and language instructors should work on their students' English proficiency.
AFQ056 treatment of levodopa-induced dyskinesias: results of 2 randomized controlled trials.
Study objectives were to assess the efficacy, safety, and tolerability of AFQ056 in Parkinson's disease patients with levodopa-induced dyskinesia. Two randomized, double-blind, placebo-controlled, parallel-group, in-patient studies for Parkinson's disease patients with moderate to severe levodopa-induced dyskinesia (study 1) and severe levodopa-induced dyskinesia (study 2) on stable dopaminergic therapy were performed. Patients received 25-150 mg AFQ056 or placebo twice daily for 16 days (both studies). Study 2 included a 4-day down-titration. Primary outcomes were the Lang-Fahn Activities of Daily Living Dyskinesia Scale (study 1), the modified Abnormal Involuntary Movement Scale (study 2), and the Unified Parkinson's Disease Rating Scale-part III (both studies). Secondary outcomes included the Unified Parkinson's Disease Rating Scale-part IV items 32-33. The primary analysis was change from baseline to day 16 on all outcomes. Treatment differences were assessed. Fifteen patients were randomized to AFQ056 and 16 to placebo in study 1; 14 patients were randomized to each group in study 2. AFQ056-treated patients showed significant improvements in dyskinesias on day 16 versus placebo (eg, Lang-Fahn Activities of Daily Living Dyskinesia Scale, P = .021 [study 1]; modified Abnormal Involuntary Movement Scale, P = .032 [study 2]). No significant changes were seen from baseline on day 16 on the Unified Parkinson's Disease Rating Scale-part III in either study. Adverse events were reported in both studies, including dizziness. Serious adverse events (most commonly worsening of dyskinesias, apparently associated with stopping treatment) were reported by 4 AFQ056-treated patients in study 1, and by 3 patients (2 AFQ056-treated patients and 1 placebo-treated patient) in study 2. AFQ056 showed a clinically relevant and significant antidyskinetic effect without changing the antiparkinsonian effects of dopaminergic therapy. © 2011 Movement Disorder Society.
One month treatment with the once daily oral beta 2-agonist bambuterol in asthmatic patients.
Bambuterol is a new long-acting oral bronchodilator for once daily use in patients with asthma. It is a prodrug of terbutaline, designed to be slowly metabolized to terbutaline. Results from comparative studies have shown that it has similar clinical efficacy to other oral bronchodilators, but fewer side-effects. The present study was aimed at verifying the 24 h effect duration of bambuterol, 10 and 20 mg, in comparison with placebo during a one month treatment period. The study was conducted as a double-blind, randomized, parallel-group, placebo-controlled, multicentre trial. It started with a one week run-in period with placebo, when oral bronchodilators were withdrawn. At the end of this reference period, the patients were randomized to one of three treatments: placebo, bambuterol 10 mg, or bambuterol 20 mg, once daily in the evening. The treatment period lasted for 4 weeks. Four hundred and eighty-seven patients with a mean age of 45 yrs were included. Mean baseline forced expiratory volume in one second (FEV1) and FEV1% of predicted were 2.05 l and 62%, respectively. Administration of 10 mg bambuterol resulted in a significant 24 h effect duration, expressed as an increase in mean daily morning and evening peak expiratory flow (PEF) (+11 l.min-1, adjusted means) throughout the study, as compared with placebo. Bambuterol, 20 mg, gave a significant 24 h effect duration in both FEV1 and morning and evening PEF as compared with placebo. Furthermore, the adverse events observed during the study were relatively few and mild. (ABSTRACT TRUNCATED AT 250 WORDS)
A comparison of 18 different x-ray detectors currently used in dentistry.
PURPOSE There has been a proliferation of available dental x-ray detectors in recent years. The purpose of this short technical report is to provide a basic comparison of spatial resolution, contrast perceptibility, and relative exposure latitudes of 18 current dental x-ray detectors, including solid-state systems (CCD and CMOS), photostimulable phosphors, and analog film. METHODS Spatial resolution was measured using a 0.025 mm Pb phantom test grid with a measurement range from 1.5 to 20 lp/mm. For contrast perceptibility, a 7-mm thick aluminum perceptibility test device with wells of 0.1-0.9 mm depth at 0.1 mm intervals and 1 defect of 1.5 mm was used. The relative exposure latitude was determined by expert consensus using clear discrimination of the enamel-dentin junction as the lower limit and pixel blooming or unacceptable levels of cervical burnout as the upper limit. RESULTS The highest spatial resolution was found with Kodak InSight film, RVG-ui (CCD), and RVG 6000 (CMOS) detectors, each of which achieved 20 lp/mm, followed by the Planmeca Dixi2 v3 at ≥16 lp/mm. Contrast resolution was at least 0.2 mm through 7 mm aluminum for all 18 detectors, with the best results being found for the Visualix HDI, RVG-ui, RVG 5000, and RVG 6000 detectors and the Schick CDR wired and wireless systems. The greatest exposure ranges were found with photostimulable phosphors and the Kodak RVG 6000 and RVG 5000 detectors. CONCLUSIONS Most current x-ray detectors generally perform well in terms of spatial and contrast resolutions, and in terms of exposure latitude. These findings were independent of the modality applied.
The mutual information in random linear estimation
We consider the estimation of a signal from the knowledge of its noisy linear random Gaussian projections, a problem relevant to compressed sensing, sparse superposition codes, and code division multiple access, to cite just a few applications. There have been a number of works considering the mutual information for this problem using the heuristic replica method from statistical physics. Here we put these considerations on a firm rigorous basis. First, we show, using a Guerra-type interpolation, that the replica formula yields an upper bound on the exact mutual information. Secondly, for many relevant practical cases, we present a converse lower bound via a method that uses spatial coupling, state evolution analysis and the I-MMSE theorem. This yields, in particular, a single-letter formula for the mutual information and the minimum mean-square error for random Gaussian linear estimation of all discrete bounded signals.
Seismic techniques to delineate dissolution features in the upper 1000 ft at a power plant site
Shallow seismic techniques (Steeples and Miller, 1990; Park et al., 1999; Xia et al., in press) were used to enhance the effectiveness of a drilling program designed to locate dissolution features large enough to put the integrity of equipment or environment at Alabama Electric Cooperative’s proposed Damascus site at risk (“A” on Figure 1). Dissolution features will directly impact engineering design specifications and future plant safety. This applied research program identified acoustic characteristics unique to voids, subsurface subsidence, and/or karst features; evaluated the potential of acoustic methods to enhance drill-assisted mapping of major stratigraphic units and structural features; identified the maximum and minimum depths of seismic investigation; estimated resolution potential (vertical and horizontal features detectable and resolvable); defined optimum geometries and equipment; and incorporated production 2.5-D reflection and shear wave profiles with exploratory drilling. The shear wave velocity field provided valuable information about areas that might be at risk of subsiding. From that feasibility survey, areas studied with “young” sinkholes, directly tied to karst features, produced pronounced velocity inversions in close proximity to large velocity gradients (generally forming a closure on contoured cross-sections). These anomalous areas were interpreted to indicate increased stress associated with roof rock loading over rubble zones or void areas. Data from both the reflection and shear wave profiles from the feasibility and 2.5-D surveys possessed several unique features that are probably related to dissolution and subsidence.
On Formalizing the UML Object Constraint Language OCL
We present a formal semantics for the Object Constraint Language (OCL), which is part of the Unified Modeling Language (UML), an emerging standard language and notation for object-oriented analysis and design. In the context of information systems modeling, UML class diagrams can be utilized for describing the overall structure, whereas additional integrity constraints and queries are specified with OCL expressions. By using OCL, constraints and queries can be specified in a formal yet comprehensible way. However, the OCL itself is currently defined only in a semi-formal way. Thus the semantics of constraints is in general not precisely defined. Our approach gives precise meaning to OCL concepts and to some central aspects of UML class models. A formal semantics facilitates verification, validation and simulation of models and helps to improve the quality of models and software designs.
Perceived discrimination and stigma toward children affected by HIV/AIDS and their HIV-positive caregivers in central Haiti.
In many settings worldwide, HIV-positive individuals have experienced a significant level of stigma and discrimination. This discrimination may also impact other family members affected by the disease, including children. The aim of our study was to identify factors associated with stigma and/or discrimination among HIV-affected youth and their HIV-positive caregivers in central Haiti. Recruitment of HIV-positive patients with children aged 10-17 years was conducted in 2006-2007. Data on HIV-related stigma and/or discrimination were based on interviews with 451 youth and 292 caregivers. Thirty-two percent of caregivers reported that children were discriminated against because of HIV/AIDS. Commune of residence was associated with discrimination against children affected by HIV/AIDS and HIV-related stigma among HIV-positive caregivers, suggesting variability across communities. Multivariable regression models showed that lacking social support, being an orphan, and caregiver HIV-related stigma were associated with discrimination in HIV-affected children. Caregiver HIV-related stigma demonstrated a strong association with depressive symptoms. The results could inform strategies for potential interventions to reduce HIV-related stigma and discrimination. These may include increasing social and caregiver support of children affected by HIV, enhancing support of caregivers to reduce burden of depressive symptoms, and promoting reduction of HIV-related stigma and discrimination at the community-level.
Integrated system identification and state-of-charge estimation of battery systems
Accurate estimation of the state of charge (SOC) in battery systems is of essential importance for battery system management. Due to nonlinearity, the high sensitivity of the inverse mapping from external measurements, and measurement errors, SOC estimation has remained a challenging task. This is further compounded by the fact that battery characteristic model parameters change with time and operating conditions. This paper introduces an adaptive nonlinear observer design that compensates for nonlinearity and achieves better estimation accuracy. A two-time-scale signal processing method is employed to attenuate the effects of measurement noise on SOC estimates. The results are further expanded to derive an integrated algorithm that jointly identifies model parameters and the initial SOC. Simulations are performed to illustrate the capability and utility of the algorithms, and experimental verifications are conducted on Li-ion battery packs of different capacities under different load profiles.
Streptococcus milleri group: Renewed interest in an elusive pathogen
The following review examines the bacteriological characteristics, epidemiology, pathogenicity and antimicrobial susceptibility of the “Streptococcus milleri group”. “Streptococcus milleri group” is a term for a large group of streptococci which includes Streptococcus intermedius, Streptococcus constellatus and Streptococcus anginosus. Usually considered commensals, these organisms are often associated with various pyogenic infections including cardiac, abdominal, skin and central nervous system infections. Organisms of the “Streptococcus milleri group” are often unrecognized pathogens due to the lack of uniformity in classifications and difficulties in microbiological identification. Penicillin G, cephalosporins, clindamycin and vancomycin all possess activity against these streptococci. Use of agents with poor activity may promote infections with the “Streptococcus milleri group” and allow it to exhibit its pathogenicity. An understanding of these organisms may aid in their recognition and proper treatment.
Treatment considerations for a patient with hypohidrotic ectodermal dysplasia: a case report.
AIM This clinical report describes the oral rehabilitation of a 6-year-old male ectodermal dysplasia (ED) patient diagnosed with hypodontia. BACKGROUND ED is a hereditary disease characterized by a congenital dysplasia of one or more ectodermal structures and their accessory appendages. Common manifestations include defective hair follicles and eyebrows, frontal bossing with prominent supraorbital ridges, nasal bridge depression, and protuberant lips. Intraorally, the most common findings are anodontia or hypodontia, conical teeth, and generalized spacing. The patient may suffer from dry skin, hyperthermia, and unexplained high fever as a result of a deficiency of sweat glands. REPORT The patient was a six-year-old boy who exhibited many of the manifestations of ED, as well as behavioral problems and a severe gag reflex. The treatment was designed to improve his appearance and oral functions and included the fabrication of several removable prostheses and acid-etched composite resin restorations during his growth and development. SUMMARY Young patients with ED need to be evaluated early by a dental professional to determine the oral ramifications of the condition. When indicated, appropriate care needs to be rendered throughout the child's growth cycle to maintain oral functions as well as to address the esthetic needs of the patient. This clinical report demonstrates that removable partial dentures associated with direct composite restorations can be a reversible and inexpensive method of treatment for young ED patients.
Unconstrained Multimodal Multi-Label Learning
Multimodal learning has been mostly studied by assuming that multiple label assignments are independent of each other and all the modalities are available. In this paper, we consider a more general problem where the labels contain dependency relationships and some modalities are likely to be missing. To this end, we propose a multi-label conditional restricted Boltzmann machine (ML-CRBM), which handles modality completion, fusion, and multi-label prediction in a unified framework. The proposed model is able to generate missing modalities based on observed ones, by explicitly modelling and sampling their conditional distributions. After that, it can discriminatively fuse multiple modalities to obtain shared representations under the supervision of class labels. To consider the co-occurrence of the labels, the proposed model formulates the multi-label prediction as a max-margin-based multi-task learning problem. Model parameters can be jointly learned by seeking a balance between being generative for modality generation and being discriminative for label prediction. We perform a series of experiments in terms of classification, visualization, and retrieval, and the experimental results clearly demonstrate the effectiveness of our method.
Efficient and Enhanced Multi-Target Tracking with Doppler Measurements
In many radar and sonar tracking systems, the target state typically includes target position and velocity components that are estimated from a time sequence of target position and Doppler measurements. The use of measured Doppler information directly in the target trajectory estimation leads to a nonlinear filter implementation such as the extended Kalman filter (EKF), particle filter, etc. We investigate a method for including Doppler measurements as part of the data association process and then assess the benefits of this approach. It is well understood that data association performance can dominate the total performance of a tracker that is designed to track targets in the presence of clutter. In this case, the Doppler component of the measurements may be used in combination with the target position measurement as an additional discriminant of measurement origin. We have developed a simple but efficient Doppler data association (DDA) method which utilises both position and Doppler measurements for single and multi-target tracking. If the Doppler measurements are not used in trajectory state estimation, then the nonlinear filters for the incorporation of Doppler measurements are not required; however, a significant improvement in tracking performance is still observed. The proposed DDA method is demonstrated using both the linear multi-target integrated probabilistic data association algorithm (LMIPDA) and the linear multi-target integrated track splitting algorithm (LMITS) in an active sonar underwater multi-target tracking scenario.
The 3D Model Acquisition Pipeline
Three-dimensional (3D) image acquisition systems are rapidly becoming more affordable, especially systems based on commodity electronic cameras. At the same time, personal computers with graphics hardware capable of displaying complex 3D models are also becoming inexpensive enough to be available to a large population. As a result, there is potentially an opportunity to consider new virtual reality applications as diverse as cultural heritage and retail sales that will allow people to view realistic 3D objects on home computers. Although there are many physical techniques for acquiring 3D data—including laser scanners, structured light and time-of-flight—there is a basic pipeline of operations for taking the acquired data and producing a usable numerical model. We look at the fundamental problems of range image registration, line-of-sight errors, mesh integration, surface detail and color, and texture mapping. In the area of registration we consider both the problems of finding an initial global alignment using manual and automatic means, and refining this alignment with variations of the Iterative Closest Point methods. To account for scanner line-of-sight errors we compare several averaging approaches. In the area of mesh integration, that is finding a single mesh joining the data from all scans, we compare various methods for computing interpolating and approximating surfaces. We then look at various ways in which surface properties such as color (more properly, spectral reflectance) can be extracted from acquired imagery. Finally, we examine techniques for producing a final model representation that can be efficiently rendered using graphics hardware.
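As a toy illustration of the registration-refinement stage, here is a minimal point-to-point ICP sketch (NumPy): nearest-neighbour correspondences followed by a closed-form rigid alignment via SVD (the Kabsch solution). It is a schematic of the general Iterative Closest Point idea, not any specific scanner pipeline.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP step: match each source point to its nearest destination
    point, then solve the best rigid transform (rotation R, translation
    t) in closed form via SVD (Kabsch)."""
    # nearest-neighbour correspondences
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rigid alignment of src onto matched
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return (R @ src.T).T + t

def icp(src, dst, iters=20):
    for _ in range(iters):
        src = icp_step(src, dst)
    return src
```

With a reasonable initial global alignment (the manual or automatic step discussed above), the nearest-neighbour matches are correct and the refinement converges in a few iterations.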
Self-supervised learning model for skin cancer diagnosis
Automated diagnosis of skin cancer is an active area of research with different classification methods proposed so far. However, classification models based on insufficient labeled training data can adversely affect the diagnosis process if the model has no self-advising and semi-supervising capability. This paper presents a semi-supervised, self-advised learning model for automated recognition of melanoma using dermoscopic images. A deep belief architecture is constructed using labeled data together with unlabeled data, and fine-tuning is done with an exponential loss function in order to maximize separation of the labeled data. In parallel, a self-advised SVM (SA-SVM) algorithm is used to enhance classification results by counteracting the effect of misclassified data. To increase the generalization capability and redundancy of the model, polynomial- and radial-basis-function-based SA-SVMs and the deep network are trained using training samples randomly chosen via a bootstrap technique. The results are then aggregated using least-squares estimation weighting. The proposed model is tested on a collection of 100 dermoscopic images. The variation in classification error is analyzed with respect to the ratio of labeled and unlabeled data used in the training phase. The classification performance is compared with several popular classification methods, and the proposed model using deep neural processing outperforms most of the popular techniques, including KNN, ANN, SVM, and semi-supervised algorithms such as expectation maximization and transductive SVM.
Approximate inverse preconditioners for general sparse matrices
The standard Incomplete LU (ILU) preconditioners often fail for general sparse indefinite matrices because they give rise to 'unstable' factors L and U. In such cases, it may be attractive to approximate the inverse of the matrix directly. This paper focuses on approximate inverse preconditioners based on minimizing ||I - AM||_F, where AM is the preconditioned matrix. An iterative descent-type method is used to approximate each column of the inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., with 'sparse-matrix by sparse-vector' operations. Numerical dropping is applied to each column to maintain sparsity in the approximate inverse. Compared to previous methods, this is a natural way to determine the sparsity pattern of the approximate inverse. This paper discusses options such as Newton and 'global' iteration, self-preconditioning, dropping strategies, and factorized forms. The performance of the options is compared on standard problems from the Harwell-Boeing collection. Theoretical results on general approximate inverses and the convergence behavior of the algorithms are derived. Finally, some ideas and experiments with practical variations and applications are presented.
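The column-wise minimal-residual idea can be sketched in a few lines. The code below is a dense-array illustration with assumed names (a real implementation would use sparse-matrix by sparse-vector kernels, as the abstract stresses, and the diagonal initial guess assumes nonzero diagonal entries): each column of M descends on ||e_j - A m_j||_2 and small entries are dropped.

```python
import numpy as np

def approx_inverse(A, n_iter=5, drop_tol=1e-3):
    """Minimal-residual sketch of a sparse approximate inverse:
    minimize ||I - A M||_F column by column. Each column m_j of M is
    improved by descent steps on ||e_j - A m_j||_2, with small entries
    dropped to keep M sparse (dense arrays here for clarity)."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        m = e / A[j, j]                    # simple diagonal initial guess
        for _ in range(n_iter):
            r = e - A @ m                  # residual of column j
            Ar = A @ r
            denom = Ar @ Ar
            if denom == 0:
                break
            m = m + (r @ Ar) / denom * r   # minimal-residual step along r
            m[np.abs(m) < drop_tol] = 0.0  # numerical dropping
        M[:, j] = m
    return M
```

The dropping threshold is what determines the sparsity pattern adaptively, rather than fixing the pattern in advance.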
Learning Composition Models for Phrase Embeddings
Lexical embeddings can serve as useful representations for words for a variety of NLP tasks, but learning embeddings for phrases can be challenging. While separate embeddings are learned for each word, this is infeasible for every phrase. We construct phrase embeddings by learning how to compose word embeddings using features that capture phrase structure and context. We propose efficient unsupervised and task-specific learning objectives that scale our model to large datasets. We demonstrate improvements on both language modeling and several phrase semantic similarity tasks with various phrase lengths. We make the implementation of our model and the datasets available for general use.
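A minimal sketch of the compositional idea (not the paper's trained model): a phrase embedding as a feature-weighted, normalized sum of its word embeddings, where in the full model the weights would be predicted from phrase structure and context features rather than passed in directly.

```python
import numpy as np

def compose(vectors, weights):
    """Hypothetical weighted-additive composition: a phrase vector is a
    normalized, weighted sum of its word vectors. The real model learns
    the weights from phrase structure and context features; here they
    are supplied explicitly."""
    v = sum(w * x for w, x in zip(weights, vectors))
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

Because only the (feature-to-weight) parameters are learned, not one embedding per phrase, the scheme scales to phrases never seen in training.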
Hook plate fixation for acromioclavicular joint separations restores coracoclavicular distance more accurately than PDS augmentation, however presents with a high rate of acromial osteolysis
Hook plate fixation of acromioclavicular (AC) joint separations carries the disadvantage of compulsory implant removal, occasional implant fatigue and secondary loss of reduction. This study compares the clinical and radiological outcome of a new polyaxial angular stable hook plate (HP) with absorbable polydioxansulfate (PDS) sling. Between 2002 and 2009, out of a consecutive series of 81 patients with symptomatic Rockwood type V lesions 52 patients received clinical and radiographic follow-up (HP: n = 27; PDS: n = 25). HP patients were prospectively analyzed and retrospectively compared with the PDS group. Radiological follow-up included comparative coraco- and acromioclavicular distance (CCD/ACD) measurements as percentage of the uninjured shoulder. For clinical follow-up a standardized functional shoulder assessment with Constant Score, DASH Score, Taft Score and a self-report questionnaire including the visual analog scale (VAS) was carried out. Direct postoperative radiographs showed an overcorrection of CCD in the HP group (−4.4% of the uninjured side) and failure of anatomic correction in the PDS group (+11.0%). After implant removal, CCD increased in the HP group extensively to 16.7% (overall loss of reduction: 21.1%) and 23.9% in the PDS group. Redisplacement (100% increase of CCD) occurred in five cases (HP: 2, PDS: 3) and partial loss of reduction in four cases of each group. Comparing functional results no differences could be seen between both the groups (Constant-Score HP: 91.2 points, PDS: 94.6 points; Taft-Score HP: 9.4 points, PDS: 10.0 points). The DASH-Score revealed better results for PDS group (3.4 points, HP: 8.0 points). Signs of acromial osteolysis appeared in five cases (18.5%) in HP group. There was no case of implant failure. The X-rays of six patients (HP: 4, PDS: 2) showed AC-joint-osteoarthritis. 
Hook plate fixation employing a polyaxial angular stable plate finally restores the coracoclavicular distance more accurately than augmentation with a PDS sling. Although in HP group no implant failure occurred, major disadvantages are initial overcorrection and acromial osteolysis. Both have no influence on final functional results.
Smart Agriculture Based on Cloud Computing and IOT
Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.
Two contradictory conjectures concerning Carmichael numbers
Erdős conjectured that there are x^(1-o(1)) Carmichael numbers up to x, whereas Shanks was skeptical as to whether one might even find an x up to which there are more than √x Carmichael numbers. Alford, Granville and Pomerance showed that there are more than x^(2/7) Carmichael numbers up to x, and gave arguments which even convinced Shanks (in person-to-person discussions) that Erdős must be correct. Nonetheless, Shanks's skepticism stemmed from an appropriate analysis of the data available to him (and his reasoning is still borne out by Pinch's extended new data), and so we herein derive conjectures that are consistent with Shanks's observations, while fitting in with the viewpoint of Erdős and the results of Alford, Granville and Pomerance.
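Shanks's data-driven skepticism is easy to reproduce on a small scale: Korselt's criterion (n composite, squarefree, and p - 1 | n - 1 for every prime p | n) gives a direct way to count Carmichael numbers up to x and compare the count against √x. The sketch below uses trial-division factoring, so it is only practical for small x.

```python
def is_carmichael(n):
    """Korselt's criterion: n is a Carmichael number iff n is composite,
    squarefree, and p - 1 divides n - 1 for every prime p dividing n.
    (All Carmichael numbers are odd, so even n are rejected outright.)"""
    if n < 3 or n % 2 == 0:
        return False
    m, p, factors = n, 2, []
    while p * p <= m:
        if m % p == 0:
            factors.append(p)
            m //= p
            if m % p == 0:        # repeated prime factor: not squarefree
                return False
        else:
            p += 1
    if m > 1:
        factors.append(m)
    if len(factors) < 2:          # prime (or 1): not Carmichael
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)

def count_carmichael(x):
    return sum(is_carmichael(n) for n in range(2, x + 1))
```

At small x the count is indeed far below √x, which is the pattern that informed Shanks's view, even though it is asymptotically overtaken per Alford, Granville and Pomerance.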
Analysis using Imperfect Views from Spoken Language and Acoustic Modalities
Multimodal sentiment classification in practical applications may have to rely on erroneous and imperfect views, namely (a) language transcriptions from a speech recognizer and (b) under-performing acoustic views. This work focuses on improving the representations of these views by performing deep canonical correlation analysis with the representations of the better-performing manual transcription view. Enhanced representations of the imperfect views can be obtained even in the absence of the perfect views and give improved performance under test conditions. Evaluations on the CMU-MOSI and CMU-MOSEI datasets demonstrate the effectiveness of the proposed approach.
Spondylometaphyseal dysplasia (Kozlowski type): case report.
Kozlowski syndrome is the most common type of spondylometaphyseal dysplasia (SMD). It is characterized by short stature (130 to 150 cm), pectus carinatum, limited elbow and hip movement, mild bowleg deformity, and curvature of the spinal column. Children with Kozlowski dwarfism usually are not recognized at birth, since they have normal clinical features, weight, and size. This article reports the dental treatment and oral findings of a 14-year-old female patient with Kozlowski dwarfism.
On the Cost of Factoring RSA-1024
As many cryptographic schemes rely on the hardness of integer factorization, exploration of the concrete costs of factoring large integers is of considerable interest. Most research has focused on PC-based implementations of factoring algorithms; these have successfully factored 530-bit integers, but practically cannot scale much further. Recent works have placed the bottleneck at the sieving step of the Number Field Sieve algorithm. We present a new implementation of this step, based on a custom-built hardware device that achieves a very high level of parallelism "for free". The design combines algorithmic and technological aspects: by devising algorithms that take advantage of certain tradeoffs in chip manufacturing technology, efficiency is increased by many orders of magnitude compared to previous proposals. Using this hypothetical device (and ignoring the initial R&D costs), it appears possible to break a 1024-bit RSA key in one year using a device whose cost is about $10M (previous predictions were in the trillions of dollars).
Coagulation of a giant hemangioma in glans penis with holmium laser.
A 21-year-old man presented with an enlarged giant hemangioma on the glans penis, which also caused erectile dysfunction (ED) that partially responded to the intracavernous injection stimulation test. Although the findings on magnetic resonance imaging (MRI) indicated a glandular hemangioma, penile color Doppler ultrasound revealed a cavernosal hemangioma invading the glans. Surgical excision was avoided because of the broad extension of the glandular lesion. Holmium laser coagulation was applied to the lesion for cosmetic concerns. However, the cosmetic results after holmium laser application were not as impressive as expected, and there was no improvement in the intracavernous injection stimulation test. In conclusion, holmium laser application should not be used for hemangiomas of the glans penis related to the corpus cavernosum, but further studies are needed to reveal the effects of holmium laser application in small hemangiomas restricted to the glans penis.
Practicing Differential Privacy in Health Care: A Review
Differential privacy has gained a lot of attention in recent years as a general model for the protection of personal information when used and disclosed for secondary purposes. It has also been proposed as an appropriate model for protecting health data. In this paper we review the current literature on differential privacy and highlight important general limitations of the model and the proposed mechanisms. We then examine some practical challenges to the application of differential privacy to health data. The most severe limitation is the theoretical nature of the privacy parameter ε. It has implications for our ability to quantify the level of anonymization that would be guaranteed to patients, as well as for assessing responsibilities when a privacy breach occurs. The review concludes by identifying the areas that researchers and practitioners need to address to increase the adoption of differential privacy for health data.
1 Background
We consider private data analysis in the setting of a trusted data custodian that has a database consisting of rows of data (records). Each row represents information about an individual. The custodian needs to publish an anonymized version of the data, a version that is useful to data analysts and that protects the privacy of the individuals in the database. The problem is known as privacy-preserving data publishing. Various approaches/models have been proposed in different research communities over the past few years [1]. These models depend on the background knowledge of adversaries, as these adversaries can infer sensitive information from the dataset by exploiting background information.
Objective Reduction in Evolutionary Multiobjective Optimization: Theory and Applications
Many-objective problems represent a major challenge in the field of evolutionary multiobjective optimization in terms of search efficiency, computational cost, decision making, visualization, and so on. This leads to various research questions, in particular whether certain objectives can be omitted in order to overcome or at least diminish the difficulties that arise when many, that is, more than three, objective functions are involved. This study addresses this question from different perspectives. First, we investigate how adding or omitting objectives affects the problem characteristics and propose a general notion of conflict between objective sets as a theoretical foundation for objective reduction. Second, we present both exact and heuristic algorithms to systematically reduce the number of objectives, while preserving as much as possible of the dominance structure of the underlying optimization problem. Third, we demonstrate the usefulness of the proposed objective reduction method in the context of both decision making and search for a radar waveform application as well as for well-known test functions.
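The core notion, whether omitting an objective changes the dominance structure, can be illustrated with a tiny exact check (illustrative names; the paper's exact and heuristic algorithms are more sophisticated). Minimization is assumed throughout.

```python
def dominates(a, b, objs):
    """Weak dominance of a over b on the objective subset `objs`
    (minimization): a is no worse than b in every selected objective."""
    return all(a[i] <= b[i] for i in objs)

def is_redundant(points, objs, k):
    """Objective k is redundant w.r.t. the set `objs` if removing it
    leaves the pairwise weak-dominance relation on `points` unchanged."""
    reduced = [i for i in objs if i != k]
    return all(dominates(a, b, objs) == dominates(a, b, reduced)
               for a in points for b in points)
```

An objective that merely duplicates (or is monotone in) another one passes this check and can be dropped without altering which solutions dominate which.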
Design for 5G Mobile Network Architecture
In this paper we propose a new design solution for the network architecture of future 5G mobile networks. The proposed design is based on a user-centric mobile environment with many wireless and mobile technologies on the ground. In a heterogeneous wireless environment, changing all wireless technologies, whether new or old, is not possible, so each solution towards the next-generation mobile and wireless networks should be implemented in the service stratum, while the radio access technologies belong to the transport stratum, following the Next Generation Networks approach. In the proposed design, the user terminal has the possibility to change the Radio Access Technology (RAT) based on certain criteria. For the purpose of transparent change of RATs by the mobile terminal, we introduce a so-called Policy Router as a node in the core network, which establishes IP tunnels to the mobile terminal via the different RATs available to the terminal. The selection of the RAT is performed by the mobile terminal using the proposed user agent for multi-criteria decision making, based on the experience from performance measurements performed by the mobile terminal. For the process of performance measurements, we introduce the QoSPRO procedure for control information exchange between the mobile terminal and the Policy Router.
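The terminal-side multi-criteria choice can be illustrated with a simple additive weighting scheme, a common baseline for such decisions; the paper's actual user agent and the QoSPRO measurements are not reproduced here, and all names below are illustrative.

```python
def select_rat(measurements, weights):
    """Simple additive multi-criteria scoring over candidate RATs.
    `measurements` maps RAT name -> dict of normalized criteria in
    [0, 1] (higher is better, e.g. throughput); `weights` gives the
    user's priorities. Returns the best-scoring RAT."""
    def score(m):
        return sum(weights[c] * m[c] for c in weights)
    return max(measurements, key=lambda r: score(measurements[r]))
```

Changing the weight vector models different user policies, so the same measured data can lead different terminals to different RAT choices.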
Anticonvulsant and Sedative-Hypnotic Activities of N-Acetyl / Methyl Isatin Derivatives
A series of N-methyl/acetyl 5-(un)substituted isatin-3-semicarbazones were screened for anticonvulsant and sedative-hypnotic activities. The results revealed that protection was obtained in all the screens, i.e., the maximal electroshock (MES), subcutaneous pentylenetetrazole (scPTZ), and subcutaneous strychnine (scSTY) screens. Three compounds (5a, 5e and 5i) possessed anti-MES activity and all the compounds were less neurotoxic than phenytoin, carbamazepine and phenobarbital. All the compounds were completely non-toxic at 4 h when compared to phenytoin, carbamazepine and phenobarbital, which were toxic at 100 and 300 mg/kg, respectively. Compounds 5a, 5b, 5e, 5g and 5i emerged as the active compounds in the oral MES screen. Selected compounds were evaluated for quantification studies in the MES, scPTZ and neurotoxicity screens after i.p. (5b, 5i) and oral (5a, 5g) administration in rats. Among all the compounds, 5a, 5b and 5g emerged as broad-spectrum compounds as indicated by their protection in the MES, scSTY and scPTZ screens. All the compounds except compound 5b showed significant sedative-hypnotic activity.
Unsupervised document classification using sequential information maximization
We present a novel sequential clustering algorithm which is motivated by the Information Bottleneck (IB) method. In contrast to the agglomerative IB algorithm, the new sequential (sIB) approach is guaranteed to converge to a local maximum of the information, as required by the original IB principle. Moreover, the time and space complexity are significantly improved, typically linear in the data size. We apply this algorithm to unsupervised document classification. In our evaluation, on small and medium size corpora, the sIB is found to be consistently superior to all the other clustering methods we examine, typically by a significant margin. Moreover, the sIB results are comparable to those obtained by a supervised Naive Bayes classifier. Finally, we propose a simple procedure for trading clusters' recall for higher precision, and show how this approach can extract clusters which match the existing topics of the corpus almost perfectly.
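A compact sketch of the sequential draw-and-reassign loop, assuming uniform document priors and hard clustering (the merge-cost form follows the IB literature, but this is not the authors' code and the bookkeeping is simplified):

```python
import numpy as np

def merge_cost(p, q, wp, wq):
    """sIB-style merge cost: (wp + wq) times the weighted
    Jensen-Shannon divergence between distributions p and q."""
    pi1, pi2 = wp / (wp + wq), wq / (wp + wq)
    m = pi1 * p + pi2 * q
    kl = lambda a, b: float(np.sum(a[a > 0] * np.log(a[a > 0] / b[a > 0])))
    return (wp + wq) * (pi1 * kl(p, m) + pi2 * kl(q, m))

def sib(docs, k, sweeps=10, seed=0):
    """Sequential reassignment: draw each document, remove it from its
    cluster, and put it in the cluster with the cheapest merge cost.
    Each row of `docs` is a document's word distribution p(y|x)."""
    rng = np.random.default_rng(seed)
    n = len(docs)
    labels = np.arange(n) % k              # deterministic non-empty start
    for _ in range(sweeps):
        changed = False
        for x in rng.permutation(n):
            costs = []
            for c in range(k):
                members = [i for i in range(n) if labels[i] == c and i != x]
                if not members:            # never merge into an empty cluster
                    costs.append(np.inf)
                    continue
                centroid = docs[members].mean(axis=0)
                costs.append(merge_cost(docs[x], centroid, 1, len(members)))
            best = int(np.argmin(costs))
            changed |= best != labels[x]
            labels[x] = best
        if not changed:
            break
    return labels
```

Each sweep touches every document once, which is where the roughly linear time behavior comes from.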
Inhibition of catechol-O-methyltransferase modifies acute homocysteine rise during repeated levodopa application in patients with Parkinson’s disease
Elevated plasma total homocysteine concentrations were observed in levodopa/dopa decarboxylase inhibitor (DDI)-treated patients with Parkinson's disease (PD). Degradation of levodopa to 3-O-methyldopa via the enzyme catechol-O-methyltransferase (COMT) is a methyl-group-demanding reaction. It generates homocysteine from the methyl group donor methionine. But outcomes have been inconsistent, as most investigators determined homocysteine after an overnight washout of levodopa. They did not consider the acute effects of levodopa/DDI intake, in relation with COMT inhibition, on homocysteine bioavailability. The purpose of this study is to measure levels of homocysteine, levodopa, and its metabolite 3-O-methyldopa in plasma after reiterated oral levodopa/DDI administration with and without the COMT inhibitor entacapone (EN). Sixteen PD patients received 100 mg levodopa/carbidopa three times on day 1 and with EN on day 2 under standardized conditions. Homocysteine concentrations increased on day 1 and generally over the whole interval. No significant rise of homocysteine appeared on day 2 only. Levodopa bioavailability was higher on day 2 due to the COMT inhibition. No change of 3-O-methyldopa appeared between both days. The correlation coefficients between homocysteine, levodopa, and 3-O-methyldopa were higher on day 1 than on day 2. The rise of homocysteine does not only depend on the oral levodopa dose, but also on the acute intake of levodopa/DDI with or without COMT inhibition. Measurements of homocysteine should consider acute repeated levodopa/DDI applications, as homocysteine and the metabolically related 3-O-methyldopa accumulate due to their long plasma half-lives, in contrast to short-lived levodopa.
Prenatal factors for childhood blood pressure mediated by intrauterine and/or childhood growth?
OBJECTIVE Some prenatal factors may program an offspring's blood pressure, but existing evidence is inconclusive and mechanisms remain unclear. We examined the mediating roles of intrauterine and childhood growth in the associations between childhood systolic blood pressure (SBP) and 5 potentially modifiable prenatal factors: maternal smoking during pregnancy; prepregnancy BMI; pregnancy weight gain; chronic hypertension; and preeclampsia-eclampsia. METHODS The sample contained 30 461 mother-child pairs in the Collaborative Perinatal Project. Prenatal data were extracted from obstetric forms, and children's SBP was measured at 7 years of age. Potential mediation by intrauterine growth restriction (IUGR) and childhood growth was examined by the causal step method. RESULTS Heavy maternal smoking during pregnancy was significantly associated with higher offspring SBP (adjusted mean difference versus nonsmoking: 0.73 mm Hg [95% confidence interval (CI): 0.32 to 1.14]), which attenuated to null (0.13 [95% CI: -0.27 to 0.54]) after adjustment for changes in BMI from birth to 7 years of age. Prepregnancy overweight-obesity was significantly associated with higher offspring SBP (versus normal weight: 0.89 mm Hg [95% CI: 0.52 to 1.26]), which also attenuated to null (-0.04 mm Hg [95% CI: -0.40 to 0.31]) after adjustment for childhood BMI trajectory. Adjustment for BMI trajectory augmented the association between maternal pregnancy weight gain and offspring SBP. Adjustment for childhood weight trajectory similarly changed these associations. However, all these associations were independent of IUGR. CONCLUSIONS Childhood BMI and weight trajectory, but not IUGR, may largely mediate the associations of maternal smoking during pregnancy and prepregnancy BMI with an offspring's SBP.
BERT: BEhavioral Regression Testing
During maintenance, it is common to run the new version of a program against its existing test suite to check whether the modifications in the program introduced unforeseen side effects. Although this kind of regression testing can be effective in identifying some change-related faults, it is limited by the quality of the existing test suite. Because generating tests for real programs is expensive, developers build test suites by finding acceptable tradeoffs between cost and thoroughness of the tests. Such test suites necessarily target only a small subset of the program's functionality and may miss many regression faults. To address this issue, we introduce the concept of behavioral regression testing, whose goal is to identify behavioral differences between two versions of a program through dynamic analysis. Intuitively, given a set of changes in the code, behavioral regression testing works by (1) generating a large number of test cases that focus on the changed parts of the code, (2) running the generated test cases on the old and new versions of the code and identifying differences in the tests' outcome, and (3) analyzing the identified differences and presenting them to the developers. By focusing on a subset of the code and leveraging differential behavior, our approach can provide developers with more (and more focused) information than traditional regression testing techniques. This paper presents our approach and performs a preliminary assessment of its feasibility.
Percentage and Relative Error Measures in Forecast Evaluation
A Bayesian network-based framework for semantic image understanding
Current research in content-based semantic image understanding is largely confined to exemplar-based approaches built on low-level feature extraction and classification. The ability to extract both low-level and semantic features and perform knowledge integration of different types of features is expected to raise semantic image understanding to a new level. Belief networks, or Bayesian networks (BN), have proven to be an effective knowledge representation and inference engine in artificial intelligence and expert systems research. Their effectiveness is due to the ability to explicitly integrate domain knowledge in the network structure and to reduce a joint probability distribution to conditional independence relationships. In this paper, we present a general-purpose knowledge integration framework that employs BN in integrating both low-level and semantic features. The efficacy of this framework is demonstrated via three applications involving semantic understanding of pictorial images. The first application aims at detecting main photographic subjects in an image, the second aims at selecting the most appealing image in an event, and the third aims at classifying images into indoor or outdoor scenes. With these diverse examples, we demonstrate that effective inference engines can be built within this powerful and flexible framework according to specific domain knowledge and available training data to solve inherently uncertain vision problems. © 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
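The flavor of such evidence integration can be shown with a toy two-layer network in which feature nodes are conditionally independent given the scene class, a naive-Bayes special case of a BN; the numbers below are invented for illustration.

```python
import math

def posterior(prior, likelihoods):
    """Combine class priors with per-feature likelihoods,
    P(class | features) ∝ P(class) * Π_f P(feature_f | class),
    and normalize over the classes."""
    unnorm = {c: prior[c] * math.prod(l[c] for l in likelihoods)
              for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}
```

In a full BN the feature nodes need not be conditionally independent; the network structure encodes whatever domain knowledge is available, which is exactly the flexibility the framework above exploits.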
Coordination of Multiagents Interacting Under Independent Position and Velocity Topologies
We consider the coordination control for multiagent systems in a very general framework where the position and velocity interactions among agents are modeled by independent graphs. Different algorithms are proposed and analyzed for different settings, including the case without leaders and the case with a virtual leader under fixed position and velocity interaction topologies, as well as the case with a group velocity reference signal under switching velocity interaction. It is finally shown that the proposed algorithms are feasible in achieving the desired coordination behavior provided the interaction topologies satisfy the weakest possible connectivity conditions. Such conditions relate only to the structure of the interactions among agents, not to their magnitudes, and thus are easy to verify. Rigorous convergence analysis is performed based on a combined use of tools from algebraic graph theory, matrix analysis, and Lyapunov stability theory.
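The setting can be made concrete with a small simulation under illustrative assumptions: three double-integrator agents, symmetric position and velocity Laplacians built from two different connected graphs, and a forward-Euler discretization of x' = v, v' = -Lp x - Lv v (leaderless case).

```python
import numpy as np

def simulate(x, v, Lp, Lv, dt=0.01, steps=5000):
    """Double-integrator coordination with independent interaction
    topologies: the position graph (Laplacian Lp) and the velocity
    graph (Laplacian Lv) need not coincide. Forward-Euler integration
    of x' = v, v' = -Lp x - Lv v."""
    for _ in range(steps):
        a = -Lp @ x - Lv @ v
        x = x + dt * v
        v = v + dt * a
    return x, v
```

With symmetric Laplacians the averages of position and velocity are invariants of the dynamics, so the agents settle on the mean of the initial positions when the mean initial velocity is zero.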
Fasciola hepatica infection in humans: overcoming problems for the diagnosis
Fascioliasis is a zoonosis currently considered a priority foodborne trematode disease by the World Health Organization. Our study presents three cases of F. hepatica infection diagnosed by direct, indirect and/or imaging diagnostic techniques, showing the need for their combined use. In order to overcome some difficulties of the presently available methods, we show for the first time the application of molecular tools to improve human fascioliasis diagnosis by employing a PCR protocol based on a repetitive element as the target sequence. In conclusion, diagnosis of human fascioliasis has to be carried out by a combination of diagnostic techniques that allow the detection of infection in different disease phases, different epidemiological situations, and known/new transmission patterns in the current scenario.
Human-Centric Automation and Optimization for Smart Homes
A smart home needs to be human-centric, where it tries to fulfill human needs given the devices it has. Various works have been developed to provide homes with reasoning and planning capability to fulfill goals, but most do not support complex sequences of plans or require significant manual effort in devising subplans. This is further aggravated by the need to optimize conflicting personal goals. A solution is to solve the planning problem represented as a constraint satisfaction problem (CSP). But CSP uses hard constraints and, thus, cannot handle optimization and partial goal fulfillment efficiently. This paper aims to extend this approach to weighted CSP. Knowledge representation to help in generating planning rules is also proposed, as well as methods to improve performance. Case studies show that the system can provide intelligent and complex plans from activities generated from semantic annotations of the devices, as well as optimization to maximize personal constraints' fulfillment. Note to Practitioners—A smart home should maximize the fulfillment of personal goals that are often conflicting. For example, it should try to fulfill as much as possible the requests made by both a mother and a daughter who want to watch TV but have different channel preferences. That said, every person has a set of goals or constraints that they hope the smart home can fulfill. Therefore, a human-centric system that automates the loosely coupled devices of the smart home to optimize the goals or constraints of individuals in the home is developed. Automated planning is done using converted services extracted from devices, where conversion is done using existing tools and concepts from Web technologies. Weighted constraint satisfaction, which provides a declarative approach covering a large problem domain, is proposed to realize the automated planner with optimization capability. Details to speed up planning through search space reduction are also given.
Real-time case studies are run in a prototype smart home to demonstrate its applicability and intelligence, where every planning run completes within a maximum of 10 s. The vision of this paper is to be able to implement such a system in a community, where devices everywhere can cooperate to ensure the well-being of the community.
TECHNICAL AND STRATEGIC HUMAN RESOURCE MANAGEMENT EFFECTIVENESS AS DETERMINANTS OF FIRM PERFORMANCE
We evaluated the impact of human resource (HR) managers’ capabilities on HR management effectiveness and the latter’s impact on corporate financial performance. For 293 U.S. firms, effectiveness was associated with capabilities and attributes of HR staff. We also found relationships between HR management effectiveness and productivity, cash flow, and market value. Findings were consistent across market and accounting measures of performance and with corrections for biases.
Planar Multi-Band Microwave Components Based on the Generalized Composite Right/Left Handed Transmission Line Concept
This paper is focused on the design of generalized composite right/left handed (CRLH) transmission lines in a fully planar configuration, that is, without the use of surface-mount components. These artificial lines exhibit multiple, alternating backward and forward-transmission bands, and are therefore useful for the synthesis of multi-band microwave components. Specifically, a quad-band power splitter, a quad-band branch line hybrid coupler and a dual-bandpass filter, all of them based on fourth-order CRLH lines (i.e., lines exhibiting 2 left-handed and 2 right-handed bands alternating), are presented in this paper. The accurate circuit models, including parasitics, of the structures under consideration (based on electrically small planar resonators), as well as the detailed procedure for the synthesis of these lines using such circuit models, are given. It will be shown that satisfactory results in terms of performance and size can be obtained through the proposed approach, fully compatible with planar technology.
Superimposed Sparse Parameter Classifiers for Face Recognition
In this paper, a novel classifier, called the superimposed sparse parameter (SSP) classifier, is proposed for face recognition. SSP is motivated by two-phase test sample sparse representation (TPTSSR) and linear regression classification (LRC), both of which can be viewed as extensions of sparse representation classification (SRC). SRC uses all the training samples to produce the sparse representation vector for classification. LRC, which can be interpreted as L2-norm sparse representation, uses the distances between the test sample and the class subspaces for classification. TPTSSR is also an L2-norm sparse representation and computes the classification distance in two phases. Instead of distances, the SSP classifier employs the SSPs, expressed as the sum of the linear regression parameters of each class over iterations, for face classification. Further, a fast SSP (FSSP) classifier is also suggested to reduce the computational cost. Extensive experiments on the Georgia Tech, ORL, CVL, AR, and CASIA face databases are used to evaluate the proposed algorithms. The experimental results demonstrate that the proposed methods achieve better recognition rates than LRC, SRC, collaborative representation-based classification, regularized robust coding, relaxed collaborative representation, support vector machines, and TPTSSR for face recognition under various conditions.
Intelligent Load Shedding Need for a Fast and Optimal Solution
To ensure system stability and availability during disturbances, industrial facilities equipped with on-site generation generally utilize some type of load shedding scheme. In recent years, conventional underfrequency and PLC-based load shedding schemes have been integrated with computerized power management systems to provide an “automated” load shedding system. However, these automated solutions lack system operating knowledge and remain best-guess methods, which typically result in excessive or insufficient load shedding. An intelligent load shedding system can provide faster and optimal load relief by utilizing actual operating conditions and knowledge of past system disturbances. This paper presents the need for an intelligent, automated load shedding system. Simulations of case studies for two industrial electrical networks are performed to demonstrate the advantages of an intelligent load shedding system over conventional load shedding methods from the design and operation perspectives. Index Terms — Load Shedding (LS), Intelligent Load Shedding (ILS), Power System Transient Stability, Frequency Relay, Programmable Logic Controller (PLC), Power Management System
Cellular planning for next generation wireless mobile network using novel energy efficient CoMP
Cellular planning, also called radio network planning, is a crucial stage in the deployment of a wireless network. The enormous increase in data traffic requires augmentation of coverage, capacity, throughput, and quality of service, along with optimized network cost. The main goal of cellular planning is to enhance the spectral efficiency, and hence the throughput, of a network. CoMP describes a scheme in which a group of base stations (BSs) dynamically coordinate and cooperate among themselves to convert interference into a beneficial signal. A novel CoMP algorithm is discussed for a two-tier heterogeneous network. The number of clusters is obtained using the variance ratio (VR) criterion, and the centroid of each cluster, obtained using the K-means algorithm, provides the BS deployment position. Applying CoMP in this network using a DPS approach with a sleep mode of power saving provides higher energy efficiency, SINR, and throughput compared with nominal CoMP. Network planning using a stochastic method and Voronoi tessellation with a two-tier network has been applied to a dense region of Surat city in the Gujarat state of India. The results show a clear improvement in signal-to-interference-plus-noise ratio (SINR) by 25% and in network energy efficiency by 28% using the proposed CoMP transmission.
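The clustering step described above can be sketched as follows. This is a minimal, hypothetical illustration of placing base stations at K-means centroids of user locations (Lloyd's algorithm in pure Python); the coordinates, the number of clusters, and the iteration budget are illustrative assumptions, not the paper's parameters.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's K-means on 2-D points; returns k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute centroids (keep the old one if a cluster empties)
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

# Two dense user clusters -> two candidate BS sites near their centers
users = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),
         (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
sites = kmeans(users, 2)
```

In the paper's setting, the number of clusters `k` would come from the variance ratio criterion rather than being fixed by hand.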
Euthanasia-related strain and coping strategies in animal shelter employees.
OBJECTIVE To identify and evaluate coping strategies advocated by experienced animal shelter workers who directly engaged in euthanizing animals. DESIGN Cross-sectional study. SAMPLE POPULATION Animal shelters across the United States in which euthanasia was conducted (5 to 100 employees/shelter). PROCEDURES With the assistance of experts associated with the Humane Society of the United States, the authors identified 88 animal shelters throughout the United States in which animal euthanasia was actively conducted and for which contact information regarding the shelter director was available. Staff at 62 animal shelters agreed to participate in the survey. Survey packets were mailed to the 62 shelter directors, who then distributed them to employees. The survey included questions regarding respondent age, level of education, and role and asked those directly involved in the euthanasia of animals to provide advice on strategies for new euthanasia technicians to deal with the related stress. Employees completed the survey and returned it by mail. Content analysis techniques were used to summarize survey responses. RESULTS Coping strategies suggested by 242 euthanasia technicians were summarized into 26 distinct coping recommendations in 8 categories: competence or skills strategies, euthanasia behavioral strategies, cognitive or self-talk strategies, emotional regulation strategies, separation strategies, get-help strategies, seek long-term solution strategies, and withdrawal strategies. CONCLUSIONS AND CLINICAL RELEVANCE Euthanizing animals is a major stressor for many animal shelter workers. Information regarding the coping strategies identified in this study may be useful for training new euthanasia technicians.
Alienation and Emotional Maturity
Abstract Despite the lengthy interest in alienation, scholars have not addressed the question of differential individual response to social and personal problems. Research has indicated that the alienated are less likely to confront their problems; e.g., alienation correlates negatively with utilization of medical services and with marital adjustment scores. Emotional maturity, on the other hand, has been found to be related to better marital adjustment, etc. One under-utilized approach, therefore, is to consider alienation as a manifestation of inadequate socialization. A study of 582 lowerclassmen at a Midwestern university found virtually zero correlations between social background factors and alienation, but significant, inverse relationships between alienation and emotional maturity. However, despite the link between alienation and emotional maturity, we suggest that further research be directed toward the interaction between personality and situation, based on the proposition that individuals may manifest alienat...
Complex heatmaps reveal patterns and correlations in multidimensional genomic data
UNLABELLED Parallel heatmaps with carefully designed annotation graphics are powerful for efficient visualization of patterns and relationships among high dimensional genomic data. Here we present the ComplexHeatmap package that provides rich functionalities for customizing heatmaps, arranging multiple parallel heatmaps and including user-defined annotation graphics. We demonstrate the power of ComplexHeatmap to easily reveal patterns and correlations among multiple sources of information with four real-world datasets. AVAILABILITY AND IMPLEMENTATION The ComplexHeatmap package and documentation are freely available from the Bioconductor project: http://www.bioconductor.org/packages/devel/bioc/html/ComplexHeatmap.html CONTACT [email protected] SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Adaptive Partitioning for Very Large RDF Data
State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation, while others apply heuristics aiming at minimizing inter-node communication during query evaluation. This requires an expensive data pre-processing phase, leading to high startup costs for very large RDF knowledge bases. Apriori knowledge of the query workload has also been used to create partitions, which however are static and do not adapt to workload changes; as a result, inter-node communication cannot be consistently avoided for queries that are not favored by the initial data partitioning. In this paper, we propose AdHash, a distributed RDF system, which addresses the shortcomings of previous work. First, AdHash applies lightweight partitioning on the initial data, that distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdHash takes full advantage of the partitioning to (i) support the fully parallel processing of join patterns on subjects and (ii) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdHash monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdHash implements an eviction policy for the redistributed patterns.
Our experiments with synthetic and real data verify that AdHash (i) starts faster than all existing systems, (ii) processes thousands of queries before other systems become online, and (iii) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in sub-seconds.
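The lightweight initial partitioning described above can be sketched in a few lines. This is a hedged illustration, not AdHash's implementation: each triple is routed to a worker by hashing its subject, so all triples sharing a subject land on the same worker and subject-subject star joins need no communication. The worker count and triples are made-up examples.

```python
from collections import defaultdict
import hashlib

def worker_for(subject, num_workers):
    # Stable hash (unlike Python's per-process-salted hash()) so that
    # placement is repeatable across runs and machines.
    digest = hashlib.md5(subject.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_workers

def partition(triples, num_workers):
    """Shard (subject, predicate, object) triples by subject hash."""
    shards = defaultdict(list)
    for s, p, o in triples:
        shards[worker_for(s, num_workers)].append((s, p, o))
    return shards

triples = [("alice", "knows", "bob"),
           ("alice", "likes", "pizza"),
           ("bob", "knows", "carol")]
shards = partition(triples, 4)
# Both "alice" triples necessarily end up on the same worker.
```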
Gendering Cybercrime
Very few cybercrimes are committed by females, and consequently there has been a dearth of research on this topic. It is important that we understand the relationships between gender and cybercrime, both to inform crime prevention strategies and to understand the particular problems female offenders may face. This research draws from extensive data gathered on cybercrime offenders, both male and female. It explores the types of roles female computer crime offenders take on and their social experiences, finding that, compared to males, they experience more adverse life events. Reasons for the lack of female involvement in cybercrime include the barriers females face when engaging with the predominantly masculine online communities that are important for learning and sharing information.
Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans.
Brain-computer interfaces (BCIs) can provide communication and control to people who are totally paralyzed. BCIs can use noninvasive or invasive methods for recording the brain signals that convey the user's commands. Whereas noninvasive BCIs are already in use for simple applications, it has been widely assumed that only invasive BCIs, which use electrodes implanted in the brain, can provide multidimensional movement control of a robotic arm or a neuroprosthesis. We now show that a noninvasive BCI that uses scalp-recorded electroencephalographic activity and an adaptive algorithm can provide humans, including people with spinal cord injuries, with multidimensional point-to-point movement control that falls within the range of that reported with invasive methods in monkeys. In movement time, precision, and accuracy, the results are comparable to those with invasive BCIs. The adaptive algorithm used in this noninvasive BCI identifies and focuses on the electroencephalographic features that the person is best able to control and encourages further improvement in that control. The results suggest that people with severe motor disabilities could use brain signals to operate a robotic arm or a neuroprosthesis without needing to have electrodes implanted in their brains.
On the Global Convergence of Majorization Minimization Algorithms for Nonconvex Optimization Problems
In this paper, we study the global convergence of majorization minimization (MM) algorithms for solving nonconvex regularized optimization problems. MM algorithms have received great attention in machine learning. However, when applied to nonconvex optimization problems, the convergence of MM algorithms is a challenging issue. We introduce the theory of the Kurdyka-Łojasiewicz inequality to address this issue. In particular, we show that many nonconvex problems enjoy the Kurdyka-Łojasiewicz property and establish the global convergence result of the corresponding MM procedure. We also extend our result to a well-known method called the concave-convex procedure (CCCP).
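To make the MM idea concrete, here is a minimal sketch (not the paper's algorithm) for an L-smooth but nonconvex scalar objective: the quadratic surrogate g(x | x_k) = f(x_k) + f'(x_k)(x - x_k) + (L/2)(x - x_k)^2 majorizes f and touches it at x_k, so minimizing g in closed form gives the update x_{k+1} = x_k - f'(x_k)/L, which decreases f monotonically. The objective and constants below are illustrative.

```python
import math

def f(x):   # nonconvex objective: x^2 + 2 sin(3x)
    return x * x + 2.0 * math.sin(3.0 * x)

def df(x):  # its derivative
    return 2.0 * x + 6.0 * math.cos(3.0 * x)

L = 20.0    # upper bound on |f''(x)| = |2 - 18 sin(3x)|, so g majorizes f
x = 2.5
values = [f(x)]
for _ in range(200):
    x = x - df(x) / L          # closed-form minimizer of the surrogate
    values.append(f(x))
# values is nonincreasing, and x approaches a stationary point of f
```

Each step minimizes an upper bound that is tight at the current iterate, which is exactly what guarantees the monotone descent the global-convergence analysis builds on.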
Color and Texture Based Identification and Classification of food Grains using different Color Models and Haralick features
This paper presents a study on the identification and classification of food grains using different color models, namely L*a*b*, HSV, HSI, and YCbCr, by combining color and texture features without performing preprocessing. K-NN and minimum-distance classifiers are used to identify and classify the different types of food grains using local and global features. Texture and color are important features in the classification of different objects. Local features, such as Haralick features, are computed from the co-occurrence matrix as texture features, and global features are computed from the cumulative histogram along with the color features. The experiments were carried out on different food grain classes. The non-uniformity of the RGB color space is eliminated by the L*a*b*, HSV, HSI, and YCbCr color spaces. The correct classification rates achieved with the different color models are quite good. Keywords—Feature extraction; co-occurrence matrix; texture information; global features; cumulative histogram; RGB, L*a*b*, HSV, HSI, and YCbCr color models.
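A minimum-distance classifier of the kind mentioned above can be sketched as follows: each grain class is summarized by the mean of its training feature vectors (e.g., color plus Haralick texture features), and a test sample is assigned to the class whose mean is nearest in Euclidean distance. The class names and feature values are invented for illustration only.

```python
def class_means(train):
    """Mean feature vector per class from {label: [feature vectors]}."""
    means = {}
    for label, vecs in train.items():
        dim = len(vecs[0])
        means[label] = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
    return means

def classify(x, means):
    """Assign x to the class with the nearest mean (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda lbl: dist2(x, means[lbl]))

# Hypothetical 2-D features: [mean hue, texture contrast]
train = {
    "wheat": [[0.8, 0.1], [0.9, 0.2]],
    "rice":  [[0.2, 0.7], [0.3, 0.8]],
}
means = class_means(train)
label = classify([0.85, 0.15], means)
```

The K-NN variant the paper also uses differs only in comparing the test vector against individual training samples rather than class means.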
Rhythm Nation: Pastiche and Spectral Heritage in English Music
ABSTRACT Peter Ackroyd's 1992 novel English Music offers a conflicted take on heritage. Appearing in the midst of debates about national heritage, the novel foregrounds legacies, both familial and cultural. The novel's allusions and intertexts suggest an ethnically restricted version of heritage; however, the novel's stress on pastiche and counterfeits suggest an alternative mode of conceiving legacy, one framed as a type of possession. This revised version of heritage offers a past open to those who might lay claim to it, a more fitting model for a multicultural society.
Measuring the effectiveness of ISO26262 compliant self test library
Automotive SoCs are constantly being tested for correct functional operation, even long after they have left fabrication. The testing is done at the start of operation (car ignition) and repeatedly during operation (during the drive) to check for faults. Faults can result from, but are not restricted to, a failure in a part of a semiconductor circuit such as a failed transistor, interconnect failure due to electromigration, or faults caused by soft errors (e.g., an alpha particle switching a bit in a RAM or other circuit element). While the tests can run long after the chip was taped-out, the safety definition and test plan effort is starting as early as the specification definitions. In this paper we give an introduction to functional safety concentrating on the ISO26262 standard and we touch on a couple of approaches to functional safety for an Intellectual Property (IP) part such as a microprocessor, including software self-test libraries and logic BIST. We discuss the additional effort needed for developing a design for the automotive market. Lastly, we focus on our experience of using fault grading as a method for developing a self-test library that periodically tests the circuit operation. We discuss the effect that implementation decisions have on this effort and why it is important to start with this effort early in the design process.
Review of halftoning techniques
Digital halftoning remains an active area of research with a plethora of new and enhanced methods. While several fine overviews exist, the purpose of this paper is to review the basic classes of techniques retrospectively. Halftoning algorithms are presented by the nature of the appearance of the resulting patterns, including white noise, recursive tessellation, the classical screen, and blue noise. The metric of radially averaged power spectra is reviewed, and special attention is paid to frequency-domain characteristics. The paper concludes with a look at the components that comprise a complete image rendering system, in particular when the number of output levels is not restricted to a power of 2. A very efficient means of multilevel dithering is presented based on scaling ordered-dither arrays. The case of real-time video rendering is considered, where the YUV-to-RGB conversion is incorporated into the dithering system. Example illustrations are included for each of the techniques described.
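As a small sketch of the "classical screen" family the review covers, here is ordered dithering with a 2x2 Bayer matrix: the threshold matrix is tiled over the image, and a pixel turns on when its intensity exceeds the local threshold. Gray levels are in [0, 1]; the image below is illustrative.

```python
BAYER2 = [[0, 2],
          [3, 1]]  # Bayer index matrix; thresholds are (index + 0.5) / 4

def dither(image):
    """Binarize a grayscale image (values in [0,1]) with a tiled 2x2 screen."""
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, v in enumerate(row):
            threshold = (BAYER2[y % 2][x % 2] + 0.5) / 4.0
            out_row.append(1 if v > threshold else 0)
        out.append(out_row)
    return out

# A flat 50% gray patch halftones to a checkerboard pattern,
# preserving the average intensity (half the pixels are on).
gray = [[0.5] * 4 for _ in range(4)]
halftone = dither(gray)
```

The multilevel dithering the paper describes generalizes this idea by scaling the ordered-dither array across more than two output levels.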
Prophylactic vesical instillations with 0.2% chondroitin sulfate may reduce symptoms of acute radiation cystitis in patients undergoing radiotherapy for gynecological malignancies
We studied the feasibility and efficacy of intravesical instillations with 40 ml chondroitin sulfate 0.2% solution to prevent or reduce acute radiation cystitis in women undergoing pelvic radiotherapy. In a comparative pilot study in 20 patients, half of the patients received instillations. Instillations' bother was measured with visual analog scores (VAS, 0–10); bladder pain, with VAS; micturition-related quality of life, with the urogenital distress inventory (UDI). One of the instilled patients discontinued the instillations. The first median “acceptability”-VAS was 0 (range, 0–3); the last median was 1 (range, 0–3). “Bladder pain”-VAS peaked halfway in the treatment among controls (median, 1; range, 0–5) and after treatment in the instilled patients (median, 1; range, 1–3). UDI scores showed over time median follow-up scores at or above median baseline scores in controls and at or below median baseline scores in instilled patients. Intravesical instillations with chondroitin sulfate 0.2% solution may decrease the bother related to bladder symptoms and are well tolerated.
Risk Perception and Sexual Risk Behaviors Among HIV-Positive Men on Antiretroviral Therapy
There are reports of increased sexual risk behavior among people on highly active antiretroviral therapy (HAART) due to beliefs about risk of HIV transmission when on HAART. In a cross-sectional study (Seropositive Urban Men’s Study), we examined the relationship between risk perception and sexual risk behavior among sexually active, culturally diverse HIV positive men who have sex with men (N = 456). Less than twenty-five percent engaged in unprotected anal sex (either with an HIV negative, or unknown-status partner, or an HIV positive partner) within the past 3 months. Most men believed there was significant health risk (to partner or self) associated with unprotected sex when on HAART. There was no increased risk behavior associated with being on HAART, although the perception of negative health consequences, including HIV transmission, when on HAART was significantly lower for the relatively small subset of men who reported unprotected sex. Prevention strategies need to be tailored to address risk perception associated with HAART.
Endurance exercise training during haemodialysis improves strength, power, fatigability and physical performance in maintenance haemodialysis patients.
BACKGROUND Endurance training improves cardiopulmonary fitness in maintenance haemodialysis (MHD). Because many MHD patients are profoundly deconditioned and exhibit significant muscle weakness, endurance training may also improve muscle strength and physical performance in these patients. This study assessed this possibility. METHODS Twelve MHD patients performed incremental and constant work rate cycle exercise tests to determine peak work rate, VO(2)peak and endurance time (ET). Lower extremity strength, power and fatigability, stair-climbing time, 10 m walk time and a timed up-and-go were assessed before and after 8.6+/-2.3 weeks of thrice weekly, progressive, semi-recumbent, leg-cycle training during haemodialysis. Initial training intensity and duration targets were set at 50% peak work rate (WR) and 20 min, respectively, with a goal of progressing to 40 min at the highest WR tolerable. Non-exercising MHD patients and healthy volunteers with similar age, gender and race/ethnicity served as comparison groups. RESULTS None of the subjects tolerated the initial target intensity. Therefore, WR was reduced to 19+/-9 watts (30% of peak WR) for 19.9 min/session. At end of training, subjects cycled at 29+/-25 watts (46% initial peak WR; P = 0.01) for 38+/-8 min (P<0.001). VO(2)peak and ET improved 22% (P = 0.018) and 144% (P = 0.001), respectively. Quadriceps strength, power and fatigability improved 16% (P = 0.002), 15% (P = 0.115) and 43% (P = 0.029), respectively. The three measures of physical performance improved by 14-17% (P<0.031). Total work performed in training increased by 5.5+/-21.1 kJ/week (17%); a 165% increase during the study period. CONCLUSIONS Nine weeks of leg-cycling during haemodialysis in MHD patients improves not only cardiopulmonary fitness and endurance but also muscle strength, power, fatigability and physical function. These data underscore the value of endurance training in MHD.
Resolution enhanced SAR tomography: A nonparametric iterative adaptive approach
The ground-volume separation of radar scattering plays an important role in the analysis of forested scenes. For this purpose, the data covariance matrix of multi-polarimetric (MP) multi-baseline (MB) SAR surveys can be represented as a sum of two Kronecker products composed of the data covariance matrices and polarimetric signatures that correspond to the ground and canopy scattering mechanisms (SMs), respectively. The sum-of-Kronecker-products (SKP) decomposition allows different tomographic SAR focusing methods to be applied to the ground and canopy structural components separately; nevertheless, the main drawback of this technique is the rank deficiency of the resulting data covariance matrices, which restricts the use of adaptive beamforming techniques and requires more advanced methods, such as compressed sensing (CS). This paper proposes a modification of the nonparametric iterative adaptive approach for amplitude and phase estimation (IAA-APES), which, applied to MP-MB SAR data, serves as an alternative to the SKP-based techniques for ground-volume reconstruction; its main advantage is precisely that the SKP decomposition is not needed as a pre-processing step.
Cultural differences in self-recognition: the early development of autonomous and related selves?
Fifteen- to 18-month-old infants from three nationalities were observed interacting with their mothers and during two self-recognition tasks. Scottish interactions were characterized by distal contact, Zambian interactions by proximal contact, and Turkish interactions by a mixture of contact strategies. These culturally distinct experiences may scaffold different perspectives on self. In support, Scottish infants performed best in a task requiring recognition of the self in an individualistic context (mirror self-recognition), whereas Zambian infants performed best in a task requiring recognition of the self in a less individualistic context (body-as-obstacle task). Turkish infants performed similarly to Zambian infants on the body-as-obstacle task, but outperformed Zambians on the mirror self-recognition task. Verbal contact (a distal strategy) was positively related to mirror self-recognition and negatively related to passing the body-as-obstacle task. Directive action and speech (proximal strategies) were negatively related to mirror self-recognition. Self-awareness performance was best predicted by cultural context; autonomous settings predicted success in mirror self-recognition, and related settings predicted success in the body-as-obstacle task. These novel data substantiate the idea that cultural factors may play a role in the early expression of self-awareness. More broadly, the results highlight the importance of moving beyond the mark test, and designing culturally sensitive tests of self-awareness.
Comparison of the L10M consolidation regimen to an alternative regimen including escalating methotrexate/L-asparaginase for adult acute lymphoblastic leukemia: a Southwest Oncology Group Study
The effectiveness of intensive post-remission chemotherapy regimens for adult patients with acute lymphoblastic leukemia (ALL) is limited by both a high rate of disease recurrence and a substantial incidence of treatment toxicity. To evaluate a potentially more effective and less toxic approach, we conducted a multicenter phase III trial of consolidation therapies comparing the standard L10M regimen with one combining the brief, intensive L17M regimen and escalating methotrexate (MTX) and L-asparaginase (L-asp). Patients over age 15 with previously untreated ALL were eligible. Induction therapy included vincristine, prednisone, doxorubicin, cyclophosphamide and intrathecal methotrexate administered over 36 days. Patients who achieved complete remission (CR) were randomized to receive consolidation with either the L10M regimen or with DAT (daunomycin, cytosine arabinoside, 6-thioguanine) and escalating MTX and L-asp. The randomization was stratified by age, WBC and Ph chromosome status. Maintenance therapy was the same in both arms. Of 353 eligible patients, 218 (62%) achieved CR and 195 were randomized. The treatment arms did not differ significantly with respect to disease-free survival (DFS; P = 0.46) or overall survival (P = 0.39). Estimated DFS at 5 years was 32% (95% confidence interval (CI) 23–42%) in the L10M arm and 25% (95% CI 16–33%) in the DAT/MTX/L-asp arm. In each arm, 4% of patients died of toxicities (infection in all but one case). Infections and nausea/vomiting were somewhat more common in the L10M arm (occurring in 68% and 53% of patients respectively) than the DAT/MTX/L-asp arm (56% and 33%). The DAT/MTX/L-asp consolidation regimen was associated with some reduction in nonfatal toxicities, but no significant improvement in DFS, overall survival or non-relapse mortality when compared to the standard L10M regimen.
The Impact of Spectrum Sensing Frequency and Packet-Loading Scheme on Multimedia Transmission Over Cognitive Radio Networks
Recently, multimedia transmission over cognitive radio networks (CRNs) has become an important topic due to the CR's capability of using unoccupied spectrum for data transmission. Conventional work has focused on typical quality-of-service (QoS) factors such as radio link reliability, maximum tolerable communication delay, and spectral efficiency. However, no prior work considers the impact of CR spectrum sensing frequency and the packet-loading scheme on multimedia QoS. Here, the spectrum sensing frequency refers to how frequently a CR user senses for free spectrum. Continuous, frequent spectrum sensing can increase the medium access control (MAC) layer processing overhead and delay, causing some multimedia packets to miss the receiving deadline and thus decreasing the multimedia quality at the receiver side. In this research, we derive a mathematical model relating the spectrum sensing frequency to the number of remaining packets that need to be sent, as well as the relationship between the spectrum sensing frequency and the new-channel availability time during which the CRN user is allowed to use a new channel (after the current channel is re-occupied by primary users) to continue packet transmission. A smaller number of remaining packets and a larger new-channel availability time help to transmit multimedia packets within a delay deadline. Based on the above relationship model, we select an appropriate spectrum sensing frequency in the single-channel case, and study the trade-offs among the number of selected channels, the optimal spectrum sensing frequency, and the packet-loading scheme in the multi-channel case. The optimal spectrum sensing frequency and packet-loading solutions for the multi-channel case are obtained using a combination of the Hughes-Hartogs and discrete particle swarm optimization (DPSO) algorithms.
Our experiments of JPEG2000 packet-stream and H.264 video packet-stream transmission over CRN demonstrate the validity of our spectrum sensing frequency selection and packet-loading scheme.
Multi-unit differential auction-barter model for electronic marketplaces
The differential auction–barter (DAB) model augments the well-known double auction (DA) model with barter bids, so that besides the usual purchase and sale activities, bidders can also carry out direct bartering of items. The DAB model also provides a mechanism for making or receiving a differential money payment as part of the direct bartering of items, hence allowing the bartering of differently valued items. In this paper, we propose an extension to the DAB model, called the multi-unit differential auction–barter (MUDAB) model, for e-marketplaces in which multiple instances of commodities are exchanged. Furthermore, a more powerful and flexible bidding language is designed that allows bidders to express their complex preferences for purchase, sale, and exchange requests, and hence increases the allocative efficiency of the market compared to the DAB. The winner determination problem of the MUDAB model is formally defined, and a fast polynomial-time network-flow-based algorithm is proposed for solving it. The fast performance of the algorithm is demonstrated on various test cases containing up to one million bids. Thus, the proposed model can be used in large-scale online auctions without concern for the running time of the solver. © 2010 Elsevier B.V. All rights reserved.
Wound healing and catheter thrombosis after implantable venous access device placement in 266 breast cancers treated with bevacizumab therapy.
The aim of this study was to determine, in a population with metastatic breast cancer treated with bevacizumab therapy, the incidence of wound dehiscence after placement of an implantable venous access device (VAD) and to study the risk of catheter thrombosis. This study enrolled all VADs placed by 14 anesthetists between 1 January 2007 and 31 December 2009: 273 VADs in patients treated with bevacizumab therapy and 4196 VADs in patients not treated with bevacizumab therapy. In the bevacizumab therapy group, 13 cases of wound dehiscence occurred in 12 patients requiring removal of the VAD (4.76%). All cases of dehiscence occurred when bevacizumab therapy was initiated less than 7 days after VAD placement. Bevacizumab therapy was initiated less than 7 days after VAD placement in 150 cases (13 of 150: 8.6%). The risk of dehiscence was the same from 0 to 7 days. In parallel, the VAD wound dehiscence rate in patients not receiving bevacizumab therapy was eight of 4197 cases (0.19%) (Fisher's test significant, P<0.001). No risk factors of dehiscence were identified: anesthetists, learning curves, and irradiated patients. VAD thrombosis occurred in four patients (1.5%). In parallel, VAD thrombosis occurred in 51 of 4197 patients (1.2%) not receiving bevacizumab therapy (Fisher's test not significant; P=0.43). Bevacizumab therapy was permanently discontinued in five patients related to wound dehiscence and in one patient due to extensive skin necrosis. These data suggest the need to observe an interval of at least 7 days between VAD placement and initiation of bevacizumab therapy to avoid the risk of a wound dehiscence requiring chest wall port explant. The risk of VAD thrombosis does not require any particular primary prevention.
Unsupervised Multi-modal Neural Machine Translation
Unsupervised neural machine translation (UNMT) has recently achieved remarkable results [19] using only large monolingual corpora in each language. However, the uncertainty of associating target with source sentences makes UNMT theoretically an ill-posed problem. This work investigates the possibility of utilizing images for disambiguation to improve the performance of UNMT. Our assumption rests on the invariant property of images: descriptions of the same visual content in different languages should be approximately equivalent. We propose an unsupervised multi-modal machine translation (UMNMT) framework based on a language translation cycle consistency loss conditioned on the image, aiming to learn bidirectional multi-modal translation simultaneously. Through alternating training between multi-modal and uni-modal objectives, our inference model can translate with or without the image. On the widely used Multi30K dataset, our approach significantly outperforms text-only UNMT on the 2016 test set.
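The cycle-consistency idea can be illustrated with a toy round-trip check. The dictionary-lookup "translators" below are hypothetical stand-ins for the paper's neural encoder-decoders; in UMNMT both directions would additionally be conditioned on image features (the `image` argument here is just a placeholder).

```python
# Cycle-consistency check: translate src -> tgt -> src and measure reconstruction error.
def cycle_loss(sentence, forward, backward, image=None):
    round_trip = backward(forward(sentence, image), image)
    mismatches = sum(a != b for a, b in zip(sentence, round_trip))
    return mismatches + abs(len(sentence) - len(round_trip))

# Toy word-for-word "translators" standing in for neural models.
EN2DE = {"the": "die", "cat": "katze"}
DE2EN = {v: k for k, v in EN2DE.items()}
fwd = lambda toks, img=None: [EN2DE.get(t, t) for t in toks]
bwd = lambda toks, img=None: [DE2EN.get(t, t) for t in toks]

loss = cycle_loss(["the", "cat"], fwd, bwd)  # perfect round trip gives zero loss
```

In training, minimizing this loss pushes the two translators to be mutually consistent even without parallel data; the image acts as an anchor that disambiguates which of several consistent mappings is the right one.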
Ecological momentary assessment.
Assessment in clinical psychology typically relies on global retrospective self-reports collected at research or clinic visits, which are limited by recall bias and are not well suited to address how behavior changes over time and across contexts. Ecological momentary assessment (EMA) involves repeated sampling of subjects' current behaviors and experiences in real time, in subjects' natural environments. EMA aims to minimize recall bias, maximize ecological validity, and allow study of microprocesses that influence behavior in real-world contexts. EMA studies assess particular events in subjects' lives or assess subjects at periodic intervals, often by random time sampling, using technologies ranging from written diaries and telephones to electronic diaries and physiological sensors. We discuss the rationale for EMA, EMA designs, methodological and practical issues, and comparisons of EMA and recall data. EMA holds unique promise to advance the science and practice of clinical psychology by shedding light on the dynamics of behavior in real-world settings.
Tips and Outcomes of a New DIEP Flap Inset in Delayed Breast Reconstruction: The Dual-Plane Technique.
Purpose The dual-plane deep inferior epigastric perforator (DIEP) flap inset technique is herein presented, with tips for optimizing the aesthetic outcome in delayed autologous breast reconstruction after radiation therapy. Patients and Methods A total of 42 women who underwent microsurgical reconstruction with a free DIEP flap participated in this prospective study. The flap was inset in a dual plane, lying behind the pectoralis major at the upper pole and in front of the muscle at the lower pole of the reconstructed breast. Results The dual-plane flap inset produced a natural transition between native and reconstructed tissues, excellent scar quality, an optimal breast outline, and good overall breast appearance. Moreover, dual-plane reconstruction was associated with consistently high patient satisfaction, even without wearing a brassiere, owing to fullness of the upper pole and minimal ptosis over time. Conclusion The dual-plane DIEP flap inset yields optimal scar quality, breast shape, and upper-pole fullness, resulting in high patient satisfaction.
The security and privacy of smart vehicles
Road safety, traffic management, and driver convenience continue to improve, in large part thanks to appropriate usage of information technology. But this evolution has deep implications for security and privacy, which the research community has overlooked so far.
Space-efficient approximate Voronoi diagrams
Given a set $S$ of $n$ points in $\mathbb{R}^d$, a {\em $(t,\epsilon)$-approximate Voronoi diagram (AVD)} is a partition of space into constant-complexity cells, where each cell $c$ is associated with $t$ representative points of $S$, such that for any point in $c$, one of the associated representatives approximates the nearest neighbor to within a factor of $(1+\epsilon)$. Like the Voronoi diagram, this structure defines a spatial subdivision. It also has the desirable properties of being easy to construct and providing a simple and practical data structure for answering approximate nearest neighbor queries. The goal is to minimize the number and complexity of the cells in the AVD.

We assume that the dimension $d$ is fixed. Given a real parameter $\gamma$, where $2 \le \gamma \le 1/\epsilon$, we show that it is possible to construct a $(t,\epsilon)$-AVD consisting of \[O(n \epsilon^{\frac{d-1}{2}} \gamma^{\frac{3(d-1)}{2}} \log \gamma)\] cells for $t = O(1/(\epsilon \gamma)^{(d-1)/2})$. This yields a data structure of $O(n \gamma^{d-1} \log \gamma)$ space (including the space for representatives) that can answer $\epsilon$-NN queries in time $O(\log(n \gamma) + 1/(\epsilon \gamma)^{(d-1)/2})$. (Hidden constants may depend exponentially on $d$, but do not depend on $\epsilon$ or $\gamma$.)

In the case $\gamma = 1/\epsilon$, we show that the additional $\log \gamma$ factor in space can be avoided, so we have a data structure that answers $\epsilon$-approximate nearest neighbor queries in time $O(\log (n/\epsilon))$ with space $O(n/\epsilon^{d-1})$, improving upon the best known space bounds for this query time. In the case $\gamma = 2$, we have a data structure that can answer approximate nearest neighbor queries in $O(\log n + 1/\epsilon^{(d-1)/2})$ time using optimal $O(n)$ space. This dramatically improves the previous best space bound for this query time by a factor of $O(1/\epsilon^{(d-1)/2})$.

We also provide lower bounds on the worst-case number of cells, assuming that cells are axis-aligned rectangles of bounded aspect ratio. In the important extreme cases $\gamma \in \{2, 1/\epsilon\}$, our lower bounds match our upper bounds asymptotically. For intermediate values of $\gamma$, we show that our upper bounds are within a factor of $O((1/\epsilon)^{(d-1)/2}\log \gamma)$ of the lower bound.
A cluster-randomized trial of provider-initiated (opt-out) HIV counseling and testing of tuberculosis patients in South Africa.
OBJECTIVE To determine whether implementation of provider-initiated human immunodeficiency virus (HIV) counseling would increase the proportion of tuberculosis (TB) patients who received HIV counseling and testing. DESIGN Cluster-randomized trial with clinic as the unit of randomization. SETTING Twenty medium-sized primary care TB clinics in the Nelson Mandela Metropolitan Municipality, Port Elizabeth, Eastern Cape Province, South Africa. SUBJECTS A total of 754 adults (18 years and older) newly registered as TB patients in the 20 study clinics. INTERVENTION Implementation of provider-initiated HIV counseling and testing. MAIN OUTCOME MEASURES Primary: percentage of TB patients HIV counseled and tested. Secondary: percentage of patients with a positive HIV test, and percentage of those who received cotrimoxazole and were referred for HIV care. RESULTS A total of 754 adults newly registered as TB patients were enrolled. In clinics randomly assigned to implement provider-initiated HIV counseling and testing, 20.7% (73/352) of patients were counseled versus 7.7% (31/402) in the control clinics (P = 0.011), and 20.2% (n = 71) versus 6.5% (n = 26) underwent HIV testing (P = 0.009). Of those patients counseled, 97% in the intervention clinics accepted testing versus 79% in control clinics (P = 0.12). The proportion of patients identified as HIV infected in intervention clinics was 8.5% versus 2.5% in control clinics (P = 0.044). Fewer than 40% of patients with a positive HIV test were prescribed cotrimoxazole or referred for HIV care in either study arm. CONCLUSIONS Provider-initiated HIV counseling significantly increased the proportion of adult TB patients who received HIV counseling and testing, but the magnitude of the effect was small. Additional interventions to optimize HIV testing for TB patients urgently need to be evaluated.
A mobile health application for falls detection and biofeedback monitoring
A mobile health application with biofeedback based on body sensors is useful for collecting data for remote patient monitoring. Such a system offers comfort, mobility, and efficiency throughout the data collection process, providing greater confidence and operability. Falls represent a high risk for debilitated elderly people and can be detected with the accelerometer present in most available mobile devices. To complement this, more accurate data for patient monitoring can be obtained from body sensors attached to the human body (such as electrocardiogram, electromyography, blood pressure, electrodermal activity, and temperature). This paper therefore proposes a mobile solution for fall detection and biofeedback monitoring. The proposed system collects sensed data from the body and forwards it to a smartphone or tablet through Bluetooth. Mobile devices display the information graphically to users. All data acquisition is performed in real time. The proposed system is evaluated, demonstrated, and validated through a prototype and is ready for use.
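A minimal sketch of accelerometer-based fall detection of the kind mentioned above, assuming a simple free-fall-then-impact heuristic; the thresholds and window size below are illustrative choices, not the paper's actual parameters.

```python
import math

# Threshold-based fall detector over raw accelerometer samples (ax, ay, az) in m/s^2:
# flag a fall when a near-free-fall reading is followed shortly by a high-g impact.
G = 9.81

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=10):
    mags = [math.sqrt(x * x + y * y + z * z) / G for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g and any(m2 > impact_g for m2 in mags[i + 1:i + 1 + window]):
            return True
    return False

standing = [(0.0, 0.0, 9.81)] * 5                                 # ~1 g at rest
fall = standing + [(0.0, 0.0, 1.0), (0.0, 0.0, 30.0)] + standing  # drop, then impact
```

A production detector would also suppress false positives from sitting down abruptly, typically by checking posture after the impact.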
StarCraft Micromanagement With Reinforcement Learning and Curriculum Transfer Learning
Real-time strategy games have been an important field of game artificial intelligence in recent years. This paper presents a reinforcement learning and curriculum transfer learning method to control multiple units in StarCraft micromanagement. We define an efficient state representation, which reduces the complexity caused by the large state space of the game environment. Then, a parameter-sharing multi-agent gradient-descent Sarsa($\lambda$) algorithm is proposed to train the units. The learning policy is shared among our units to encourage cooperative behaviors. We use a neural network as a function approximator to estimate the action–value function, and propose a reward function that helps units balance their movement and attack actions. In addition, a transfer learning method is used to extend our model to more difficult scenarios, which accelerates the training process and improves learning performance. In small-scale scenarios, our units successfully learn to combat and defeat the built-in AI with 100% win rates. In large-scale scenarios, the curriculum transfer learning method is used to progressively train a group of units, and it shows superior performance over several baseline methods in target scenarios. With reinforcement learning and curriculum transfer learning, our units are able to learn appropriate strategies in StarCraft micromanagement scenarios.
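The core update rule, Sarsa(λ) with eligibility traces, can be sketched on a toy one-dimensional chain. The paper's agents share parameters and use a neural network as the function approximator; this self-contained tabular version only illustrates the algorithm itself, with made-up hyperparameters.

```python
import random
from collections import defaultdict

# Tabular Sarsa(lambda) with eligibility traces on a toy 1-D chain:
# states 0..5, actions move right/left, reward 1 for reaching state 5.
def sarsa_lambda(episodes=200, n=5, alpha=0.5, gamma=0.9, lam=0.8, eps=0.1):
    Q = defaultdict(float)      # Q[(state, action)], zero-initialized
    actions = [1, -1]           # move right / left along the chain
    rng = random.Random(0)

    def policy(s):
        if rng.random() < eps:                        # epsilon-greedy exploration
            return rng.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])  # greedy action

    for _ in range(episodes):
        E = defaultdict(float)  # eligibility traces, reset each episode
        s, a = 0, policy(0)
        while s < n:
            s2 = max(0, s + a)
            r = 1.0 if s2 == n else 0.0
            a2 = policy(s2) if s2 < n else 0
            delta = r + gamma * Q[(s2, a2)] - Q[(s, a)]
            E[(s, a)] += 1.0
            for key in list(E):                       # backward credit assignment
                Q[key] += alpha * delta * E[key]
                E[key] *= gamma * lam
            s, a = s2, a2
    return Q

Q = sarsa_lambda()
```

After training, moving right from the penultimate state dominates, with `Q[(4, 1)]` near the terminal reward of 1; the traces spread each temporal-difference error back along the recently visited state-action pairs, which is what makes Sarsa(λ) faster than one-step Sarsa.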
ECOLOGICAL TAX REFORM AND EMISSIONS TRADING - CAN THEY WORK TOGETHER?
Overview Industrialised countries are using more natural resources than the earth can provide in the long run. Fossil energy sources are scarce and finite. At the same time, the emission of greenhouse gases into the atmosphere, causing climate change, is increasing. A promising way of tackling these global challenges is the implementation of economic instruments that increase the price of conventional energy and give incentives for economising and rationalising the use of energy and switching over to renewable energies. One important such instrument is the German Eco Tax in the context of the Ecological Tax Reform (ETR), introduced in 1999 and further developed until 2003. The ETR increases taxes on energy and at the same time lowers non-wage labour costs in order to stimulate employment. The European Emissions Trading Scheme (EU ETS), launched at the beginning of 2005, introduced another important economic instrument to tackle climate change. Energy-intensive industry and utilities must hold allowances for CO2 emissions. To comply with reduction targets, they must either reduce their own emissions or buy allowances on the market. As both instruments address industry, a discussion emerged on how this ‘double burden’ for industry could be removed. Suggestions ranged from generous exemptions for industry in either the Ecological Tax Reform or the Emissions Trading Scheme to a complete abolition of the Eco Tax. This paper will show that the Ecological Tax Reform and the Emissions Trading Scheme do not impose a double burden on industry, that they are complementary instruments, and that both can work purposefully side by side.
BG9719 (CVT-124), an A1 adenosine receptor antagonist, protects against the decline in renal function observed with diuretic therapy.
BACKGROUND Adenosine may adversely affect renal function via its effects on renal arterioles and tubuloglomerular feedback, but the effects of adenosine blockade in humans receiving furosemide and ACE inhibitors are unknown. METHODS AND RESULTS This was a randomized, double-blind, ascending-dose, crossover study evaluating 3 doses of BG9719 in 63 patients with congestive heart failure. Patients received placebo or 1 of 3 doses of BG9719 on 1 day and the same medication plus furosemide on a separate day. Renal function and electrolyte and water excretion were assessed. BG9719 alone caused an increase in urine output and sodium excretion (P<0.05). Although administration of furosemide alone caused a large diuresis, addition of BG9719 to furosemide further increased diuresis, which was significant at the 0.75-microg/mL concentration. BG9719 alone improved glomerular filtration rate (GFR) at the 2 lower doses. Furosemide alone caused a decline in GFR. When BG9719 was added to furosemide, however, creatinine clearance remained at baseline at the 2 lower doses. CONCLUSIONS In patients with congestive heart failure on standard therapy, including ACE inhibitors, BG9719 increased both urine output and GFR. In these same patients, furosemide increased urine output at the expense of decreased GFR. When BG9719 was given in addition to furosemide, urine volume increased further and there was no deterioration in GFR. A1 adenosine antagonism might preserve renal function while simultaneously promoting natriuresis during treatment for heart failure.
Joint Cell Nuclei Detection and Segmentation in Microscopy Images Using 3D Convolutional Networks
We propose a 3D convolutional neural network to simultaneously segment and detect cell nuclei in confocal microscopy images. Mirroring the co-dependency of these tasks, our proposed model consists of two serial components: the first part computes a segmentation of cell bodies, while the second module identifies the centers of these cells. Our model is trained end-to-end from scratch on a mouse parotid salivary gland stem cell nuclei dataset comprising 107 image stacks from three independent cell preparations, each containing several hundred individual cell nuclei in 3D. In our experiments, we conduct a thorough evaluation of both detection accuracy and segmentation quality, on two different datasets. The results show that the proposed method provides significantly improved detection and segmentation accuracy compared to state-of-the-art and benchmark algorithms. Finally, we use a previously described test-time drop-out strategy to obtain uncertainty estimates on our predictions and validate these estimates by demonstrating that they are strongly correlated with accuracy.
Brand equity estimation model
Although a consensus exists among marketing scholars and practitioners about the importance of brand equity, a uniformly accepted estimation model has yet to emerge. Most consumer-based brand equity (CBBE) models do not offer a monetary estimation of brand equity while many financial-based brand equity (FBBE) models do not consider consumers' perceptions. In this paper, the authors develop a model that combines these two approaches: CBBE and FBBE. The former considers consumers' purchase intentions and brand-switching probabilities using Markov matrices, while the latter calculates the monetary value of a brand using net present value of future generated cash flows. Additionally, the model enables the comparison of brand performance in relation to its competitors and the estimation of financial returns of marketing actions, thus distinguishing between the contributions of the different drivers of brand equity. Marketing professionals still face the challenge of estimating the value of a brand. As Keller (1998) points out, various forms of estimation with different measurement purposes are available. Consequently, researchers propose many different approaches for capturing brand equity (Shankar, Azar, & Fuller, 2008). However, research in the marketing field has not yet come up with a single, uniformly accepted theoretical basis for brand valuation (Raggio & Leone, 2007). Thus, although the corporate world recognizes the estimation of brand equity as an important marketing activity, the estimation of brand equity (Madden, Fehle, & Fournier, 2006) and the quantification of the returns on marketing activities in financial terms continues to be a major challenge for marketing and brand managers (Mizik & Jacobson, 2008).
Adoption of a new measurement of brand equity results from the informational requirements of the following groups of people: (a) marketers, who seek to increase their organizational credibility by demonstrating the value of branding in clear financial terms (Madden et al., 2006), in order to obtain budgets for their departments and to better manage their brands; (b) scholars, who are under pressure to supply theoretical and methodological support to marketers in order to better measure brand equity, evaluate their brand performance and estimate its investment returns; (c) accountants, who set the price of a brand to be sold or purchased, and include a brand in the company's balance sheet (Feldwick, 1996), especially in mergers and acquisitions; and (d) shareholders and financial analysts, who verify the financial performance and the association between brand equity and shareholder …
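The two halves of the model described above can be sketched together: a brand-switching Markov matrix evolves market shares (the CBBE side), and the resulting share feeds a net-present-value calculation (the FBBE side). All transition probabilities, margins, and market sizes below are invented for illustration.

```python
# CBBE side: a brand-switching Markov matrix evolves market shares toward steady state.
def evolve_shares(shares, P, steps=50):
    for _ in range(steps):
        shares = [sum(shares[i] * P[i][j] for i in range(len(P)))
                  for j in range(len(P))]
    return shares

# FBBE side: discount the cash flows attributable to the brand's market share.
def brand_npv(share, market_size, margin, rate=0.1, years=5):
    cash = share * market_size * margin
    return sum(cash / (1 + rate) ** t for t in range(1, years + 1))

P = [[0.9, 0.1],   # 90% of brand A's buyers repurchase A, 10% switch to B
     [0.2, 0.8]]
shares = evolve_shares([0.5, 0.5], P)   # converges to the steady state [2/3, 1/3]
value_a = brand_npv(shares[0], market_size=1_000_000, margin=2.0)
```

Repeated application of a row-stochastic matrix converges to the steady-state share vector (here, solving pi = pi*P gives brand A two thirds of the market), which is then valued as a discounted annuity.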
Toward strategic management of shale gas development: Regional, collective impacts on water resources
Shale gas resources are relatively plentiful in the United States and in many countries and regions around the world. Development of these resources is moving ahead amidst concerns regarding environmental risks, especially to water resources. The complex nature of this distributed extractive industry, combined with limited impact data, makes establishing possible effects and designing appropriate regulatory responses challenging. Here we move beyond the project-level impact assessment approach and use regional collective impact analysis to assess a subset of potential water management policy options. Specifically, we examine hypothetical water withdrawals for hydraulic fracturing and the subsequent treatment of wastewater that could be returned or produced from future active shale gas wells in the currently undeveloped Susquehanna River Basin region of New York. Our results indicate that proposed water withdrawal management strategies may not provide greater environmental protection than simpler approaches. We suggest a strategy that maximizes protectiveness while reducing regulatory complexity. For wastewater treatment, we show that the Susquehanna River Basin region of New York State has limited capacity to treat wastewater using extant municipal infrastructure. We suggest that modest private investment in industrial treatment facilities can achieve treatment goals without putting public systems at risk. We conclude that regulation of deterministic water resource impacts of shale gas extraction should be approached on a regional, collective basis, and suggest that water resource management objectives can be met by balancing the need for development with environmental considerations and regulatory constraints. © 2011 Elsevier Ltd. All rights reserved.
Canada, and more recently the United Kingdom, have moved forward with development, while France and some regional governments (Quebec, Canada) have placed temporary or permanent moratoria on the high-volume hydraulic fracturing process, citing concerns with respect to environmental safety, public health, and consistency with current policies. Shale gas development entails a range of activities that have various environmental impacts. More comprehensive discussions of these activities and the risks associated with various impacts have been presented from various perspectives (e.g. Christopherson, 2011; Kargbo et al., 2010; NYSDEC, 2011; Zoback et al., 2010). Major environmental concerns generally revolve around several key activities associated with shale gas development. Water resources, their use, and their potential contamination as a result of a wide range of development activities figure prominently among those concerns. Other issues involve potential noise, visual and air quality impacts associated with vehicle traffic, well pad construction and land clearing activities, and use of diesel fuel for on-site compressors and equipment. Activities associated with establishment and construction of well pads and associated service roads and delivery pipeline networks have the potential to disrupt land use patterns, disturb sensitive habitat, and introduce invasive species. Trucking demands related to transportation of materials, water, and waste lead to concerns over road use, road safety, and road maintenance. Still other impacts are possible that are related to community character and the "boom and bust" cycle associated with extractive development. Our focus here, however, is on water resources.
Multiple activities associated with shale gas development have the potential to impact water resources and/or water-related infrastructure (e.g. Arthur et al., 2010; Soeder and Kappel, 2009; Veil, 2010). Developing shale gas requires a range of typical construction-associated activities. To establish well pads, soil is often removed and sometimes stored. Material and chemical storage areas are established. Roads, parking, and vehicle maintenance areas must be constructed. All of these activities lead to concerns over the risk of spills and leaks that could impact surface and groundwater quality, as well as erosion and water contamination resulting from storm events. Developing shale gas also involves more unique activities such as vertical drilling, often through potable groundwater supplies; and horizontal drilling through the shale formation itself. During these operations, millions of gallons of water need to be acquired and transported to the well pad, mixed with a number of chemical additives, and pumped under high pressure into the well in order to fracture the shale (high-volume hydraulic fracturing). This water then interacts with native constituents present at depth in the shale geology. When pressure is taken off the well, some of this water returns to the surface relatively quickly (flowback water), where it is sometimes treated and reused for hydraulic fracturing of other gas wells. Flowback water that is not reused, as well as water that is returned to the surface over the life of the gas well (produced water), must be stored and then treated and/or disposed of. Improper or poorly managed drilling, water withdrawal, or water treatment could potentially lead to water quantity and quality impacts. Because of concerns for potential environmental impacts such as those discussed above, coupled with the broad occurrence of shale gas throughout the world, managing and regulating the development of shale gas resources is a growing global interest and challenge.
In the United States, shale gas resources are currently being extracted in Pennsylvania, Texas, and several other states, despite concerns from some stakeholders that it carries understudied or unacceptable environmental risks (USEPA, 2011a). While rapid development has occurred, regulation of this growing industry has evolved more slowly, and has taken many forms. The federal government provides some oversight through the Clean Water Act, Safe Drinking Water Act, Clean Air Act, and National Environmental Policy Act. The implications of this legislation are discussed at length elsewhere (GWPC and AC, 2009; Tiemann and Vann, 2011). However, the role of the federal government in the US has so far been limited, although recent efforts by federal agencies such as the EPA could mean that this might change (USEPA, 2011a). For the most part states, and in some cases regional authorities, have taken the lead role in regulation of shale gas development in the US (GWPC, 2009). Within the US, states regulate shale gas development and its impact on water resources in different ways (GWPC, 2009). In Pennsylvania (PA), for example, drilling for gas in the Marcellus Shale has increased dramatically in the last several years and regulatory approaches have been reactive in nature, administered to address and mitigate recognized environmental issues after they have occurred. Over time, regulations have become increasingly stringent, and regulatory agencies have moved toward establishment of publically accessible databases which track and compile water resource related information (PADEP, 2010). In New York (NY), parts of which are underlain by the Marcellus Shale, policy makers have chosen not to permit high-volume hydraulic fracturing activities needed to exploit this resource while the NY State Department of Conservation (NYSDEC) reviews possible environmental impacts and proposes regulations to mitigate those impacts (NYSDEC, 2011). 
While NY is the only state thus far to essentially prohibit shale gas development until an environmental assessment is completed, the focus of this assessment has been the subject of debate. The preliminary environmental impact assessment (dSGEIS) developed by the NYSDEC was undertaken in response to a state law (State Environmental Quality Review Act) that directs the agency to conduct a comprehensive review of all the potential environmental impacts of new development activities. The dSGEIS contains a description of the activities associated with high-volume hydraulic fracturing and shale gas development in general, the potential environmental impacts associated with those activities, and proposed measures and regulations that have been identified to mitigate those impacts. It focuses largely on the project level and pays considerable attention to the potential impacts of an individual shale gas well on its immediate surroundings. This is valuable, especially to the state regulatory community who would be in charge of overseeing the day to day operations of developers. A major criticism of the dSGEIS, however, has been the lack of attention paid to cumulative impacts, which can be briefly defined as impacts resulting from interactions of multiple activities, and/or the collective impact of many similar activities over time and space. Although the dSGEIS acknowledges these impacts, it does not include a full analysis of cumulative environmental impacts at the regional scale, nor does it strategically assess policy alternatives and their potential effect on regional environmental impacts. For complex developments such as shale gas, environmental assessment approaches that explicitly analyze cumulative impacts from a state/regional perspective have been shown to be essential (e.g. CEQ, 1997; Kay et al., 2010; Zhao et al., 2009).
Moreover, it is recognized that environmental assessments that combine the "project-focused" approach of the dSGEIS with more strategic "planning-based" approaches can be effective for minimizing negative cumulative impacts.
Six- and 12-month follow-up of an interdisciplinary fibromyalgia treatment programme: results of a randomised trial.
OBJECTIVES To assess the efficacy of a 6-week interdisciplinary treatment that combines coordinated psychological, medical, educational, and physiotherapeutic components (PSYMEPHY) over time compared to standard pharmacologic care. METHODS Randomised controlled trial with follow-up at 6 months for the PSYMEPHY and control groups and 12 months for the PSYMEPHY group. Participants were 153 outpatients with FM recruited from a hospital pain management unit. Patients randomly allocated to the control group (CG) received standard pharmacologic therapy. The experimental group (EG) received an interdisciplinary treatment (12 sessions). The main outcome was changes in quality of life, and secondary outcomes were pain, physical function, anxiety, depression, use of pain coping strategies, and satisfaction with treatment as measured by the Fibromyalgia Impact Questionnaire, the Hospital Anxiety and Depression Scale, the Coping with Chronic Pain Questionnaire, and a question regarding satisfaction with the treatment. RESULTS Six months after the intervention, significant improvements in quality of life (p=0.04), physical function (p=0.01), and pain (p=0.03) were seen in the PSYMEPHY group (n=54) compared with controls (n=56). Patients receiving the intervention reported greater satisfaction with treatment. Twelve months after the intervention, patients in the PSYMEPHY group (n=58) maintained statistically significant improvements in quality of life, physical functioning, pain, and symptoms of anxiety and depression, and were less likely to use maladaptive passive coping strategies compared to baseline. CONCLUSIONS An interdisciplinary treatment for FM was associated with improvements in quality of life, pain, physical function, anxiety and depression, and pain coping strategies up to 12 months after the intervention.
Data Mining and Analytics in the Process Industry: The Role of Machine Learning
Data mining and analytics have played an important role in knowledge discovery and decision support in the process industry over the past several decades. As the computational engine of data mining and analytics, machine learning provides the basic tools for information extraction, data pattern recognition, and prediction. From the perspective of machine learning, this paper reviews existing data mining and analytics applications in the process industry over the past several decades. The state of the art is surveyed through eight unsupervised learning and ten supervised learning algorithms, as well as the application status of semi-supervised learning algorithms. Several perspectives for future research on data mining and analytics in the process industry are highlighted and discussed.
A Knowledge Base for Automatic Feature Recognition from Point Clouds in an Urban Scene
LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks, even more so when the data are incomplete (the occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising an ontology and semantic rules, aimed at automatic feature recognition from point clouds in support of 3D modeling. First, several ontology modules are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. The knowledge base then contains the different concepts, their properties and their relations, together with constraints and semantic rules. Next, instances and their specific relations, which form an urban scene, are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process extracts semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach in recognizing different semantic features of buildings from LiDAR point clouds.
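The rule-based reasoning step can be sketched as a tiny forward-chaining loop: semantic rules fire over facts extracted from point-cloud geometry until no new facts are derived. The predicates and rules below are invented for illustration and are not the paper's actual ontology.

```python
# Forward chaining: apply if-then semantic rules until no new facts are derived.
def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Facts that a geometric pipeline might extract from a point cloud.
facts = {"vertical(wall1)", "touches_ground(wall1)",
         "above(roof1,wall1)", "sloped(roof1)"}
rules = [
    ({"vertical(wall1)", "touches_ground(wall1)"}, "is_wall(wall1)"),
    ({"sloped(roof1)", "above(roof1,wall1)", "is_wall(wall1)"}, "is_roof(roof1)"),
    ({"is_wall(wall1)", "is_roof(roof1)"}, "is_building(b1)"),
]
derived = forward_chain(set(facts), rules)
```

The chaining matters: identifying the roof requires the wall to have been recognized first, and the building is inferred only once both component features are established.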
Sexist Games=Sexist Gamers? A Longitudinal Study on the Relationship Between Video Game Use and Sexist Attitudes
From the oversexualized characters in fighting games, such as Dead or Alive or Ninja Gaiden, to the overuse of the damsel-in-distress trope in popular titles, such as the Super Mario series, the under- and misrepresentation of females in video games has been well documented in several content analyses. Cultivation theory suggests that long-term exposure to media content can shift perceptions of social realities toward the representations in the media and, in turn, affect one's beliefs and attitudes. Previous studies on video games and cultivation have often been cross-sectional or experimental, and the limited longitudinal work in this area has only considered time intervals of up to 1 month. Additionally, previous work in this area has focused on the effects of violent content and relied on self-selected or convenience samples composed mostly of adolescents or college students. Employing a 3-year longitudinal design, the present study assessed the relationship between video game use and sexist attitudes, using data from a representative sample of German players aged 14 and older (N = 824). Controlling for age and education, sexist attitudes, measured with a brief scale assessing beliefs about gender roles in society, were not related to the amount of daily video game use or to preference for specific genres, for either female or male players. Implications for research on sexism in video games and cultivation effects of video games in general are discussed.
Back to the Blocks World: Learning New Actions through Situated Human-Robot Dialogue
This paper describes an approach for a robotic arm to learn new actions through dialogue in a simplified blocks world. In particular, we have developed a three-tier action knowledge representation that, on one hand, supports the connection between symbolic representations of language and continuous sensorimotor representations of the robot and, on the other hand, supports the application of existing planning algorithms to address novel situations. Our empirical studies have shown that, based on this representation, the robot was able to learn and execute basic actions in the blocks world. When a human is engaged in a dialogue to teach the robot new actions, step-by-step instructions lead to better learning performance than one-shot instructions.
First impressions: making up your mind after a 100-ms exposure to a face.
People often draw trait inferences from the facial appearance of other people. We investigated the minimal conditions under which people make such inferences. In five experiments, each focusing on a specific trait judgment, we manipulated the exposure time of unfamiliar faces. Judgments made after a 100-ms exposure correlated highly with judgments made in the absence of time constraints, suggesting that this exposure time was sufficient for participants to form an impression. In fact, for all judgments (attractiveness, likeability, trustworthiness, competence, and aggressiveness), increased exposure time did not significantly increase the correlations. When exposure time increased from 100 to 500 ms, participants' judgments became more negative, response times for judgments decreased, and confidence in judgments increased. When exposure time increased from 500 to 1,000 ms, trait judgments and response times did not change significantly (with one exception), but confidence increased for some of the judgments; this result suggests that additional time may simply boost confidence in judgments. However, increased exposure time led to more differentiated person impressions.
A Robot Finger Design Using a Dual-Mode Twisting Mechanism to Achieve High-Speed Motion and Large Grasping Force
A dual-mode robot finger is proposed to achieve high-speed motion and a large grasping force with a single motor. The robot finger has two actuator modes: a speed mode and a force mode. Based on a geometric analysis of each mode, the main design parameters of the proposed robot finger are derived, and their effectiveness is verified by simulations. In addition, the validity of the proposed approach is demonstrated through experiments with a prototype of the robot finger.
Combining pixel domain and compressed domain index for sketch based image retrieval
Sketch-based image retrieval (SBIR) lets one express a precise visual query with simple and widespread means. In SBIR approaches, the challenge consists in representing the image dataset features in a structure that allows one to efficiently and effectively retrieve images in a scalable system. We put forward a sketch-based image retrieval solution where sketches and natural image contours are represented and compared in both the wavelet compressed domain and the pixel domain. The query is performed efficiently in the wavelet domain, while effectiveness refinements are achieved in the pixel domain by verifying the spatial consistency between the sketch strokes and the natural image contours. We also present an efficient scheme of inverted lists for sketch-based image retrieval using the wavelet compressed domain. Our indexing proposal has two main advantages: the amount of data needed to compute the query is smaller than in the traditional method, and it achieves better effectiveness.
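The two-stage scheme described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: contours are assumed to have been reduced to fixed-length 1D signatures by some preprocessing step, a few coarse Haar wavelet coefficients serve as the compressed-domain index, and the shortlist is re-ranked with a full pixel-domain distance.

```python
def haar_1d(signal):
    """One full 1D Haar decomposition; input length must be a power of two."""
    coeffs = list(signal)
    n = len(coeffs)
    while n > 1:
        half = n // 2
        averages = [(coeffs[2 * i] + coeffs[2 * i + 1]) / 2 for i in range(half)]
        details = [(coeffs[2 * i] - coeffs[2 * i + 1]) / 2 for i in range(half)]
        coeffs[:n] = averages + details
        n = half
    return coeffs

def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def retrieve(query_sig, database, k_coarse=4, shortlist=3):
    """Stage 1: rank by the first k_coarse Haar coefficients (cheap,
    compressed domain). Stage 2: re-rank the shortlist by the full
    pixel-domain distance (spatial-consistency refinement)."""
    q_coarse = haar_1d(query_sig)[:k_coarse]
    coarse = sorted(database,
                    key=lambda sig: l2(haar_1d(sig)[:k_coarse], q_coarse))
    return sorted(coarse[:shortlist], key=lambda sig: l2(sig, query_sig))
```

Because most candidates are rejected on a handful of coefficients, the expensive full-signature comparison only runs on the shortlist, which is the efficiency/effectiveness trade-off the abstract describes.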
Extrapyramidal motor side-effects of first- and second-generation antipsychotic drugs.
BACKGROUND Second-generation antipsychotics have been thought to cause fewer extrapyramidal side-effects (EPS) than first-generation antipsychotics, but recent pragmatic trials have indicated equivalence. AIMS To determine whether second-generation antipsychotics had better outcomes in terms of EPS than first-generation drugs. METHOD We conducted an intention-to-treat, secondary analysis of data from an earlier randomised controlled trial (n = 227). A clinically significant difference was defined as double or half the symptoms in groups prescribed first- v. second-generation antipsychotics, represented by odds ratios greater than 2.0 (indicating advantage for first-generation drugs) or less than 0.5 (indicating advantage for the newer drugs). We also examined EPS in terms of symptoms emergent at 12 weeks and 52 weeks, and symptoms that had resolved at these time points. RESULTS At baseline those randomised to the first-generation antipsychotic group (n = 118) had similar EPS to the second-generation group (n = 109). Indications of resolved Parkinsonism (OR = 0.5) and akathisia (OR = 0.4) and increased tardive dyskinesia (OR = 2.2) in the second-generation drug group at 12 weeks were not statistically significant, and the effects were not present by 52 weeks. Patients in the second-generation group were dramatically (30-fold) less likely to be prescribed adjunctive anticholinergic medication, despite equivalence in terms of EPS. CONCLUSIONS The expected improvement in EPS profiles for participants randomised to second-generation drugs was not found; the prognosis over 1 year of those in the first-generation arm was no worse in these terms. The place of careful prescription of first-generation drugs in contemporary practice remains to be defined, potentially improving clinical effectiveness and avoiding life-shortening metabolic disturbances in some patients currently treated with the narrow range of second-generation antipsychotics used in routine practice. This has educational implications because a generation of psychiatrists now has little or no experience with first-generation antipsychotic prescription.
Deliver Us from Evil: An Interpretation of American Prohibition
This book shows the profound impact of the prohibition movement on political history before 1916 and analyzes its ambiguous triumph in the 1920s. In doing so, it reveals the relationship between liquor control and the unique moral history of the American family. Here is social history at its best, wiping away the myths and legends of the past.
Distributionally Robust Stochastic Optimization with Wasserstein Distance
Distributionally robust stochastic optimization (DRSO) is an approach to optimization under uncertainty in which, instead of assuming that there is an underlying probability distribution that is known exactly, one hedges against a chosen set of distributions. In this paper we first point out that the set of distributions should be chosen to be appropriate for the application at hand, and that some of the choices that have been popular until recently are, for many applications, not good choices. We consider sets of distributions that are within a chosen Wasserstein distance from a nominal distribution, for example an empirical distribution resulting from available data. The paper argues that such a choice of sets has two advantages: (1) the resulting distributions hedged against are more reasonable than those resulting from other popular choices of sets, and (2) the problem of determining the worst-case expectation over the resulting set of distributions has desirable tractability properties. We derive a dual reformulation of the corresponding DRSO problem and construct approximate worst-case distributions (or an exact worst-case distribution if it exists) explicitly via the first-order optimality conditions of the dual problem. Our contributions are five-fold. (i) We identify necessary and sufficient conditions for the existence of a worst-case distribution, which are naturally related to the growth rate of the objective function. (ii) We show that the worst-case distributions resulting from an appropriate Wasserstein distance have a concise structure and a clear interpretation. (iii) Using this structure, we show that data-driven DRSO problems can be approximated to any accuracy by robust optimization problems, and thereby many DRSO problems become tractable by using tools from robust optimization. (iv) To the best of our knowledge, our proof of strong duality is the first constructive proof for DRSO problems, and we show that the constructive proof technique is also useful in other contexts. (v) Our strong duality result holds in a very general setting, and we show that it can be applied to infinite-dimensional process control problems and worst-case value-at-risk analysis.
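As a rough illustration of the setup the abstract describes (notation ours, not necessarily the paper's): for a nominal distribution $\nu$ (e.g., the empirical distribution of the data), a Wasserstein radius $\varepsilon$, and a loss $\Psi$, the DRSO problem reads

```latex
\min_{x}\;\sup_{\mu:\,W(\mu,\nu)\le\varepsilon}\;\mathbb{E}_{\mu}\bigl[\Psi(x,\xi)\bigr],
```

and the standard dual reformulation of the inner worst-case expectation, of the kind the strong duality result concerns, takes the form

```latex
\sup_{\mu:\,W(\mu,\nu)\le\varepsilon}\mathbb{E}_{\mu}\bigl[\Psi(x,\xi)\bigr]
\;=\;\inf_{\lambda\ge 0}\Bigl\{\lambda\varepsilon
+\mathbb{E}_{\zeta\sim\nu}\Bigl[\sup_{\xi}\bigl(\Psi(x,\xi)-\lambda\,d(\xi,\zeta)\bigr)\Bigr]\Bigr\},
```

where $d$ is the metric defining the Wasserstein distance $W$. The first-order optimality conditions of the right-hand side are what yield the explicit (approximate) worst-case distributions.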
Has the bug really been fixed?
Software has bugs, and fixing those bugs pervades the software engineering process. It is folklore that bug fixes are often buggy themselves, resulting in bad fixes, either failing to fix a bug or creating new bugs. To confirm this folklore, we explored bug databases of the Ant, AspectJ, and Rhino projects, and found that bad fixes comprise as much as 9% of all bugs. Thus, detecting and correcting bad fixes is important for improving the quality and reliability of software. However, no prior work has systematically considered this bad fix problem, which this paper introduces and formalizes. In particular, the paper formalizes two criteria to determine whether a fix resolves a bug: coverage and disruption. The coverage of a fix measures the extent to which the fix correctly handles all inputs that may trigger a bug, while disruption measures the deviations from the program's intended behavior after the application of a fix. This paper also introduces a novel notion of distance-bounded weakest precondition as the basis for the developed practical techniques to compute the coverage and disruption of a fix. To validate our approach, we implemented Fixation, a prototype that automatically detects bad fixes for Java programs. When it detects a bad fix, Fixation returns an input that still triggers the bug or reports a newly introduced bug. Programmers can then use that bug-triggering input to refine or reformulate their fix. We manually extracted fixes drawn from real-world projects and evaluated Fixation against them: Fixation successfully detected the extracted bad fixes.
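The two criteria above can be illustrated with a toy sketch (hypothetical helpers, not Fixation's actual machinery): given an oracle for the program's intended behavior, coverage is the fraction of bug-triggering inputs a fix handles correctly, and disruption is the fraction of previously passing inputs whose behavior the fix changes.

```python
def coverage(fixed, oracle, triggering_inputs):
    """Fraction of known bug-triggering inputs the fixed program handles
    correctly, i.e., the extent to which the fix covers the bug."""
    ok = sum(fixed(x) == oracle(x) for x in triggering_inputs)
    return ok / len(triggering_inputs)

def disruption(fixed, oracle, passing_inputs):
    """Fraction of previously correct inputs whose behavior deviates from
    the intended behavior after the fix is applied."""
    broken = sum(fixed(x) != oracle(x) for x in passing_inputs)
    return broken / len(passing_inputs)
```

A good fix scores high coverage and low disruption; a "bad fix" in the paper's sense fails on one axis or the other, which is exactly what a bug-triggering counterexample exposes.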
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both the training and inference stages. We focus on filter-level pruning, i.e., a whole filter is discarded if it is less important. Our method does not change the original network structure, so it can be perfectly supported by any off-the-shelf deep learning library. We formally establish filter pruning as an optimization problem and reveal that filters need to be pruned based on statistics computed from the next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state of the art. We also show the performance of ThiNet on the ILSVRC-12 benchmark. ThiNet achieves a 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only a 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can reduce more than half of the parameters and FLOPs, at the cost of roughly a 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model of only 5.05 MB, preserving AlexNet-level accuracy while showing much stronger generalization ability.
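The next-layer criterion can be illustrated with a small greedy sketch (our simplification, not the released ThiNet code): each candidate channel's contribution to the next layer's output is assumed to have been precomputed from sampled data, and channels are kept greedily so that the retained subset best reconstructs that output.

```python
def select_channels(contribs, keep):
    """Greedily choose `keep` channels minimizing the squared reconstruction
    error of the *next* layer's output over all samples.

    contribs[s][j] is the (assumed precomputed) contribution of channel j
    to the next layer's output for sample s; the full output is the sum
    over all channels."""
    n_channels = len(contribs[0])
    targets = [sum(row) for row in contribs]  # full next-layer outputs
    chosen = set()
    while len(chosen) < keep:
        best_j, best_err = None, None
        for j in range(n_channels):
            if j in chosen:
                continue
            trial = chosen | {j}
            err = sum((t - sum(row[c] for c in trial)) ** 2
                      for t, row in zip(targets, contribs))
            if best_err is None or err < best_err:
                best_j, best_err = j, err
        chosen.add(best_j)
    return sorted(chosen)
```

Filters in the current layer whose output channels are not selected can then be discarded, which is why the decision is driven by the next layer's statistics rather than the current layer's.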
Visualizing Sequential Patterns for Text Mining
A sequential pattern in data mining is a finite series of elements such as A → B → C → D, where A, B, C, and D are elements of the same domain. The mining of sequential patterns is designed to find patterns of discrete events that frequently happen in the same arrangement along a timeline. Like association and clustering, the mining of sequential patterns is among the most popular knowledge discovery techniques that apply statistical measures to extract useful information from large datasets. As our computers become more powerful, we are able to mine bigger datasets and obtain hundreds of thousands of sequential patterns in full detail. With this vast amount of data, we argue that neither data mining nor visualization by itself can manage the information and reflect the knowledge effectively. Consequently, we apply visualization to augment data mining in a study of sequential patterns in large text corpora. The result shows that we can learn more, and learn it more quickly, in an integrated visual data-mining environment.
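The kind of pattern the abstract opens with can be counted with a short sketch (a toy illustration, not the paper's mining system): a pattern such as A → B → C is supported by a timeline when its events occur in that order, though not necessarily consecutively, and its support is the number of timelines containing it.

```python
def supports(timeline, pattern):
    """True if `pattern` occurs as an ordered (not necessarily contiguous)
    subsequence of `timeline`."""
    it = iter(timeline)
    # `event in it` advances the iterator past each match, so order is enforced.
    return all(event in it for event in pattern)

def support_count(timelines, pattern):
    """Number of timelines that support the pattern."""
    return sum(supports(t, pattern) for t in timelines)
```

A miner would enumerate candidate patterns and keep those whose support exceeds a threshold; with hundreds of thousands of such survivors, the visual exploration the abstract argues for becomes essential.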