Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks
This work shows that it is possible to fool recent state-of-the-art face detectors based on single-stage networks. Successfully attacking face detectors could be a serious security vulnerability when deploying a smart surveillance system that relies on face detection. From a privacy perspective, such attacks can also help prevent faces from being harvested and stored on a server. We show that existing adversarial perturbation methods are not effective at performing such an attack, especially when there are multiple faces in the input image, because the adversarial perturbation generated for one face may disrupt the perturbation for another face. In this paper, we call this problem the Instance Perturbation Interference (IPI) problem. We address the IPI problem by studying the relationship between the deep neural network receptive field and the adversarial perturbation, and propose the Localized Instance Perturbation (LIP) method, which confines the adversarial perturbation to the Effective Receptive Field (ERF) of each target to perform the attack. Experimental results show that the LIP method substantially outperforms existing adversarial perturbation generation methods, often by a factor of 2 to 10.
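The core of the LIP idea (confining a gradient-based perturbation to a region approximating each face's ERF) can be sketched as follows. This is a minimal illustration, not the authors' code: the detector `model`, the `detection_loss` function, and the ERF scale factor are assumed placeholders.

```python
# Minimal sketch of the Localized Instance Perturbation idea: a gradient-based
# perturbation is kept only inside a box approximating each face's Effective
# Receptive Field (ERF). `model`, `detection_loss`, and the ERF scale are
# hypothetical placeholders, not the authors' implementation.
import torch

def lip_attack(model, detection_loss, image, face_boxes,
               erf_scale=1.5, step=2/255, iters=10):
    """image: (1,3,H,W) tensor in [0,1]; face_boxes: list of (x1,y1,x2,y2)."""
    _, _, H, W = image.shape
    mask = torch.zeros_like(image)
    for (x1, y1, x2, y2) in face_boxes:
        # Enlarge each face box to roughly cover its ERF, then clip to the image.
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        w, h = (x2 - x1) * erf_scale, (y2 - y1) * erf_scale
        xa, xb = int(max(cx - w / 2, 0)), int(min(cx + w / 2, W))
        ya, yb = int(max(cy - h / 2, 0)), int(min(cy + h / 2, H))
        mask[:, :, ya:yb, xa:xb] = 1.0
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = detection_loss(model(image + delta), face_boxes)
        loss.backward()
        with torch.no_grad():
            # Ascend the loss, but confine the update to the ERF mask.
            delta += step * delta.grad.sign() * mask
            delta.clamp_(-8/255, 8/255)
            delta.grad.zero_()
    return (image + delta).clamp(0, 1)
```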
End-user feature labeling: Supervised and semi-supervised approaches based on locally-weighted logistic regression
When intelligent interfaces, such as intelligent desktop assistants, email classifiers, and recommender systems, customize themselves to a particular end user, such customizations can decrease productivity and increase frustration due to inaccurate predictions, especially in early stages when training data is limited. The end user can improve the learning algorithm by tediously labeling a substantial amount of additional training data, but this takes time and is too ad hoc to target a particular area of inaccuracy. To solve this problem, we propose new supervised and semi-supervised learning algorithms based on locally weighted logistic regression for feature labeling by end users, enabling them to point out which features are important for a class, rather than provide new training instances. We first evaluate our algorithms against other feature labeling algorithms under idealized conditions using feature labels generated by an oracle. In addition, another of our contributions is an evaluation of feature labeling algorithms under real-world conditions using feature labels harvested from actual end users in our user study. Our user study is the first statistical user study for feature labeling involving a large number of end users (43 participants), all of whom have no background in machine learning. Our supervised and semi-supervised algorithms were among the best performers when compared to other feature labeling algorithms in the idealized setting, and they are also robust to poor-quality feature labels provided by ordinary end users in our study. We also perform an analysis to investigate the relative gains of incorporating the different sources of knowledge available in the labeled training set, the feature labels and the unlabeled data. Together, our results strongly suggest that feature labeling by end users is both viable and effective for allowing end users to improve the learning algorithm behind their customized applications.
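As a rough sketch of the underlying learner, locally weighted logistic regression fits a per-query model in which training instances are weighted by their distance to the query; the feature-label machinery of the paper is omitted here, and the Gaussian kernel, bandwidth, and use of scikit-learn are illustrative assumptions.

```python
# Sketch of locally weighted logistic regression (not the authors' code): for
# each query instance, training examples are weighted by a Gaussian kernel of
# their distance to the query and a logistic model is fit with those weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lwlr_predict(X_train, y_train, x_query, bandwidth=1.0):
    dists = np.linalg.norm(X_train - x_query, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * bandwidth ** 2))  # closer points count more
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train, sample_weight=weights)
    return clf.predict_proba(x_query.reshape(1, -1))[0]
```

Feature labels could then be folded in, for example as priors or pseudo-instances that boost the weight of the labeled features; that part is specific to the paper and not sketched here.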
All-digital TX frequency synthesizer and discrete-time receiver for Bluetooth radio in 130-nm CMOS
We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter, respectively. The transmitter architecture takes advantage of the wideband frequency modulation capability of the all-digital phase-locked loop with built-in automatic compensation to ensure modulation accuracy. The receiver employs a discrete-time architecture in which the RF signal is directly sampled and processed using analog and digital signal processing techniques. The complete chip also integrates power management functions and a digital baseband processor. Application of the presented ideas has resulted in significant area and power savings while producing structures that are amenable to migration to more advanced deep-submicron processes, as they become available. The entire IC occupies 10 mm/sup 2/ and consumes 28 mA during transmit and 41 mA during receive at 1.5-V supply.
Long short-term memory recurrent neural network architectures for large scale acoustic modeling
Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that was designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we explore LSTM RNN architectures for large scale acoustic modeling in speech recognition. We recently showed that LSTM RNNs are more effective than DNNs and conventional RNNs for acoustic modeling, considering moderately-sized models trained on a single machine. Here, we introduce the first distributed training of LSTM RNNs using asynchronous stochastic gradient descent optimization on a large cluster of machines. We show that a two-layer deep LSTM RNN where each LSTM layer has a linear recurrent projection layer can exceed state-of-the-art speech recognition performance. This architecture makes more effective use of model parameters than the others considered, converges quickly, and outperforms a deep feed forward neural network having an order of magnitude more parameters.
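For reference, a single step of an LSTM cell with a linear recurrent projection layer (the variant discussed above) can be written in a few lines of NumPy; the shapes and parameter layout here are illustrative, not the paper's implementation.

```python
# Sketch of one LSTM step with a linear recurrent projection layer (LSTMP).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstmp_step(x, r_prev, c_prev, W, R, b, W_proj):
    """x: input (d,), r_prev: projected recurrent state (p,), c_prev: cell (n,).
    W: (4n, d), R: (4n, p), b: (4n,), W_proj: (p, n), with p < n."""
    z = W @ x + R @ r_prev + b
    n = c_prev.shape[0]
    i, f, g, o = (sigmoid(z[:n]), sigmoid(z[n:2*n]),
                  np.tanh(z[2*n:3*n]), sigmoid(z[3*n:]))
    c = f * c_prev + i * g            # cell update
    h = o * np.tanh(c)                # hidden output
    r = W_proj @ h                    # linear recurrent projection reduces parameters
    return r, c
```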
Pet in the therapy room: an attachment perspective on Animal-Assisted Therapy.
John Bowlby's (1973, 1980, 1982) attachment theory is one of the most influential theories in personality and developmental psychology and provides insights into adjustment and psychopathology across the lifespan. The theory is also helpful in defining the target of change in psychotherapy, understanding the processes by which change occurs, and conceptualizing cases and planning treatment (Daniel, 2006; Obegi & Berant, 2008; Sable, 2004; Wallin, 2007). Here, we propose a model of Animal-Assisted Therapy (AAT) based on attachment theory and on the unique characteristics of human-pet relationships. The model includes clients' unmet attachment needs, individual differences in attachment insecurity, coping, and responsiveness to therapy. It also suggests ways to foster the development of more adaptive patterns of attachment and healthier modes of relating to others.
Water Use in Coastal Georgia by County and Source for 1997 and Trends, 1980-97
Water use during 1997 was estimated for each county in the 24-county area of coastal Georgia by water-use category, using data obtained from various Federal and State agencies. Categories of offstream water use include public supply, domestic, commercial, industrial, mining, irrigation, livestock, and thermoelectric power generation. Total offstream water use from ground-water and surface-water sources was estimated to be about 1,225 million gallons per day (Mgal/d) in 1997 for the study area, of which ground water supplied 28 percent and surface water supplied 72 percent. Water withdrawal in coastal Georgia increased from 1,153 Mgal/d in 1980 to 1,225 Mgal/d in 1997, a 6-percent increase. During this period, surface-water withdrawal increased by 111 Mgal/d and ground-water withdrawal decreased by 38 Mgal/d.
Women in Sport: Gender Stereotypes in the Past and Present
The impact of online video lecture recordings and automated feedback on student performance
To what extent does a blended learning configuration of face-to-face lectures, online on-demand video recordings of those lectures, and online quizzes with appropriate feedback have an additional positive impact on student performance compared to a traditional face-to-face course approach? In a between-subjects design in which students were randomly assigned to a group having access to the online lectures including multiple-choice quizzes and appropriate feedback, or to a group having access to the online lectures only, 474 students (161 men and 313 women) of a course on European Law agreed to participate in the experiment. Using regression analysis we found that the course grade of the students was predicted by their grade point average, their study discipline, their grade goal for the course, the expected difficulty level of the course, the number of online lectures they viewed, the number of lectures they attended in person, and the interaction between the lectures they viewed online and attended in person. Students who attended few lectures benefited more from viewing online lectures than students who attended many lectures. In contrast to our expectations, the regression analysis did not show a significant effect of automated feedback on student performance. Offering recordings of face-to-face lectures is an easy extension of a traditional course and is of practical importance, because it enables students who are often absent from the regular face-to-face lectures to improve their course grade by viewing the lectures online.
Detection of Small Bowel Mucosal Healing and Deep Remission in Patients With Known Small Bowel Crohn’s Disease Using Biomarkers, Capsule Endoscopy, and Imaging
Objectives: Mucosal healing (MH) and deep remission (DR) are associated with improved outcomes in Crohn's disease (CD). However, most of the current data pertain to colonic MH and DR, whereas the evidence regarding the prevalence and impact of small bowel (SB) MH is scarce. The aim of this study was to evaluate the prevalence of SBMH and DR in quiescent SBCD. Methods: Patients with known SBCD in clinical remission (CDAI<150) or with mild symptoms (CDAI<220) were prospectively recruited and underwent video capsule endoscopy after verification of SB patency. Inflammation was quantified using the Lewis score (LS). SBMH was defined as LS<135, whereas significant inflammation was defined as LS>790. Clinico-biomarker remission was defined as a combination of clinical remission and normal biomarkers. DR was defined as a combination of clinico-biomarker remission and MH. Results: Fifty-six patients with proven SB patency were enrolled; 52 (92.9%) patients were in clinical remission and 21 (40.4%) in clinico-biomarker remission. SBMH was demonstrated in 8/52 (15.4%) of patients in clinical remission. Moderate-to-severe SB inflammation was demonstrated in 11/52 (21.1%) of patients in clinical remission and in 1/21 (4.7%) of patients in clinical and biomarker remission. Only 7/52 (13.5%) patients were in DR. Conclusions: SB inflammation is detected in the majority of CD patients in clinical and biomarker remission. SBMH and DR were rare and were independent of treatment modality. Our findings represent the true inflammatory burden in quiescent patients with SBCD.
Dosimetric comparison of doses to organs at risk using 3-D conformal radiotherapy versus intensity modulated radiotherapy in postoperative radiotherapy of periampullary cancers: implications for radiation dose escalation.
CONTEXT Postoperative periampullary cancers with high-risk features are managed with adjuvant chemoradiotherapy. Doses of 40-50 Gy have generally been used in conventional radiotherapy. Dose escalation with conventional radiotherapy has been restricted by surrounding critical organs. OBJECTIVE The objective of this dosimetric analysis was to evaluate the dose of radiation received by organs at risk using 3D conformal radiotherapy (3DCRT) and intensity modulated radiotherapy (IMRT). METHODS Ten postoperative patients with periampullary cancers were selected for this dosimetric analysis. Planning CT scans were acquired with a slice thickness of 2.5 mm and transferred to the Eclipse treatment planning system. The clinical target volume (CTV) included the postoperative tumor bed and draining lymph nodal areas. A 1 cm margin was taken around the CTV to generate the planning target volume (PTV). Critical structures contoured for evaluation included the bowel bag, bilateral kidneys, liver, stomach and spinal cord. IMRT plans were generated using seven coplanar fields, and 3DCRT planning was done using one anterior and two lateral fields. A dose of 45 Gy in 25 fractions was prescribed to the PTV. RESULTS V45 for the bowel bag was 212.3 ± 159.0 cc (mean volume ± standard deviation) versus 80.9 ± 57.4 cc for 3DCRT versus IMRT (P=0.033). The V28 analysis for bilateral kidneys showed 32.7 ± 23.5 cc (mean volume ± standard deviation) versus 7.9 ± 7.4 cc for 3DCRT versus IMRT, respectively (P=0.013). The D60 for liver using 3DCRT and IMRT was 28.4 ± 8.6 Gy (mean dose ± standard deviation) and 19.9 ± 3.2 Gy, respectively (P=0.020). CONCLUSIONS Doses to the bowel bag, liver and kidneys were significantly reduced using IMRT, leaving ample scope for dose escalation.
Flash Organizations: Crowdsourcing Complex Work by Structuring Crowds As Organizations
This paper introduces flash organizations: crowds structured like organizations to achieve complex and open-ended goals. Microtask workflows, the dominant crowdsourcing structures today, only enable goals that are so simple and modular that their path can be entirely pre-defined. We present a system that organizes crowd workers into computationally-represented structures inspired by those used in organizations - roles, teams, and hierarchies - which support emergent and adaptive coordination toward open-ended goals. Our system introduces two technical contributions: 1) encoding the crowd's division of labor into de-individualized roles, much as movie crews or disaster response teams use roles to support coordination between on-demand workers who have not worked together before; and 2) reconfiguring these structures through a model inspired by version control, enabling continuous adaptation of the work and the division of labor. We report a deployment in which flash organizations successfully carried out open-ended and complex goals previously out of reach for crowdsourcing, including product design, software development, and game production. This research demonstrates digitally networked organizations that flexibly assemble and reassemble themselves from a globally distributed online workforce to accomplish complex work.
The Influence of Vertical and Horizontal Habitat Structure on Nationwide Patterns of Avian Biodiversity
With limited resources for habitat conservation, the accurate identification of high-value avian habitat is crucial. Habitat structure affects avian biodiversity but is difficult to quantify over broad extents. Our goal was to identify which measures of vertical and horizontal habitat structure are most strongly related to patterns of avian biodiversity across the conterminous United States and to determine whether new measures of vertical structure are complementary to existing, primarily horizontal, measures. For 2,546 North American Breeding Bird Survey routes across the conterminous United States, we calculated canopy height and biomass from the National Biomass and Carbon Dataset (NBCD) as measures of vertical habitat structure and used land-cover composition and configuration metrics from the 2001 National Land Cover Database (NLCD) as measures of horizontal habitat structure. Avian species richness was calculated for each route for all birds and three habitat guilds. Avian species richness was significantly related to measures derived from both the NBCD and NLCD. The combination of horizontal and vertical habitat structure measures was most powerful, yielding high R2 values for nationwide models of forest (0.70) and grassland (0.48) bird species richness. New measures of vertical structure proved complementary to measures of horizontal structure. These data allow the efficient quantification of habitat structure over broad scales, thus informing better land management and bird conservation.
High current planar transformer for very high efficiency isolated boost dc-dc converters
This paper presents the design and optimization of a high current planar transformer for very high efficiency dc-dc isolated boost converters. The analysis considers different winding arrangements, including very high copper thickness windings, and focuses on the winding ac-resistance and the transformer leakage inductance. The design and optimization procedures are validated on an experimental prototype of a 6 kW dc-dc isolated full bridge boost converter built with fully planar magnetics. The prototype is rated at 30-80 V, 0-80 A on the low voltage side and 700-800 V on the high voltage side, with a peak efficiency of 97.8% at 80 V, 3.5 kW. The results highlight that thick copper windings can provide good performance at low switching frequencies due to the high transformer filling factor. PCB windings can also provide very high efficiency if stacked in parallel, utilizing the transformer winding window in an optimal way.
A Parallel Graph Coloring Heuristic
The problem of computing good graph colorings arises in many diverse applications, such as the estimation of sparse Jacobians and the development of efficient, parallel iterative methods for solving sparse linear systems. In this paper we present an asynchronous graph coloring heuristic well suited to distributed memory parallel computers. We present experimental results obtained on an Intel iPSC/860 which demonstrate that, for graphs arising from finite element applications, the heuristic exhibits scalable performance and generates colorings usually within three or four colors of the best-known linear-time sequential heuristics. For bounded degree graphs, we show that the expected running time of the heuristic under the PRAM computation model is bounded by O(log(n)/log log(n)). This bound is an improvement over the previously known best upper bound for the expected running time of a random heuristic for the graph coloring problem.
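A sequential simulation of the round-based idea behind such heuristics (color tentatively in parallel, detect conflicts on shared edges, recolor the losers) might look like the sketch below; it is not the paper's asynchronous implementation, and vertex ids are assumed to be comparable integers.

```python
# Illustrative simulation of a parallel-style coloring heuristic: in each round
# all uncolored vertices pick the smallest color unused by already-colored
# neighbours "simultaneously"; conflicts on edges are then resolved by sending
# the higher-numbered endpoint back for recoloring. Not the paper's code.
def parallel_style_coloring(adj):
    """adj: dict mapping vertex id (int) -> set of neighbour ids."""
    color = {v: None for v in adj}
    uncolored = set(adj)
    while uncolored:
        proposal = {}
        for v in uncolored:
            used = {color[u] for u in adj[v] if color[u] is not None}
            c = 0
            while c in used:
                c += 1
            proposal[v] = c
        for v, c in proposal.items():
            color[v] = c
        # Conflict resolution: on each conflicting edge the higher-numbered
        # endpoint gives up its color and retries in the next round.
        losers = {v for v in proposal for u in adj[v]
                  if u in proposal and v > u and color[v] == color[u]}
        for v in losers:
            color[v] = None
        uncolored = losers
    return color
```

The lowest-numbered conflicting vertex always keeps its color, so every round makes progress and the loop terminates.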
A Statistical Approach to Anaphora Resolution
This paper presents an algorithm for identifying pronominal anaphora and two experiments based upon this algorithm. We incorporate multiple anaphora resolution factors into a statistical framework, specifically the distance between the pronoun and the proposed antecedent, gender/number/animaticity of the proposed antecedent, governing head information and noun phrase repetition. We combine them into a single probability that enables us to identify the referent. Our first experiment shows the relative contribution of each source of information and demonstrates a success rate of 82.9% for all sources combined. The second experiment investigates a method for unsupervised learning of gender/number/animaticity information. We present some experiments illustrating the accuracy of the method and note that with this information added, our pronoun resolution method achieves 84.2% accuracy.

1 Introduction

We present a statistical method for determining pronoun anaphora. This program differs from earlier work in its almost complete lack of hand-crafting, relying instead on a very small corpus of Penn Wall Street Journal Tree-bank text (Marcus et al., 1993) that has been marked with co-reference information. The first sections of this paper describe this program: the probabilistic model behind it, its implementation, and its performance. The second half of the paper describes a method for using (portions of) the aforementioned program to learn automatically the typical gender of English words, information that is itself used in the pronoun resolution program. In particular, the scheme infers the gender of a referent from the gender of the pronouns that refer to it and selects referents using the pronoun anaphora program. We present some typical results as well as the more rigorous results of a blind evaluation of its output.

2 A Probabilistic Model

There are many factors, both syntactic and semantic, upon which a pronoun resolution system relies. (Mitkov (1997) does a detailed study on factors in anaphora resolution.) We first discuss the training features we use and then derive the probability equations from them. The first piece of useful information we consider is the distance between the pronoun and the candidate antecedent. Obviously the greater the distance, the lower the probability. Secondly, we look at the syntactic situation in which the pronoun finds itself. The most well studied constraints are those involving reflexive pronouns. One classical approach to resolving pronouns in text that takes some syntactic factors into consideration is that of Hobbs (1976). This algorithm searches the parse tree in a left-to-right, breadth-first fashion that obeys the major reflexive pronoun constraints while giving a preference to antecedents that are closer to the pronoun. In resolving inter-sentential pronouns, the algorithm searches the previous sentence, again in left-to-right, breadth-first order. This implements the observed preference for subject position antecedents. Next, the actual words in a proposed noun-phrase antecedent give us information regarding the gender, number, and animaticity of the proposed referent. For example: Marie Giraud carries historical significance as one of the last women to be executed in France. She became an abortionist because it enabled her to buy jam, cocoa and other war-rationed goodies. Here it is helpful to recognize that "Marie" is probably female and thus is unlikely to be referred to by "he" or "it".

Given the words in the proposed antecedent we want to find the probability that it is the referent of the pronoun in question. We collect these probabilities on the training data, which are marked with reference links. The words in the antecedent sometimes also let us test for number agreement. Generally, a singular pronoun cannot refer to a plural noun phrase, so that in resolving such a pronoun any plural candidates should be ruled out. However a singular noun phrase can be the referent of a plural pronoun, as illustrated by the following example: "I think if I tell Viacom I need more time, they will take 'Cosby' across the street," says the general manager of a network affiliate. It is also useful to note the interaction between the head constituent of the pronoun p and the antecedent. For example: A Japanese company might make television picture tubes in Japan, assemble the TV sets in Malaysia and export them to Indonesia. Here we would compare the degree to which each possible candidate antecedent (A Japanese company, television picture tubes, Japan, TV sets, and Malaysia in this example) could serve as the direct object of "export". These probabilities give us a way to implement selectional restriction. A canonical example of selectional restriction is that of the verb "eat", which selects food as its direct object. In the case of "export" the restriction is not as clear-cut. Nevertheless it can still give us guidance on which candidates are more probable than others. The last factor we consider is referents' mention count. Noun phrases that are mentioned repeatedly are preferred. The training corpus is marked with the number of times a referent has been mentioned up to that point in the story. Here we are concerned with the probability that a proposed antecedent is correct given that it has been repeated a certain number of times. In effect, we use this probability information to identify the topic of the segment with the belief that the topic is more likely to be referred to by a pronoun. The idea is similar to that used in the centering approach (Brennan et al., 1987) where a continued topic is the highest-ranked candidate for pronominalization. Given the above possible sources of information, we arrive at the following equation, where F(p) denotes a function from pronouns to their antecedents:

F(p) = argmax_a P(A(p) = a | p, h, W, t, l, s_p, d, M)

where A(p) is a random variable denoting the referent of the pronoun p and a is a proposed antecedent. In the conditioning events, h is the head constituent above p, W is the list of candidate antecedents to be considered, t is the type of phrase of the proposed antecedent (always a noun phrase in this study), l is the type of the head constituent, s_p describes the syntactic structure in which p appears, d specifies the distance of each antecedent from p and M is the number of times the referent is mentioned. Note that W, d and M are vector quantities in which each entry corresponds to a possible antecedent. When viewed in this way, a can be regarded as an index into these vectors that specifies which value is relevant to the particular choice of antecedent. This equation is decomposed into pieces that correspond to all the above factors but are more statistically manageable. The decomposition makes use of Bayes' theorem and is based on certain independence assumptions discussed below.

P(A(p) = a | p, h, W, t, l, s_p, d, M)
= P(a | M) P(p, h, W, t, l, s_p, d | a, M) / P(p, h, W, t, l, s_p, d, M)   (1)
∝ P(a | M) P(p, h, W, t, l, s_p, d | a, M)   (2)
= P(a | M) P(s_p, d | a, M) P(p, h, W, t, l | a, M, s_p, d)   (3)
= P(a | M) P(s_p, d | a, M) P(h, t, l | a, M, s_p, d) P(p, W | a, M, s_p, d, h, t, l)   (4)
∝ P(a | M) P(s_p, d | a, M) P(p, W | a, M, s_p, d, h, t, l)   (5)
= P(a | M) P(s_p, d | a, M) P(W | a, M, s_p, d, h, t, l) P(p | a, M, s_p, d, h, t, l, W)   (6)
∝ P(a | M) P(d_H | a) P(W | h, t, l, a) P(p | w_a)   (7)

Equation (1) is simply an application of Bayes' rule. The denominator is eliminated in the usual fashion, resulting in equation (2). Selectively applying the chain rule results in equations (3) and (4). In equation (4), the term P(h, t, l | a, M, s_p, d) is the same for every antecedent and is thus removed. Equation (6) follows when we break the last component of (5) into two probability distributions. In equation (7) we make the following independence assumptions:

• Given a particular choice of the antecedent candidates, the distance is independent of distances of candidates other than the antecedent (and the distance to non-referents can be ignored): P(s_p, d | a, M) ∝ P(s_p, d_a | a, M)

• The syntactic structure s_p and the distance from the pronoun d_a are independent of the number of times the referent is mentioned. Thus P(s_p, d_a | a, M) = P(s_p, d_a | a). We then combine s_p and d_a into one variable d_H, Hobbs distance, since the Hobbs algorithm takes both the syntax and distance into account.

• The words in the antecedent depend only on the parent constituent h, the type of the words t, and the type of the parent l. Hence P(W | a, M, s_p, d, h, t, l) = P(W | h, t, l, a)

• The choice of pronoun depends only on the words in the antecedent, i.e. P(p | a, M, s_p, d, h, t, l, W) = P(p | a, W)

• If we treat a as an index into the vector W, then (a, W) is simply the a-th candidate in the list W. We assume the selection of the pronoun is independent of the candidates other than the antecedent. Hence P(p | a, W) = P(p | w_a)

Since W is a vector, we need to normalize P(W | h, t, l, a) to obtain the probability of each element in the vector. It is reasonable to assume that the antecedents in W are independent of each other; in other words, P(w_{a+1} | w_a, h, t, l, a) = P(w_{a+1} | h, t, l, a). Thus,
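Viewed operationally, the decomposition above amounts to scoring each candidate antecedent by a product of independently estimated factors and taking the argmax. A toy illustration (not the paper's implementation; the probability tables are placeholders estimated from training data):

```python
# Toy illustration of the factored model: each candidate antecedent is scored
# by multiplying the four factor probabilities, P(a|M) * P(d_H|a) * P(W|h,t,l,a)
# * P(p|w_a), and the highest-scoring candidate is returned.
def resolve_pronoun(candidates, p_mention, p_hobbs_dist, p_words, p_pronoun_given_word):
    """Each argument maps a candidate index to the corresponding factor probability."""
    best, best_score = None, 0.0
    for a in candidates:
        score = (p_mention[a] * p_hobbs_dist[a] *
                 p_words[a] * p_pronoun_given_word[a])
        if score > best_score:
            best, best_score = a, score
    return best
```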
Real-time RGB-D based template matching pedestrian detection
Pedestrian detection is one of the most popular topics in computer vision and robotics. Considering the challenging issues in multiple pedestrian detection, we present a real-time depth-based template matching people detector. In this paper, we propose different approaches for training the depth-based template. We train multiple templates to handle issues caused by the various upper-body orientations of pedestrians and the different levels of detail in the depth map of pedestrians at various distances from the camera. We also take into account the degree of reliability of different regions of the sliding window by proposing a weighted template approach. Furthermore, we combine the depth detector with an appearance-based detector as a verifier to take advantage of appearance cues and deal with the limitations of depth data. We evaluate our method on the challenging ETH dataset sequence and show that it outperforms state-of-the-art approaches.
STRING v10: protein–protein interaction networks, integrated over the tree of life
The many functional partnerships and interactions that occur between proteins are at the core of cellular processing and their systematic characterization helps to provide context in molecular systems biology. However, known and predicted interactions are scattered over multiple resources, and the available data exhibit notable differences in terms of quality and completeness. The STRING database (http://string-db.org) aims to provide a critical assessment and integration of protein-protein interactions, including direct (physical) as well as indirect (functional) associations. The new version 10.0 of STRING covers more than 2000 organisms, which has necessitated novel, scalable algorithms for transferring interaction information between organisms. For this purpose, we have introduced hierarchical and self-consistent orthology annotations for all interacting proteins, grouping the proteins into families at various levels of phylogenetic resolution. Further improvements in version 10.0 include a completely redesigned prediction pipeline for inferring protein-protein associations from co-expression data, an API interface for the R computing environment and improved statistical analysis for enrichment tests in user-provided networks.
High-Performance Distributed ML at Scale through Parameter Server Consistency Models
As Machine Learning (ML) applications embrace greater data size and model complexity, practitioners turn to distributed clusters to satisfy the increased computational and memory demands. Effective use of clusters for ML programs requires considerable expertise in writing distributed code, but existing highly abstracted frameworks like Hadoop that pose low barriers to distributed programming have not, in practice, matched the performance seen in highly specialized and advanced ML implementations. The recent Parameter Server (PS) paradigm is a middle ground between these extremes, allowing easy conversion of single-machine parallel ML programs into distributed ones, while maintaining high throughput through relaxed "consistency models" that allow asynchronous (and, hence, inconsistent) parameter reads. However, due to insufficient theoretical study, it is not clear which of these consistency models can really ensure correct ML algorithm output; at the same time, there remain many theoretically motivated but undiscovered opportunities to maximize computational throughput. Inspired by this challenge, we study both the theoretical guarantees and empirical behavior of iterative-convergent ML algorithms in existing PS consistency models. We then use the gleaned insights to improve a consistency model using an "eager" PS communication mechanism, and implement it as a new PS system that enables ML programs to reach their solution more quickly.
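One widely used relaxed consistency condition in this family is bounded staleness: a worker at iteration t may read parameters only if no other worker lags more than s iterations behind. A minimal sketch of that bookkeeping (not the paper's system; the class and its fields are illustrative):

```python
# Sketch of the bounded-staleness ("stale synchronous parallel") rule that many
# parameter-server consistency models relax around.
class BoundedStalenessClock:
    def __init__(self, num_workers, staleness):
        self.clock = [0] * num_workers   # last completed iteration per worker
        self.staleness = staleness

    def can_read(self, iteration):
        # A read at `iteration` is allowed only if the slowest worker is within
        # `staleness` iterations of it.
        return iteration - min(self.clock) <= self.staleness

    def finish_iteration(self, worker):
        self.clock[worker] += 1
```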
A Novel Efficient Pairing-Free CP-ABE Based on Elliptic Curve Cryptography for IoT
Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic technique that integrates data encryption with access control for ensuring data security in IoT systems. However, the efficiency of CP-ABE remains a bottleneck limiting its development and application. A widespread consensus is that the computation overhead of bilinear pairing is excessive in practical applications of ABE, especially for devices or processors with limited computational resources and power supply. In this paper, we propose a novel pairing-free data access control scheme based on CP-ABE using elliptic curve cryptography, abbreviated PF-CP-ABE. We replace complicated bilinear pairing with simple scalar multiplication on elliptic curves, thereby reducing the overall computation overhead. We also design a new key distribution method that can directly revoke a user or an attribute without updating other users' keys during the attribute revocation phase. In addition, our scheme uses a linear secret sharing scheme access structure to enhance the expressiveness of the access policy. Security and performance analysis shows that our scheme significantly improves overall efficiency while ensuring security.
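The efficiency argument rests on replacing bilinear pairings with elliptic-curve scalar multiplication. The sketch below shows textbook double-and-add on a toy short-Weierstrass curve; the curve parameters are illustrative only, and no part of PF-CP-ABE's key distribution or access structure is implemented here.

```python
# Double-and-add scalar multiplication on a toy curve y^2 = x^3 + ax + b (mod p).
# Parameters are for illustration only, not a cryptographically sound curve.
P_MOD, A, B = 97, 2, 3

def ec_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                  # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_scalar_mult(k, P):
    """The core pairing-free operation: compute k*P by double-and-add."""
    result, addend = None, P
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result
```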
Restoration of Hearing in the VGLUT3 Knockout Mouse Using Virally Mediated Gene Therapy
Mice lacking the vesicular glutamate transporter-3 (VGLUT3) are congenitally deaf due to loss of glutamate release at the inner hair cell afferent synapse. Cochlear delivery of VGLUT3 using adeno-associated virus type 1 (AAV1) leads to transgene expression in only inner hair cells (IHCs), despite broader viral uptake. Within 2 weeks of AAV1-VGLUT3 delivery, auditory brainstem response (ABR) thresholds normalize, along with partial rescue of the startle response. Lastly, we demonstrate partial reversal of the morphologic changes seen within the afferent IHC ribbon synapse. These findings represent a successful restoration of hearing by gene replacement in mice, which is a significant advance toward gene therapy of human deafness.
Graph Convolutional Neural Networks for ADME Prediction in Drug Discovery
ADME in-silico methods have grown increasingly powerful over the past twenty years, driven by advances in machine learning and the abundance of high-quality training data generated by laboratory automation. Meanwhile, in the technology industry, deep learning has taken off, driven by advances in topology design, computation, and data. The key premise of these methods is that the model is able to pass gradients back into the feature structure, engineering its own problem-specific representation for the data. Graph Convolutional Networks (GC-DNNs), a variation of neural fingerprints, allow for true deep learning in the chemistry domain. We use this new approach to build human plasma protein binding, lipophilicity, and human clearance models that significantly outperform random forests and support vector regression.
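One common formulation of a graph-convolution layer over a molecular graph (a simplification of the neural-fingerprint family referenced above) is sketched here; the shapes, normalization choice, and weights are illustrative assumptions rather than the models used in the paper.

```python
# A minimal graph-convolution layer for molecular property prediction: node
# (atom) features are propagated over the normalized adjacency matrix and mixed
# by a learned weight matrix, followed by a ReLU.
import numpy as np

def gcn_layer(A, H, W):
    """A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)      # ReLU
```

Stacking a few such layers and pooling the node features into a molecule-level vector yields the learned "fingerprint" that a final regression head maps to the ADME endpoint.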
Predicting the severity of a reported bug
The severity of a reported bug is a critical factor in deciding how soon it needs to be fixed. Unfortunately, while clear guidelines exist on how to assign the severity of a bug, it remains an inherent manual process left to the person reporting the bug. In this paper we investigate whether we can accurately predict the severity of a reported bug by analyzing its textual description using text mining algorithms. Based on three cases drawn from the open-source community (Mozilla, Eclipse and GNOME), we conclude that given a training set of sufficient size (approximately 500 reports per severity), it is possible to predict the severity with a reasonable accuracy (both precision and recall vary between 0.65–0.75 with Mozilla and Eclipse; 0.70–0.85 in the case of GNOME).
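A rough sketch of the pipeline such a study implies (off-the-shelf text mining: TF-IDF features plus a multinomial Naive Bayes classifier) is shown below; the paper's exact algorithms, features, and datasets are not reproduced.

```python
# Sketch of severity prediction from bug-report descriptions using standard
# text-mining components from scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def train_severity_model(descriptions, severities):
    """descriptions: list of bug-report texts; severities: matching labels."""
    model = make_pipeline(TfidfVectorizer(stop_words="english"),
                          MultinomialNB())
    model.fit(descriptions, severities)
    return model

# Usage: model = train_severity_model(texts, labels); model.predict([new_report])
```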
[Observations on swarming of Chromatium okenii].
Multiplexed protein quantitation in Saccharomyces cerevisiae using amine-reactive isobaric tagging reagents.
We describe here a multiplexed protein quantitation strategy that provides relative and absolute measurements of proteins in complex mixtures. At the core of this methodology is a multiplexed set of isobaric reagents that yield amine-derivatized peptides. The derivatized peptides are indistinguishable in MS, but exhibit intense low-mass MS/MS signature ions that support quantitation. In this study, we have examined the global protein expression of a wild-type yeast strain and the isogenic upf1Delta and xrn1Delta mutant strains that are defective in the nonsense-mediated mRNA decay and the general 5' to 3' decay pathways, respectively. We also demonstrate the use of 4-fold multiplexing to enable relative protein measurements simultaneously with determination of absolute levels of a target protein using synthetic isobaric peptide standards. We find that inactivation of Upf1p and Xrn1p causes common as well as unique effects on protein expression.
Energy-Efficient, Large-Scale Distributed-Antenna System (L-DAS) for Multiple Users
A large-scale distributed-antenna system (L-DAS) with a very large number of distributed antennas, possibly up to a few hundred, is considered. A few major issues of the L-DAS, such as high latency, energy consumption, computational complexity, and large feedback (signaling) overhead, are identified. The potential capability of the L-DAS is illuminated in terms of energy efficiency (EE) throughout the paper. We first present a general model of the power consumption of an L-DAS and formulate an EE maximization problem. To tackle two crucial issues, namely the huge computational complexity and the large amount of feedback (signaling) information, we propose a channel-gain-based antenna selection (AS) method and an interference-based user clustering (UC) method. The original problem is then split into multiple subproblems by cluster, and each cluster's precoding and power control are managed in parallel for high EE. Simulation results reveal that i) using all antennas for zero-forcing multiuser multiple-input multiple-output (MU-MIMO) is energy inefficient if there is non-negligible overhead power consumption for MU-MIMO processing, and ii) increasing the number of antennas does not necessarily result in a high EE. Furthermore, the results validate and underpin the EE merit of the proposed L-DAS combined with the AS, UC, precoding, and power control by comparison with non-clustering L-DAS and colocated antenna systems.
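The channel-gain-based antenna selection step can be sketched as follows, assuming the channel matrix is known; the selection size and the per-user top-k rule are illustrative simplifications, not the paper's exact method.

```python
# Sketch of channel-gain-based antenna selection: for each user, keep only the
# n_sel distributed antennas with the largest channel gains, shrinking the
# effective channel matrix used for precoding and the signaling overhead.
import numpy as np

def select_antennas(H, n_sel):
    """H: (num_users, num_antennas) complex channel matrix."""
    gains = np.abs(H) ** 2                       # per-link channel gains
    selected = set()
    for user_gains in gains:
        best = np.argsort(user_gains)[-n_sel:]   # strongest antennas for this user
        selected.update(best.tolist())
    idx = sorted(selected)
    return idx, H[:, idx]                        # reduced channel for precoding
```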
Satellite Imagery Multiscale Rapid Detection with Windowed Networks
Detecting small objects over large areas remains a significant challenge in satellite imagery analytics. Among the challenges is the sheer number of pixels and the geographical extent per image: a single DigitalGlobe satellite image encompasses over 64 km² and over 250 million pixels. Another challenge is that objects of interest are often minuscule (∼10 pixels in extent even for the highest resolution imagery), which complicates traditional computer vision techniques. To address these issues, we propose a pipeline (SIMRDWN) that evaluates satellite images of arbitrarily large size at native resolution at a rate of ≥ 0.2 km²/s. Building upon the tensorflow object detection API paper [9], this pipeline offers a unified approach to multiple object detection frameworks that can run inference on images of arbitrary size. The SIMRDWN pipeline includes a modified version of YOLO (known as YOLT [25]), along with the models in [9]: SSD [14], Faster R-CNN [22], and R-FCN [3]. The proposed approach allows comparison of the performance of these four frameworks, and can rapidly detect objects of vastly different scales with relatively little training data over multiple sensors. For objects of very different scales (e.g. airplanes versus airports) we find that using two different detectors at different scales is very effective with negligible runtime cost. We evaluate large test images at native resolution and find mAP scores of 0.2 to 0.8 for vehicle localization, with the YOLT architecture achieving both the highest mAP and fastest inference speed.
PPSGen: Learning to Generate Presentation Slides for Academic Papers
In this paper, we investigate the very challenging task of automatically generating presentation slides for academic papers. The generated presentation slides can be used as drafts to help presenters prepare their formal slides more quickly. A novel system called PPSGen is proposed to address this task. It first employs regression methods to learn the importance of the sentences in an academic paper, and then exploits the integer linear programming (ILP) method to generate well-structured slides by selecting and aligning key phrases and sentences. Evaluation results on a test set of 200 pairs of papers and slides collected from the web demonstrate that our proposed PPSGen system can generate slides with better quality. A user study also shows that PPSGen has several evident advantages over baseline methods.
Poisson shape interpolation
In this paper, we propose a novel shape interpolation approach based on the Poisson equation. We formulate the trajectory problem of shape interpolation as solving Poisson equations defined on a domain mesh. A non-linear gradient field interpolation method is proposed to take both vertex coordinates and surface orientation into account. With proper boundary conditions, the in-between shapes are reconstructed implicitly from the interpolated gradient fields, whereas traditional methods usually manipulate vertex coordinates directly. Besides global shape interpolation, our method is also applicable to local shape interpolation, and can be further enhanced by incorporating deformation. Our approach can generate visually pleasing and physically plausible morphing sequences with stable area and volume changes. Experimental results demonstrate that our technique can avoid the shrinkage problem that appears in linear shape interpolation.
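A one-dimensional toy version of the gradient-domain idea helps fix intuition: interpolate the gradients of the two shapes and recover the in-between shape by solving a small Poisson-style linear system with fixed endpoints. This is an illustrative simplification, not the paper's mesh formulation.

```python
# 1-D toy of gradient-domain interpolation: blend the gradients of source and
# target curves, then reconstruct the in-between curve by solving a Laplacian
# system with Dirichlet boundary conditions at both endpoints.
import numpy as np

def poisson_interp_1d(f0, f1, t):
    """f0, f1: sampled curves of equal length n >= 3; t in [0, 1]."""
    g = (1 - t) * np.diff(f0) + t * np.diff(f1)      # interpolated gradient field
    n = len(f0)
    L = np.zeros((n, n))
    rhs = np.zeros(n)
    L[0, 0] = L[-1, -1] = 1.0                        # fixed endpoints
    rhs[0] = (1 - t) * f0[0] + t * f1[0]
    rhs[-1] = (1 - t) * f0[-1] + t * f1[-1]
    for i in range(1, n - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
        rhs[i] = g[i] - g[i - 1]                     # divergence of the gradients
    return np.linalg.solve(L, rhs)
```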
Experiences with node virtualization for scalable network emulation
During the development of network protocols and distributed applications, their performance has to be analyzed in appropriate environments. Network emulation testbeds provide a synthetic, configurable network environment for comparative performance measurements of real implementations. Realistic scenarios have to consider hundreds of communicating nodes. Common network emulation approaches limit the number of nodes in a scenario to the number of computers in an emulation testbed. To overcome this limitation, we introduce a virtual node concept for network emulation. The key problem for node virtualization is a transparent, yet efficient separation of node resources. In this paper, we provide a brief survey of candidate node virtualization approaches to facilitate scalable network emulation. Based on the gathered insights, we propose a lightweight virtualization solution to achieve maximum scalability and discuss the main points regarding its implementation. We present extensive evaluations that show the scalability and transparency of our approach in both a traditional wired infrastructure-based, and in two wireless ad hoc network emulation scenarios. The measurements indicate that our solution can push the upper limit of emulation scenario sizes by a factor of 10 to 28. Given our emulation testbed consisting of 64 computers, this translates to possible scenario sizes of up to 1792 nodes. In addition to the evaluation of our virtualization approach, we discuss key concepts for controlling comprehensive emulation scenarios to support scalability of our system as a whole.
DoubleFusion: Real-Time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera. One of the key contributions of this method is a double layer representation consisting of a complete parametric body shape inside, and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, and a free-form dynamically changing graph parameterizes the outer surface layer far from the body, which allows more general reconstruction. We further propose a joint motion tracking method based on the double layer representation to enable robust and fast motion tracking performance. Moreover, the inner body shape is optimized online and forced to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real-time. In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios.
The Locational Impact of Wal-Mart Entrance: A Panel Study of the Retail Trade Sector in West Virginia
This paper examines the retail trade sector in 14 West Virginia counties from 1989 through 1996. A series of random effects models are tested on these panel data to measure the effect of the entrance of Wal-Mart stores in the county and in adjacent counties, and business cycle effects. This paper differs from earlier research in that it controls for endogeneity in the entrance decision of Wal-Mart in faster growing counties. This research finds a dramatic net increase in employment and wages in the Retail Trade sector (SIC 52) and a mild increase in the number of firms. The study finds a per capita wage increase in this industry, which is surprising but small. The paper concludes with further research recommendations. The views expressed in this paper are the authors' and do not reflect the policy or opinion of the Lewis College of Business, Marshall University or any of its entities.

Introduction. The putative economic impact of the entrance of a discount store in a community has been hotly debated among local leaders across the U.S. throughout the past decade. The extension of these stores, especially Wal-Mart, Inc., overseas provides grist to an ongoing critique of everything from unfettered markets to cultural imperialism. Among the perceived impacts of these types of discount stores that we feel competent to address are lower regional wages, capital outflows and the loss of independent, locally owned businesses. These costs are balanced against proponent arguments of increased efficiency, employment and tax revenues. Less frequently discussed are the consumption benefits. Economists, for the most part, have been silent on these issues, feeling that the economies of scale inherent in a discount store and the success of local market mechanisms meant increased welfare effects for local communities. Simply, the matter was not investigated on a broad scale within the academic community. The few studies that have been performed do not generate a consensus result. Nearly all work in this area has focused on the location of Wal-Mart stores in local communities since the mid-1980s. The earliest of these studies (Keon et al., 1989) performed a static comparison of economic conditions in 14 Missouri counties with and without Wal-Mart stores. These researchers found no evidence of a negative impact of Wal-Mart location, instead finding increases in broad measures of income, retail employment and income, and sales tax revenues. They found that the overall number of retail stores had declined, but that in the sector there were more employees and higher payrolls. They did not note per capita wages in the retail sector, nor did they account for potential growth-related entrance by Wal-Mart. This final problem plagues all the studies in the current literature. [Footnote: This study is notable for its use of actual statistical analysis, not merely comparisons of growth rates. The authors used the difference-in-difference-in-difference method (Gruber, 1994) to approximate second derivative analysis for functions whose empirical specifications were not continuous.] Simply, the question of endogeneity in the growth variable in explicit or implicit modeling has not been adequately addressed; static analysis fails to capture the possibility that Wal-Mart stores enter counties with higher growth rates. This problem was addressed in a later paper (Ozment et al., 1990), which found few significant positive effects of Wal-Mart in a sample of rural counties.

The study examined population, income, number of establishments, per capita bank deposits, employment, sales revenue and tax receipts. The authors suggested that Wal-Mart may have selected the better performing counties for store locations, and that the mildly better performance of the economies in these counties was not likely caused by the store entrance. A study of employment and wages in Maine (Ketchum et al., 1997) concluded that there was no evidence of negative impact in a sample of 12 counties with and 12 counties without Wal-Mart stores. These findings were similar to the earlier studies in that the impacts were not significant and the authors recognized the possibility that Wal-Mart entered counties with higher growth rates. A broader study (Barnes and Connell, 1996) examined regional variation in Wal-Mart impacts across several northeastern states. The study examined impacts on specific industries, finding a pattern of results. They found increased sales of general merchandise with the number of establishments unchanged, little or no change in food stores and sales, decreased auto and furniture sales, and increases in eating, apparel and drug store sales. The authors sought to find patterns, not causation, and the results reflect that effort. Two historians conducted a study that evaluated the social and economic effect of Wal-Mart entrance in southern towns (Vance & Scott, 1992). [Footnote: The inclusion of an historical study in this review suggests the paucity of economic research on the subject. The weakness of the economic analysis by these authors (and their interpretation of the existing economic literature) highlights the problems that surround the analysis of discount stores. For example, the authors write "Walton hit upon another innovation . . . that by lowering his price per item he could sell a greater quantity of goods." (Vance & Scott, 1994, pg. 8, our italics). These types of wild assertions are endemic to journalistic reviews of the company, and sadly to some academic writing. The notion that Sam Walton discovered the demand curve is incredible, and casts real doubt on the veracity of the remaining analysis of the book.] The study addressed several sticky issues such as capital outflows and the benefits of locally owned firms. They concluded that the benefits Wal-Mart brought to the local communities outweighed the costs. Perhaps the best known research on Wal-Mart impacts focused on small towns in Iowa (Stone, 1989, 1995, 1997). The 1997 comparative study of 34 towns (5,000-40,000 pop.) examined changes in the same sets of variables as the earlier studies, using a metric known as the pull factor (the proportion of sales in a county as a proportion of statewide sales) in several industry sectors. The short-run effects pointed to Wal-Mart-induced increases (or slower decreases) in several sectors. This took the form of comparison between cities with and without Wal-Marts. The author suggested that long-term growth rates would diverge much more modestly than the pronounced short-run effects. He attributed this, in part, to the travel of shoppers to the Wal-Mart area from adjoining counties. Changes in the pull factors were used to capture this inter-county movement of consumers. We call this the travel-substitution effect. Stone's work was primarily aimed at describing the local effects of Wal-Mart entrance by focusing on large samples of towns. This work is both the most extensive and analytical of the existing literature.
This study went further in establishing retail strategies for local stores facing Wal-Mart entrance, expanding this section into a widely read book on retail strategies for competing with discount stores. The prime element of Stone’s analysis we
From Markov Chains to Stochastic Games
Markov chains and Markov decision processes (MDPs) are special cases of stochastic games. Markov chains describe the dynamics of the states of a stochastic game where each player has a single action in each state. Similarly, the dynamics of the states of a stochastic game form a Markov chain whenever the players' strategies are stationary. Markov decision processes are stochastic games with a single player. In addition, the decision problem faced by a player in a stochastic game when all other players choose a fixed profile of stationary strategies is equivalent to an MDP. The present chapter states classical results on Markov chains and Markov decision processes. The proofs use methods that introduce the reader to proofs of more general analog results on stochastic games.
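For the single-player special case (an MDP), the classical results referred to here can be made concrete with a compact value-iteration routine; the array shapes, discount factor, and tolerance below are illustrative.

```python
# Value iteration for a finite MDP: iterate the Bellman optimality operator
# until convergence, returning the optimal values and a greedy policy.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: (A, S, S) transition probabilities, R: (A, S) expected rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)          # Q[a, s] = R[a, s] + gamma * E[V(s')]
        V_new = Q.max(axis=0)            # greedy maximization over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```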
A fuel-based assessment of off-road diesel engine emissions.
The use of diesel engines in off-road applications is a significant source of nitrogen oxides (NOx) and particulate matter (PM10). Such off-road applications include railroad locomotives, marine vessels, and equipment used for agriculture, construction, logging, and mining. Emissions from these sources are only beginning to be controlled. Due to the large number of these engines and their wide range of applications, total activity and emissions from these sources are uncertain. A method for estimating the emissions from off-road diesel engines based on the quantity of diesel fuel consumed is presented. Emission factors are normalized by fuel consumption, and total activity is estimated by the total fuel consumed. Total exhaust emissions from off-road diesel equipment (excluding locomotives and marine vessels) in the United States during 1996 have been estimated to be 1.2 x 10(9) kg NOx and 1.2 x 10(8) kg PM10. Emissions estimates published by the U.S. Environmental Protection Agency are 2.3 times higher for both NOx and exhaust PM10 emissions than estimates based directly on fuel consumption. These emissions estimates disagree mainly due to differences in activity estimates, rather than to differences in the emission factors. All current emission inventories for off-road engines are uncertain because of the limited in-use emissions testing that has been performed on these engines. Regional- and state-level breakdowns in diesel fuel consumption by off-road mobile sources are also presented. Taken together with on-road measurements of diesel engine emissions, results of this study suggest that in 1996, off-road diesel equipment (including agriculture, construction, logging, and mining equipment, but not locomotives or marine vessels) was responsible for 10% of mobile source NOx emissions nationally, whereas on-road diesel vehicles contributed 33%.
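The fuel-based method itself is a single line of arithmetic: a fuel-normalized emission factor multiplied by total fuel consumed. The snippet below illustrates the calculation with placeholder numbers, not the study's values.

```python
# Fuel-based emission estimate: total emissions = emission factor (grams of
# pollutant per gallon of fuel) * total fuel burned. Inputs are placeholders.
def fuel_based_emissions_kg(fuel_gallons, grams_per_gallon):
    return fuel_gallons * grams_per_gallon / 1e3   # kilograms of pollutant

# e.g. fuel_based_emissions_kg(1.0e9, 250.0) estimates NOx in kg from 10^9
# gallons of diesel at an assumed 250 g NOx per gallon.
```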
Inducing Features of Random Fields
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing.
An experimental comparison of gender classification methods
Successful face analysis requires robust methods. It has been hard to compare the methods due to different experimental setups. We carried out a comparison study of state-of-the-art gender classification methods to find out their actual reliability. The main contributions are comprehensive and comparable classification results for the gender classification methods combined with automatic real-time face detection and, in addition, with manual face normalization. We also experimented with combining gender classifier outputs arithmetically, which led to increased classification accuracies. Furthermore, we contribute guidelines for carrying out classification experiments, knowledge on the strengths and weaknesses of the gender classification methods, and two new variants of the known methods.
Things rank and gross in nature: a review and synthesis of moral disgust.
Much like unpalatable foods, filthy restrooms, and bloody wounds, moral transgressions are often described as "disgusting." This linguistic similarity suggests that there is a link between moral disgust and more rudimentary forms of disgust associated with toxicity and disease. Critics have argued, however, that such references are purely metaphorical, or that moral disgust may be limited to transgressions that remind us of more basic disgust stimuli. Here we review the evidence that moral transgressions do genuinely evoke disgust, even when they do not reference physical disgust stimuli such as unusual sexual behaviors or the violation of purity norms. Moral transgressions presented verbally or visually and those presented as social transactions reliably elicit disgust, as assessed by implicit measures, explicit self-report, and facial behavior. Evoking physical disgust experimentally renders moral judgments more severe, and physical cleansing renders them more permissive or more stringent, depending on the object of the cleansing. Last, individual differences in the tendency to experience disgust toward physical stimuli are associated with variation in moral judgments and morally relevant sociopolitical attitudes. Taken together, these findings converge to support the conclusion that moral transgressions can in fact elicit disgust, suggesting that moral cognition may draw upon a primitive rejection response. We highlight a number of outstanding issues and conclude by describing 3 models of moral disgust, each of which aims to provide an account of the relationship between moral and physical disgust.
Streaming Technology in 3G Mobile Communication Systems
Many portal sites offer streaming audio and video services for accessing news and entertainment content on the Internet from a PC. 1,2 Currently, three incompatible proprietary solutions— offered by RealNetworks, Microsoft, and Apple— dominate the Internet streaming software market. In the near future, third-generation mobile communication systems will extend the scope of today's Internet streaming solutions by introducing standardized streaming services, targeting the mobile user's specific needs. 3 By offering data-transmission rates up to 384 Kbps for wide-area coverage and 2 Mbps for local-area coverage, 4 3G systems will be able to provide high-quality streamed Internet content to the rapidly growing mobile market. In addition to higher data rates, these systems also will offer value-added applications supported by an underlying network that combines streaming services with a range of unique mobile-specific services such as geographical positioning, user profiling, and mobile payment. 5 Mobile cinema ticketing is one example of such a service. First, the mobile network or a terminal integrated positioning system such as GPS would determine the user's geographical location. Then, the service would access a cinema database to generate a list of nearby movie theatres and a user profile database to determine what kind of movies the user likes best. Based on the geographical location information and user-defined preferences, the service would offer the user a selection of available movies and show times. The user would then have the option of using the mobile device to view corresponding movie trailers through a streaming service. Upon choosing a film, the user could purchase a ticket through payment software on the mobile device. This and other mobile application scenarios present numerous challenges, such as how to provide spectrum-efficient streaming services over varied radio-access networks to different types of end-user terminals. Our standard-based Interactive Media platform addresses these challenges by using an architecture that fits seamlessly into 3G mobile communication systems. An integral part of this architecture is a streaming proxy, which acts on both the service and transport levels. We recently conducted several field trials, which demonstrated that this platform is flexible enough to deal with different operator requirements and that it can provide high-quality streaming services in a mobile application environment. International Mobile Telecommunications-2000 (IMT-2000) and the Universal Mobile Telecommunications System (UMTS) 4 will be among the first 3G mobile communication systems to offer wireless wide-band multimedia services using the Internet protocol. Two important technological changes will facilitate …
Reviewer recommendation for pull-requests in GitHub: What can we learn from code review and bug assignment?
Context: The pull-based model, widely used in distributed software development, offers an extremely low barrier to entry for potential contributors (anyone can submit contributions to any project, through pull-requests). Meanwhile, the project’s core team must act as guardians of code quality, ensuring that pull-requests are carefully inspected before being merged into the main development line. However, with pull-requests becoming increasingly popular, the need for qualified reviewers also increases. GitHub facilitates this, by enabling the crowd-sourcing of pull-request reviews to a larger community of coders than just the project’s core team, as a part of their social coding philosophy. However, having access to more potential reviewers does not necessarily mean that it’s easier to find the right ones (the “needle in a haystack” problem). If left unsupervised, this process may result in communication overhead and delayed pull-request processing. Objective: This study aims to investigate whether and how previous approaches used in bug triaging and code review can be adapted to recommending reviewers for pull-requests, and how to improve the recommendation performance. Method: First, we extend three typical approaches used in bug triaging and code review for the new challenge of assigning reviewers to pull-requests. Second, we analyze social relations between contributors and reviewers, and propose a novel approach by mining each project’s comment networks (CNs). Finally, we combine the CNs with traditional approaches, and evaluate the effectiveness of all these methods on 84 GitHub projects through both quantitative and qualitative analysis. Results: We find that CN-based recommendation can achieve, by itself, similar performance as the traditional approaches. However, the mixed approaches can achieve significant improvements compared to using either of them independently. Conclusion: Our study confirms that traditional approaches to bug triaging and code review are feasible for pull-request reviewer recommendations on GitHub. Furthermore, their performance can be improved significantly by combining them with information extracted from prior social interactions between developers on GitHub. These results call for novel tools to support process automation in social coding platforms that combine social (e.g., common interests among developers) and technical factors (e.g., developers’ expertise). © 2016 Elsevier B.V. All rights reserved.
Heart failure etiology and response to milrinone in decompensated heart failure: results from the OPTIME-CHF study.
OBJECTIVES The goal of this study was to assess the interaction between heart failure (HF) etiology and response to milrinone in decompensated HF. BACKGROUND Etiology has prognostic and therapeutic implications in HF, but its relationship to response to inotropic therapy is unknown. METHODS The Outcomes of a Prospective Trial of Intravenous Milrinone for Exacerbations of Chronic Heart Failure (OPTIME-CHF) study randomized 949 patients with systolic dysfunction and decompensated HF to receive 48 to 72 h of intravenous milrinone or placebo. The primary end point was days hospitalized from cardiovascular causes within 60 days. In a post-hoc analysis, we evaluated the interaction between response to milrinone and etiology of HF. RESULTS The primary end point was 13.0 days for ischemic patients and 11.7 days for nonischemic patients (p = 0.2). Sixty-day mortality was 11.6% for the ischemic group and 7.5% for the nonischemic group (p = 0.03). After adjustment for baseline differences, there was a significant interaction between etiology and the effect of milrinone. Milrinone-treated patients with ischemic etiology tended to have worse outcomes than those treated with placebo in terms of the primary end point (13.6 days for milrinone vs. 12.4 days for placebo, p = 0.055 for interaction) and the composite of death or rehospitalization (42% vs. 36% for placebo, p = 0.01 for interaction). In contrast, outcomes in nonischemic patients treated with milrinone tended to be improved in terms of the primary end point (10.9 vs. 12.6 days placebo) and the composite of death or rehospitalization (28% vs. 35% placebo). CONCLUSIONS Milrinone may have a bidirectional effect based on etiology in decompensated HF. Milrinone may be deleterious in ischemic HF, but neutral to beneficial in nonischemic cardiomyopathy.
The State of the Art in Multiple Object Tracking Under Occlusion in Video Sequences
In this paper, we present a review of existing techniques and systems for tracking multiple occluding objects using one or more cameras. Following a formulation of the occlusion problem, we divide these techniques into two groups: merge-split (MS) approaches and straight-through (ST) approaches. Then, we consider tracking in ball game applications, with emphasis on soccer. Based on this assessment of the state of the art, we identify what appear to be the most promising approaches for tracking in general and for soccer in particular.
Music therapy and music medicine for children and adolescents.
This article summarizes the research on music therapy and music medicine for children and adolescents with diagnoses commonly treated by psychiatrists. Music therapy and music medicine are defined, effects of music on the brain are described, and music therapy research in psychiatric treatment is discussed. Music therapy research with specific child/adolescent populations is summarized, including disorders usually diagnosed in childhood, substance abuse, mood/anxiety disorders, and eating disorders. Clinical implications are listed, including suggestions for health care professionals seeking to use music medicine techniques. Strengths and weaknesses of music therapy treatment are discussed, as well as areas for future research.
Findings from the NIMH Multimodal Treatment Study of ADHD (MTA): implications and applications for primary care providers.
In 1992, the National Institute of Mental Health and 6 teams of investigators began a multisite clinical trial, the Multimodal Treatment of Attention-Deficit Hyperactivity Disorder (MTA) study. Five hundred seventy-nine children were randomly assigned to either routine community care (CC) or one of three study-delivered treatments, all lasting 14 months. The three MTA treatments-monthly medication management (usually methylphenidate) following weekly titration (MedMgt), intensive behavioral treatment (Beh), and the combination (Comb)-were designed to reflect known best practices within each treatment approach. Children were assessed at four time points in multiple outcome domains. Results indicated that Comb and MedMgt interventions were substantially superior to Beh and CC interventions for attention-deficit hyperactivity disorder symptoms. For other functioning domains (social skills, academics, parent-child relations, oppositional behavior, anxiety/depression), results suggested slight advantages of Comb over single treatments (MedMgt, Beh) and community care. High quality medication treatment characterized by careful yet adequate dosing, three times daily methylphenidate administration, monthly follow-up visits, and communication with schools conveyed substantial benefits to those children who received it. In contrast to the overall study findings that showed the largest benefits for high quality medication management (regardless of whether given in the MedMgt or Comb group), secondary analyses revealed that Comb had a significant incremental effect over MedMgt (with a small effect size for this comparison) when categorical indicators of excellent response and composite outcome measures were used. In addition, children with parent-defined comorbid anxiety disorders, particularly those with overlapping disruptive disorder comorbidities, showed preferential benefits to the Beh and Comb interventions. Parental attitudes and disciplinary practices appeared to mediate improved response to the Beh and Comb interventions.
Deep Predictive Coding Network for Object Recognition
Based on the predictive coding theory in neuroscience, we designed a bi-directional and recurrent neural net, namely deep predictive coding networks (PCN), that has feedforward, feedback, and recurrent connections. Feedback connections from a higher layer carry the prediction of its lower-layer representation; feedforward connections carry the prediction errors to its higher-layer. Given image input, PCN runs recursive cycles of bottom-up and top-down computation to update its internal representations and reduce the difference between bottom-up input and top-down prediction at every layer. After multiple cycles of recursive updating, the representation is used for image classification. With benchmark datasets (CIFAR-10/100, SVHN, and MNIST), PCN was found to always outperform its feedforward-only counterpart: a model without any mechanism for recurrent dynamics, and its performance tended to improve given more cycles of computation over time. In short, PCN reuses a single architecture to recursively run bottom-up and top-down processes to refine its representation towards more accurate and definitive object recognition.
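As a rough illustration of the recursive bottom-up/top-down updating described above, the toy NumPy sketch below refines a higher-layer representation over several cycles so that its top-down (linear) prediction matches the bottom-up input; this illustrates only the updating principle, with random made-up weights, not the convolutional PCN architecture itself.

```python
import numpy as np

# One layer of predictive-coding-style updating: the higher-layer representation
# r is refined over several cycles so that its top-down prediction W @ r matches
# the bottom-up input x. Weights and input are random; this is a linear toy, not
# the PCN of the paper.

rng = np.random.default_rng(1)
W = rng.normal(size=(32, 8)) / np.sqrt(32)   # feedback (generative) weights
x = rng.normal(size=32)                      # bottom-up input to this layer
r = np.zeros(8)                              # higher-layer representation

for cycle in range(30):
    pred = W @ r                 # top-down prediction of the lower layer
    err = x - pred               # prediction error sent feedforward
    r += 0.2 * (W.T @ err)       # update the representation to reduce the error
    if cycle % 10 == 0:
        print(f"cycle {cycle:2d}, prediction error norm {np.linalg.norm(err):.3f}")
```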
Learning to classify species with barcodes
According to many field experts, specimens classification based on morphological keys needs to be supported with automated techniques based on the analysis of DNA fragments. The most successful results in this area are those obtained from a particular fragment of mitochondrial DNA, the gene cytochrome c oxidase I (COI) (the "barcode"). Since 2004 the Consortium for the Barcode of Life (CBOL) promotes the collection of barcode specimens and the development of methods to analyze the barcode for several tasks, among which the identification of rules to correctly classify an individual into its species by reading its barcode. We adopt a Logic Mining method based on two optimization models and present the results obtained on two datasets where a number of COI fragments are used to describe the individuals that belong to different species. The method proposed exhibits high correct recognition rates on a training-testing split of the available data using a small proportion of the information available (e.g., correct recognition approx. 97% when only 20 sites of the 648 available are used). The method is able to provide compact formulas on the values (A, C, G, T) at the selected sites that synthesize the characteristic of each species, a relevant information for taxonomists. We have presented a Logic Mining technique designed to analyze barcode data and to provide detailed output of interest to the taxonomists and the barcode community represented in the CBOL Consortium. The method has proven to be effective, efficient and precise.
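For readers who want to experiment with the general idea, the sketch below classifies synthetic barcode-like sequences from a handful of informative sites; mutual-information site selection plus a shallow decision tree stands in for the Logic Mining method of the abstract, and all sequences and species signatures are made up.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic barcode classification: each "sequence" is 60 integer-coded sites
# (0..3 standing for A, C, G, T); two species differ at two signature sites.
# Site selection by mutual information plus a shallow decision tree replaces the
# paper's Logic Mining approach, but likewise yields compact site-level rules.

rng = np.random.default_rng(0)
n_sites, n_per_species = 60, 40

def make_species(signature):
    seqs = rng.integers(0, 4, size=(n_per_species, n_sites))
    for site, base in signature:
        conserved = rng.random(n_per_species) < 0.95
        seqs[conserved, site] = base        # mostly conserved signature base
    return seqs

X = np.vstack([make_species([(3, 0), (17, 2)]),   # species 0: A at site 3, G at 17
               make_species([(3, 1), (17, 3)])])  # species 1: C at site 3, T at 17
y = np.repeat([0, 1], n_per_species)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
select = SelectKBest(mutual_info_classif, k=5).fit(X_tr, y_tr)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(select.transform(X_tr), y_tr)
print("test accuracy:", tree.score(select.transform(X_te), y_te))
print(export_text(tree))
```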
Implant-supported full-arch zirconia-based mandibular fixed dental prostheses. Eight-year results from a clinical pilot study.
OBJECTIVE The purpose of this pilot study was to evaluate the long-term clinical performance of implant-supported full-arch zirconia-based fixed dental prostheses (FDPs). MATERIALS AND METHODS Ten patients received full-arch zirconia-based (Cercon) mandibular FDPs supported by four implants (Astra Tech). Nine patients received 10-unit FDPs and one patient received a 9-unit FDP. The FDPs were cemented onto individually prepared titanium abutments and were evaluated at baseline and after 12, 24, 36 and 96 months. RESULTS Nine patients attended the 8-year follow-up. None of the restorations showed bulk fracture, all FDPs were in use. Fractures of the veneering porcelain were, however, observed in eight patients. A total of 36 out of 89 units (40%) showed such fractures. Patient satisfaction was excellent despite the veneering material fractures. CONCLUSION Results from this 8-year pilot study suggest that implant-supported full-arch zirconia-based FDPs can be an acceptable treatment alternative.
Quench Behavior and Protection in Cryocooler-Cooled YBCO Pancake Coil for SMES
The thermal behavior of a high-temperature superconducting (HTS) coil is significantly different from that of a low-temperature superconducting (LTS) coil because it has a greater volumetric heat capacity at the temperatures required for practical use. Therefore, the possibility of a quench in HTS coils is much lower than in LTS coils. When a YBCO coil is applied to a Superconducting Magnetic Energy Storage (SMES) system, electrical charging and discharging are repeated; the superconducting characteristics of the YBCO coated conductor may therefore deteriorate as a result of cyclic tensile strain. To enhance the reliability and safety of HTS coils, a protection scheme that assumes a quench is also required. In this study, we focus on a coil wound with a YBCO laminated bundle (parallel) conductor intended for SMES application and investigate the characteristics of normal-zone propagation and the thermal behavior within the coil during a quench, using a newly developed computer code based on the finite element method and an equivalent circuit. We also propose a quench detection method based on the observation of nonuniform current in the YBCO laminated bundle conductor and discuss its validity by comparison with conventional quench-voltage detection.
Performance analysis of stochastic behavior trees
This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved. In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.
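The aggregation idea, propagating success probabilities and expected execution times from children to parent nodes, can be sketched for the simplest case. The example below assumes independent children that are executed once (no running state or repeated ticks, unlike the full DTMC treatment in the paper), and the leaf values are hypothetical controller statistics.

```python
# Propagating (success probability, expected time) estimates from children to
# their parent node, assuming independent children that each run once.
# The leaf values are hypothetical controller statistics.

def sequence(children):
    """Sequence node: succeeds only if every child succeeds, left to right."""
    p_success, exp_time, reach = 1.0, 0.0, 1.0
    for p_i, t_i in children:
        exp_time += reach * t_i      # child i runs only if earlier ones succeeded
        reach *= p_i
        p_success *= p_i
    return p_success, exp_time

def fallback(children):
    """Fallback/Selector node: succeeds as soon as one child succeeds."""
    p_fail, exp_time, reach = 1.0, 0.0, 1.0
    for p_i, t_i in children:
        exp_time += reach * t_i      # child i runs only if earlier ones failed
        reach *= (1.0 - p_i)
        p_fail *= (1.0 - p_i)
    return 1.0 - p_fail, exp_time

grasp = sequence([(0.95, 2.0), (0.80, 5.0)])   # approach, then close the gripper
task = fallback([grasp, (0.99, 10.0)])         # fall back to a slower strategy
print("grasp:", grasp)   # (0.76, 6.75)
print("task :", task)    # (~0.998, 9.15)
```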
Reinforcement learning with analogue memristor arrays
The neocarzinostatin biosynthetic gene cluster from Streptomyces carzinostaticus ATCC 15944 involving two iterative type I polyketide synthases.
The biosynthetic gene cluster for the enediyne antitumor antibiotic neocarzinostatin (NCS) was localized to 130 kb continuous DNA from Streptomyces carzinostaticus ATCC15944 and confirmed by gene inactivation. DNA sequence analysis of 92 kb of the cloned region revealed 68 open reading frames (ORFs), 47 of which were determined to constitute the NCS cluster. Sequence analysis of the genes within the NCS cluster suggested dNDP-D-mannose as a precursor for the deoxy aminosugar, revealed two distinct type I polyketide synthases (PKSs), and supported a convergent model for NCS chromophore biosynthesis from the deoxy aminosugar, naphthoic acid, and enediyne core building blocks. These findings shed light into deoxysugar biosynthesis, further support the iterative type I PKS paradigm for enediyne core biosynthesis, and unveil a mechanism for microbial polycyclic aromatic polyketide biosynthesis by an iterative type I PKS.
Predicting taxi demand at high spatial resolution: Approaching the limit of predictability
In big cities, taxi service is imbalanced. In some areas, passengers wait too long for a taxi, while in others, many taxis roam without passengers. Knowledge of where a taxi will become available can help us solve the taxi demand imbalance problem. In this paper, we employ a holistic approach to predict taxi demand at high spatial resolution. We showcase our techniques using two real-world data sets, yellow cabs and Uber trips in New York City, and perform an evaluation over 9,940 building blocks in Manhattan. Our approach consists of two key steps. First, we use entropy and the temporal correlation of human mobility to measure the demand uncertainty at the building block level. Second, to identify which predictive algorithm can approach the theoretical maximum predictability, we implement and compare three predictors: the Markov predictor (a probability-based predictive algorithm), the Lempel-Ziv-Welch predictor (a sequence-based predictive algorithm), and the Neural Network predictor (a predictive algorithm that uses machine learning). The results show that predictability varies by building block and, on average, the theoretical maximum predictability can be as high as 83%. The performance of the predictors also varies: the Neural Network predictor provides better accuracy for blocks with low predictability, and the Markov predictor provides better accuracy for blocks with high predictability. In blocks with high maximum predictability, the Markov predictor is able to predict the taxi demand with an 89% accuracy, 11% better than the Neural Network predictor, while requiring only 0.03% of the computation time. These findings indicate that the maximum predictability can be a good metric for selecting prediction algorithms.
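A minimal version of the Markov predictor mentioned above can be written in a few lines: discretize one block's demand history into levels, count level-to-level transitions, and predict the most frequent successor of the current level. The demand series here is invented for illustration.

```python
from collections import Counter, defaultdict

# First-order Markov predictor sketch for a single building block: discretize
# the demand series into levels, count observed transitions, and predict the
# most common successor of the current level. Demand values are made up.

def discretize(series, n_levels=4):
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_levels or 1.0
    return [min(int((v - lo) / width), n_levels - 1) for v in series]

def markov_predict(levels):
    transitions = defaultdict(Counter)
    for prev, nxt in zip(levels, levels[1:]):
        transitions[prev][nxt] += 1
    current = levels[-1]
    if transitions[current]:
        return transitions[current].most_common(1)[0][0]
    return current                      # unseen state: fall back to persistence

demand = [3, 5, 9, 14, 12, 7, 4, 3, 6, 10, 15, 11, 6, 4, 2, 5, 11, 16, 13, 8]
levels = discretize(demand)
print("predicted next demand level:", markov_predict(levels))
```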
A Correction to the Sentential Calculus of Tarski's Introduction to Logic
Received December 8, 1941. 1 Alfred Tarski, Introduction to logic and to the methodology of deductive sciences, New York 1941. See page 147. 2Page 151, last paragraph. I have brought this error to the attention of Professor Tarski. He attributes it to inadvertence in writing down the axioms, and plans to make the necessary changes in the next edition of his book. Note added January 6, 1942. I have just noticed that the above matrices for "--" and ".-" are the same, except for differences in designation of the elements, as those given by Lukasiewicz (1868, p. 65) for his three-valued calculus.
Design Research in the Technology of Information Systems : Truth or Dare
This essay develops the philosophical foundations for design research in the Technology of Information Systems (TIS). Traditional writings on philosophy of science cannot fully describe this mode of research, which dares to intervene and improve to realize alternative futures instead of explaining or interpreting the past to discover truth. Accordingly, in addition to philosophy of science, the essay draws on writings about the act of designing, philosophy of technology and the substantive (IS) discipline. I define design research in TIS as in(ter)vention in the representational world defined by the hierarchy of concerns following semiotics. The complementary nature of the representational (internal) and real (external) environments provides the basis to articulate the dual ontological and epistemological bases. Understanding design research in TIS in this manner suggests operational principles in the internal world as the form of knowledge created by design researchers, and artifacts that embody these are seen as situated instantiations of normative theories that affect the external phenomena of interest. Throughout the paper, multiple examples illustrate the arguments. Finally, I position the resulting ‘method’ for design research vis-à-vis existing research methods and argue for its legitimacy as a viable candidate for research in the IS discipline.
Estimating the Maximum Expected Value: An Analysis of (Nested) Cross Validation and the Maximum Sample Average
We investigate the accuracy of the two most common estimators for the maximum expected value of a general set of random variables: a generalization of the maximum sample average, and cross validation. No unbiased estimator exists and we show that it is non-trivial to select a good estimator without knowledge about the distributions of the random variables. We investigate and bound the bias and variance of the aforementioned estimators and prove consistency. The variance of cross validation can be significantly reduced, but not without risking a large bias. The bias and variance of different variants of cross validation are shown to be very problem-dependent, and a wrong choice can lead to very inaccurate estimates.
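The two estimators can be compared on simulated data. The sketch below contrasts the maximum sample average with a two-fold cross-validation estimator (select the maximizing variable on one half, evaluate it on the other, average over both directions); the Gaussian distributions are arbitrary and chosen only to make the upward and downward biases visible.

```python
import numpy as np

# Compare the maximum sample average with a two-fold cross-validation estimator
# of max_i E[X_i] on simulated Gaussian data (true maximum = 0.1). The CV
# estimator picks the best variable on one half and evaluates it on the other,
# averaging both directions; the distributions are arbitrary illustrations.

rng = np.random.default_rng(0)
true_means = np.array([0.0, 0.0, 0.1])
n, trials = 40, 20000

est_max, est_cv = [], []
for _ in range(trials):
    X = rng.normal(loc=true_means, scale=1.0, size=(n, 3))
    est_max.append(X.mean(axis=0).max())            # maximum sample average
    a, b = X[: n // 2].mean(axis=0), X[n // 2:].mean(axis=0)
    est_cv.append(0.5 * (b[a.argmax()] + a[b.argmax()]))

print(f"maximum sample average : {np.mean(est_max):+.3f}  (biased upward)")
print(f"cross-validation       : {np.mean(est_cv):+.3f}  (biased downward)")
print("true maximum expected value: +0.100")
```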
MDMA-assisted psychotherapy using low doses in a small sample of women with chronic posttraumatic stress disorder.
The purpose of this study was to investigate the safety of different doses of MDMA-assisted psychotherapy administered in a psychotherapeutic setting to women with chronic PTSD secondary to a sexual assault, and also to obtain preliminary data regarding efficacy. Although this study was originally planned to include 29 subjects, political pressures led to the closing of the study before it could be finished, at which time only six subjects had been treated. Preliminary results from those six subjects are presented here. We found that low doses of MDMA (between 50 and 75 mg) were both psychologically and physiologically safe for all the subjects. Future studies in larger samples and using larger doses are needed in order to further clarify the safety and efficacy of MDMA in the clinical setting in subjects with PTSD.
Formalizing information security knowledge
Unified and formal knowledge models of the information security domain are fundamental requirements for supporting and enhancing existing risk management approaches. This paper describes a security ontology which provides an ontological structure for information security domain knowledge. Besides existing best-practice guidelines such as the German IT Grundschutz Manual also concrete knowledge of the considered organization is incorporated. An evaluation conducted by an information security expert team has shown that this knowledge model can be used to support a broad range of information security risk management approaches.
Indirect measures of gene flow and migration: FST≠1/(4Nm+1)
The difficulty of directly measuring gene flow has led to the common use of indirect measures extrapolated from genetic frequency data. These measures are variants of FST, a standardized measure of the genetic variance among populations, and are used to solve for Nm, the number of migrants successfully entering a population per generation. Unfortunately, the mathematical model underlying this translation makes many biologically unrealistic assumptions; real populations are very likely to violate these assumptions, such that there is often limited quantitative information to be gained about dispersal from using gene frequency data. While studies of genetic structure per se are often worthwhile, and FST is an excellent measure of the extent of this population structure, it is rare that FST can be translated into an accurate estimate of Nm.
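For reference, the back-calculation that the title warns against looks as follows; the island-model assumptions (equilibrium, infinitely many demes, no selection or mutation) are exactly what real populations tend to violate, so the output is illustrative only.

```python
# The indirect estimate criticized in the paper: under Wright's island model
# (equilibrium, infinitely many demes, no selection or mutation),
# FST is approximately 1/(4Nm + 1), so Nm is back-calculated as below.
# Real populations violate these assumptions, so treat the output as illustrative.

def nm_from_fst(fst: float) -> float:
    """Apparent number of migrants per generation under the island model."""
    return (1.0 / fst - 1.0) / 4.0

for fst in (0.05, 0.10, 0.25):
    print(f"FST = {fst:.2f}  ->  Nm ~ {nm_from_fst(fst):.2f}")
```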
Simple Resolver Demodulation
A simple method to demodulate the resolver signal is introduced in this article. The proposed method provides the demodulated resolver quadrature signals, sine and cosine, without requiring the resolver excitation signal as the carrier signal. Therefore, the phase delay between the stator winding and the rotor winding is avoided. The configuration consists of commercially available circuit building blocks such as a comparator, sample and hold (S/H), and monostable multivibrators, which makes the approach economically attractive. The operation of the proposed demodulator and its performance are confirmed by simulation and experimental results.
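The peak-sampling idea can be checked in a toy simulation: both stator signals are sampled once per carrier period at the carrier peak, which recovers the sine and cosine of the shaft angle directly. For simplicity the sampling instants below are taken from the known excitation; in the article they are derived by the comparator, sample-and-hold and monostable circuitry without using the excitation as a demodulation carrier, and all signal parameters here are made up.

```python
import numpy as np

# Toy resolver demodulation by peak sampling: the stator signals are
# sin(theta)*sin(w t) and cos(theta)*sin(w t); sampling both at the positive
# carrier peaks recovers sin(theta) and cos(theta), and atan2 gives the angle.
# Sampling instants are taken from the known excitation here for simplicity.

fs, f_carrier, f_shaft = 200_000, 5_000, 50          # Hz; illustrative values only
t = np.arange(0, 0.02, 1 / fs)
theta = 2 * np.pi * f_shaft * t                      # slowly rotating shaft angle
carrier = np.sin(2 * np.pi * f_carrier * t)
u_sin, u_cos = np.sin(theta) * carrier, np.cos(theta) * carrier

# sample-and-hold at the positive carrier peaks (one sample per carrier period)
n_periods = int(t[-1] * f_carrier)
peak_idx = np.round((0.25 + np.arange(n_periods)) / f_carrier * fs).astype(int)
sin_hat, cos_hat = u_sin[peak_idx], u_cos[peak_idx]

angle_hat = np.unwrap(np.arctan2(sin_hat, cos_hat))
print("max angle error (rad):", np.abs(angle_hat - theta[peak_idx]).max())
```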
A 100–300-GHz Free-Space Scalar Network Analyzer Using Compact Tx and Rx Modules
This paper presents a 100-300-GHz quasi-optical network analyzer using compact transmitter and receiver modules. The transmitter includes a wideband double bow-tie slot antenna and employs a Schottky diode as a frequency harmonic multiplier. The receiver includes a similar antenna, a Schottky diode used as a subharmonic mixer, and an LO/IF diplexer. The 100-300-GHz RF signals are the 5th-11th harmonics generated by the frequency multiplier when an 18-27-GHz LO signal is applied. The measured transmitter conversion gain with Pin = 18 dBm is from -35 to -59 dB for the 5th-11th harmonic, respectively, and results in a transmitter EIRP from +3 to -20 dBm up to 300 GHz. The measured mixer conversion gain is from -30 to -47 dB at the 5th-11th harmonic, respectively. The system has a dynamic range > 60 dB at 200 GHz in a 100-Hz bandwidth for a transmit and receive system based on 12-mm lenses and spaced 60 cm from each other. Frequency-selective surfaces at 150 and 200 GHz are tested by the proposed design and their measured results agree with simulations. Application areas are low-cost scalar network analyzers for wideband quasi-optical 100 GHz-1 THz measurements.
A novel network coded parallel transmission framework for high-speed Ethernet
Parallel transmission, as defined in high-speed Ethernet standards, enables to use less expensive optoelectronics and offers backwards compatibility with legacy Optical Transport Network (OTN) infrastructure. However, optimal parallel transmission does not scale to large networks, as it requires computationally expensive optimal multipath routing algorithms to minimize differential delay, and thus the required buffer size to ensure frame synchronization. In this paper, we propose a novel parallel transmission framework for high-speed Ethernet, which we refer to as network coded parallel transmission, capable of effective buffer management and frame synchronization without the need for complex multipath algorithms. We show that using network coding can reduce the delay caused by packet reordering at the receiver, thus requiring a smaller overall buffer size, while improving the network throughput. We design the framework in full compliance with high-speed Ethernet standards specified in IEEE802.3ba and present detailed schemes including encoding, data structure of coded parallel transmission, buffer management and decoding at the receiver side. The proposed network coded parallel transmission framework is simple to implement and presents a potential major breakthrough in the system design of future high-speed Ethernet.
Cleanroom Software Engineering for Zero-Defect Software
Cleanroom software engineering is a theory-based, team-oriented process for developing very high quality software under statistical quality control. Cleanroom combines formal methods of object-based box structure specification and design, function-theoretic correctness verification, and statistical usage testing for quality certification, to produce software that has zero defects with high probability. Cleanroom management is based on a life cycle of incremental development of user-function software increments that accumulate into the final product. Cleanroom teams in IBM and other organizations are achieving remarkable quality results in both new system development and modifications and extensions to existing systems.
Temporal disease trajectories condensed from population-wide registry data covering 6.2 million patients
A key prerequisite for precision medicine is the estimation of disease progression from the current patient state. Disease correlations and temporal disease progression (trajectories) have mainly been analysed with focus on a small number of diseases or using large-scale approaches without time consideration, exceeding a few years. So far, no large-scale studies have focused on defining a comprehensive set of disease trajectories. Here we present a discovery-driven analysis of temporal disease progression patterns using data from an electronic health registry covering the whole population of Denmark. We use the entire spectrum of diseases and convert 14.9 years of registry data on 6.2 million patients into 1,171 significant trajectories. We group these into patterns centred on a small number of key diagnoses such as chronic obstructive pulmonary disease (COPD) and gout, which are central to disease progression and hence important to diagnose early to mitigate the risk of adverse outcomes. We suggest such trajectory analyses may be useful for predicting and preventing future diseases of individual patients.
Efficacy and tolerability of a fixed-dose combination of telmisartan plus hydrochlorothiazide in patients uncontrolled with telmisartan monotherapy
The antihypertensive effects of a telmisartan 80 mg/hydrochlorothiazide (HCTZ) 12.5 mg fixed-dose combination and telmisartan 80 mg monotherapy were compared in patients with a history of mild-to-moderate essential hypertension and inadequate BP control (DBP ⩾90 mm Hg) following 8 weeks of telmisartan monotherapy. At the end of this period, 491 patients (62.9% men; mean age 55.3 years) whose DBP was ⩾90 mm Hg were double-blind randomised to once-daily telmisartan 80 mg/HCTZ 12.5 mg (n = 246) or telmisartan 80 mg (n = 245). Trough (24 h post-dose) clinic BP was measured after 4 and 8 weeks of double-blind therapy. At the end of double-blind treatment, patients receiving telmisartan 80 mg/HCTZ 12.5 mg had significant additional decrements in clinic SBP/DBP over telmisartan 80 mg of −5.7/−3.1 mm Hg (P < 0.01). Most of the additional effect occurred during the first 4 weeks of treatment. The proportion of patients with normalised BP (SBP <140 mm Hg and DBP <90 mm Hg) was significantly greater in the telmisartan 80 mg/HCTZ 12.5 mg group than the telmisartan 80 mg group (41.5% vs 26.1%; P < 0.05). Both treatments were well tolerated. The incidence of adverse events was similar except for diarrhoea, which occurred more frequently in the telmisartan 80 mg/HCTZ 12.5 mg group, and oedema, which occurred more frequently in the telmisartan group. Our results indicate that a telmisartan 80 mg/HCTZ 12.5 mg fixed-dose combination confers significant additional BP reductions compared with continuation of telmisartan monotherapy in non-responders.
Shallow-water gravity-flow deposits, Chapel Island Formation, southeast Newfoundland, Canada
A remarkable suite of shallow-water, gravity-flow deposits is found within very thinly-bedded siltstones and storm-generated sandstones of member 2 of the Chapel Island Formation in southeast Newfoundland. Medium to thick siltstone beds, termed unifites, range from non-graded and structureless (Type 1) to slightly graded with poorly developed lamination (Type 2) to well graded with lamination similar to that described for fine-grained turbidites (Type 3). Unifite beds record deposition from a continuum of flow types from liquefied flows (Type 1) to turbidity currents (Type 3). Calculations of time for pore-fluid pressure dissipation support the feasibility of such transitions. Raft-bearing beds consist of siltstone with large blocks or 'rafts' of thinly bedded strata derived from the underlying and adjacent substrate. Characteristics suggest deposition from debris flows of variable strength. Estimates of debris strength and depositional slope are calculated for a pebbly mudstone bed using measurable and assumed parameters. An assumed density of 2.0 g cm⁻³ and a compaction estimate of 50% give a strength estimate of 9.7 dyn cm⁻² and a depositional slope estimate of 0.77°. The lithologies and sedimentary structures in member 2 indicate an overall grain-size distribution susceptible to liquefaction. Inferred high sediment accumulation rates created underconsolidated sediments (metastable packing). Types of sediment failure included in situ liquefaction ('disturbed bedding'), sliding and slumping. Raft-bearing debrites resulted from sliding and incorporation of water. Locally, hummocky cross-stratified sandstone directly overlies slide deposits and raft-bearing beds, linking sediment failure to the cyclical wave loading associated with large storms. The gravity flows of the Chapel Island Formation closely resemble those described from the surfaces of modern, mud-rich, marine deltas. Details of deltaic gravity-flow deposition from this and other outcrop studies further our understanding of modern deposits by adding a third dimension to studies primarily carried out with side-scan sonar.
A fundamental plant evolutionary problem: the origin of land-plant sporophyte; is a new hypothesis possible?
The origin of the sporophyte in land plants represents a fundamental phase in the plant evolution. Today this subject is controversial and, in my opinion, scarcely considered in our textbooks and journals of botany, in spite of its importance. There are two conflicting theories concerning the origin of the alternating generations in land plants: the "antithetic" and the "homologous" theory. These have never been fully resolved. The antithetic theory maintains that the sporophyte and gametophyte generations are fundamentally dissimilar and that the sporophyte originated in an ancestor organism with haplontic cycle by the zygote dividing mitotically rather than meiotically, and with a developmental pattern not copying the developmental events of the gametophyte. The sporophyte generation was an innovation of critical significance for the land-plant evolution. By contrast, the homologous theory simply stated that a mass of cells forming mitotically from the zygote adopted the same developmental plan of the gametophyte, but giving origin to a diploid sporophyte. In this context, a very important question concerns the possible ancestor or ancestors of the land plants. Considerable evidences at morphological, cytological, ultrastructural, biochemical and, especially, molecular level, strongly suggest that the land plants or Embryophyta (both vascular and non-vascular) evolved from green algal ancestor(s), similar to those belonging to the genus Coleochaete, Chara and Nitella, living today. Their organism is haploid for most of their life cycle, and diploid only in the zygote phase (haplontic cycle). On the contrary, the land plants are characterized by a diplo-haplontic life cycle. Several questions are implied in these theories, and numerous problems remain to be solved, such as, for example, the morphological difference between gametophyte and sporophyte (heteromorphism, already present in the first land plants, the bryophytes), and the strong gap existing between these last with a sporophyte dependent on the gametophyte, and the pteridophytes having the gametophyte and sporophyte generations independent. On the ground of all of the evidences on the ancestors of the land plants, the antithetic theory is considered more plausible than the homologous theory. Unfortunately, no phylogenetic relationship exists between some green algae with diplontic life cycle and the land plants. Otherwise, perhaps, it should be possible to hypothesize another scenario in which to place the origin of the alternating generations of the land plants. In this case, could the gametophyte be formed by gametes produced from the sporophyte, through their mitoses or a delayed fertilization process?
Computer and internet use by persons after traumatic spinal cord injury.
OBJECTIVE To determine whether computer and internet use by persons post spinal cord injury (SCI) is sufficiently prevalent and broad-based to consider using this technology as a long-term treatment modality for patients who have sustained SCI. DESIGN A multicenter cohort study. SETTING Twenty-six past and current U.S. regional Model Spinal Cord Injury Systems. PARTICIPANTS Patients with traumatic SCI (N=2926) with follow-up interviews between 2004 and 2006, conducted at 1 or 5 years postinjury. INTERVENTIONS Not applicable. RESULTS Results revealed that 69.2% of participants with SCI used a computer; 94.2% of computer users accessed the internet. Among computer users, 19.1% used assistive devices for computer access. Of the internet users, 68.6% went online 5 to 7 days a week. The most frequent use for internet was e-mail (90.5%) and shopping sites (65.8%), followed by health sites (61.1%). We found no statistically significant difference in computer use by sex or level of neurologic injury, and no difference in internet use by level of neurologic injury. Computer and internet access differed significantly by age, with use decreasing as age group increased. The highest computer and internet access rates were seen among participants injured before the age of 18. Computer and internet use varied by race: 76% of white compared with 46% of black subjects were computer users (P<.001), and 95.3% of white respondents who used computers used the internet, compared with 87.6% of black respondents (P<.001). Internet use increased with education level (P<.001): eighty-six percent of participants who did not graduate from high school or receive a degree used the internet, while over 97% of those with a college or associate's degree did. CONCLUSIONS While the internet holds considerable potential as a long-term treatment modality after SCI, limited access to the internet by those who are black, those injured after age 18, and those with less education does reduce its usefulness in the short term for these subgroups.
A Fast, Adaptive Variant of the Goemans-Williamson Scheme for the Prize-Collecting Steiner Tree Problem
We introduce a new variant of the Goemans-Williamson (GW) scheme for the Prize-Collecting Steiner Tree Problem (PCST). Motivated by applications in signal processing, the focus of our contribution is to construct a very fast algorithm for the PCST problem that still achieves a provable approximation guarantee. Our overall algorithm runs in time O(dm logn) on a graph with m edges, where all edge costs and node prizes are specified with d bits of precision. Moreover, our algorithm maintains the Lagrangian-preserving factor-2 approximation guarantee of the GW scheme. Similar to [Cole, Hariharan, Lewenstein, and Porat, SODA 2001], we use dynamic edge splitting in order to efficiently process all cluster merge and deactivation events in the moat-growing stage of the GW scheme. Our edge splitting rules are more adaptive to the input, thereby reducing the amount of time spent on processing intermediate edge events. Numerical experiments based on the public DIMACS test instances show that our edge splitting rules are very effective in practice. In most test cases, the number of edge events processed per edge is less than 2 on average. On a laptop computer from 2010, the longest running time of our implementation on a DIMACS challenge instance is roughly 1.3 seconds (the corresponding instance has about 340,000 edges). Since the running time of our algorithm scales nearly linearly with the input size and exhibits good constant factors, we believe that our algorithm could potentially be useful in a variety of applied settings.
Hierarchical Multi-class Iris Classification for Liveness Detection
In modern society, iris recognition has become increasingly popular. The security risk of iris recognition is increasing rapidly because of the attack by various patterns of fake iris. A German hacker organization called Chaos Computer Club cracked the iris recognition system of Samsung Galaxy S8 recently. In view of these risks, iris liveness detection has shown its significant importance to iris recognition systems. The state-of-the-art algorithms mainly rely on hand-crafted texture features which can only identify fake iris images with single pattern. In this paper, we propose a Hierarchical Multi-class Iris Classification (HMC) for liveness detection based on CNN. HMC mainly focuses on iris liveness detection of multi-pattern fake iris. The proposed method learns the features of different fake iris patterns by CNN and classifies the genuine or fake iris images by hierarchical multi-class classification. This classification takes various characteristics of different fake iris patterns into account. All kinds of fake iris patterns are divided into two categories by their fake areas. The process is designed as two steps to identify two categories of fake iris images respectively. Experimental results demonstrate substantially higher accuracy of iris liveness detection than other state-of-the-art algorithms. The proposed HMC remarkably achieves the best results with nearly 100% accuracy on ND-Contact, CASIA-Iris-Interval, CASIA-Iris-Syn and LivDet-Iris-2017-Warsaw datasets. The method also achieves the best results with 100% accuracy on a hybrid dataset which consists of ND-Contact and LivDet-Iris-2017-Warsaw datasets.
Optimizing rating scale category effectiveness.
Rating scales are employed as a means of extracting more information out of an item than would be obtained from a mere "yes/no", "right/wrong" or other dichotomy. But does this additional information increase measurement accuracy and precision? Eight guidelines are suggested to aid the analyst in optimizing the manner in which rating scale categories cooperate in order to improve the utility of the resultant measures. Though these guidelines are presented within the context of Rasch analysis, they reflect aspects of rating scale functioning which impact all methods of analysis. The guidelines feature rating-scale-based data such as category frequency, ordering, rating-to-measure inferential coherence, and the quality of the scale from measurement and statistical perspectives. The manner in which the guidelines prompt recategorization or reconceptualization of the rating scale is indicated. Utilization of the guidelines is illustrated through their application to two published data sets.
Multi-parameter paraproducts
We prove that the classical Coifman-Meyer theorem holds on any polydisc $\mathbb{T}^d$ of arbitrary dimension $d\geq 1$.
The GNOME project: a case study of open source, global software development
Many successful free/open source software (FOSS) projects start with the premise that their contributors are rarely colocated, and as a consequence, these projects are cases of global software development (GSD). This article describes how the GNOME Project, a large FOSS project, has tried to overcome the disadvantages of GSD. The main goal of GNOME is to create a GUI desktop for Unix systems, and encompasses close to two million lines of code. More than 500 individuals (distributed across the world) have contributed to the project. This article also describes the software development methods and practices used by the members of the project, and its organizational structure. The article ends by proposing a list of practices that could benefit other global software development projects, both FOSS and commercial. Copyright © 2004 John Wiley & Sons, Ltd.
Parents of children with psychopathology: psychiatric problems and the association with their child’s problems
Knowledge is lacking regarding current psychopathology in parents whose children are evaluated in a psychiatric outpatient clinic. This is especially true for fathers. We provide insight into the prevalence rates of parental psychopathology and the association with their offspring psychopathology by analyzing data on psychiatric problems collected from 701 mothers and 530 fathers of 757 referred children. Prevalence rates of parental psychopathology were based on (sub)clinical scores on the Adult Self Report. Parent–offspring associations were investigated in multivariate analyses taking into account co-morbidity. Around 20 % of the parents had a (sub)clinical score on internalizing problems and around 10 % on attention deficit hyperactivity (ADH) problems. Prevalence rates did not differ between mothers and fathers. Parent–offspring associations did not differ between girls and boys. Maternal anxiety was associated with all offspring problem scores. In addition, maternal ADH problems were associated with offspring ADH problems. Paternal anxiety and ADH problem scores were specifically associated with offspring internalizing and externalizing problem scores, respectively. Associations with offspring psychopathology were of similar magnitude for mothers and fathers and were not influenced by spousal resemblance. Our study shows that both fathers and mothers are at increased risk for psychiatric problems at the time of a child’s evaluation and that their problems are equally associated with their offspring problems. The results emphasize the need to screen mothers as well as fathers for psychiatric problems. Specific treatment programs should be developed for these families in especially high need.
Peeling the flow: a sketch-based interface to generate stream surfaces
We present a user-centric approach for stream surface generation. Given a set of densely traced streamlines over the flow field, we design a sketch-based interface that allows users to draw simple strokes directly on top of the streamline visualization result. Based on the 2D stroke, we identify a 3D seeding curve and generate a stream surface that captures the flow pattern of streamlines at the outermost layer. Then, we remove the streamlines whose patterns are covered by the stream surface. Repeating this process, users can peel the flow by replacing the streamlines with customized surfaces layer by layer. Our sketch-based interface leverages an intuitive painting metaphor which most users are familiar with. We present results using multiple data sets to show the effectiveness of our approach, and discuss the limitations and future directions.
Moving beyond linearity and independence in top-N recommender systems
This paper suggests a number of research directions in which recommender systems can improve their quality by moving beyond the assumptions of linearity and independence that are traditionally made. These assumptions, while producing effective and meaningful results, can be suboptimal, since in many cases they do not reflect real datasets. In this paper, we discuss three different ways to address some of the previous constraints. More specifically, we focus on the development of methods capturing higher-order relations between the items, cross-feature interactions and intra-set dependencies which can potentially lead to a considerable enhancement of the recommendation accuracy.
Morbidity Rate Prediction of Dengue Hemorrhagic Fever (DHF) Using the Support Vector Machine and the Aedes aegypti Infection Rate in Similar Climates and Geographical Areas
BACKGROUND In the past few decades, several researchers have proposed highly accurate prediction models that have typically relied on climate parameters. However, climate factors can be unreliable and can lower the effectiveness of prediction when they are applied in locations where climate factors do not differ significantly. The purpose of this study was to improve a dengue surveillance system in areas with similar climate by exploiting the infection rate in the Aedes aegypti mosquito and using the support vector machine (SVM) technique for forecasting the dengue morbidity rate. METHODS AND FINDINGS Areas with high incidence of dengue outbreaks in central Thailand were studied. The proposed framework consisted of the following three major parts: 1) data integration, 2) model construction, and 3) model evaluation. We discovered that the Ae. aegypti female and larvae mosquito infection rates were significantly positively associated with the morbidity rate. Thus, the increasing infection rate of female mosquitoes and larvae led to a higher number of dengue cases, and the prediction performance increased when those predictors were integrated into a predictive model. In this research, we applied the SVM with the radial basis function (RBF) kernel to forecast the high morbidity rate and take precautions to prevent the development of pervasive dengue epidemics. The experimental results showed that the introduced parameters significantly increased the prediction accuracy to 88.37% when used on the test set data, and these parameters led to the highest performance compared to state-of-the-art forecasting models. CONCLUSIONS The infection rates of the Ae. aegypti female mosquitoes and larvae improved the morbidity rate forecasting efficiency better than the climate parameters used in classical frameworks. We demonstrated that the SVM-R-based model has high generalization performance and obtained the highest prediction performance compared to classical models as measured by the accuracy, sensitivity, specificity, and mean absolute error (MAE).
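The model-construction step can be sketched with an off-the-shelf RBF-kernel SVM. The example below flags high-morbidity periods from the two infection-rate predictors; the data and the labeling rule are synthetic, so it only illustrates the kind of classifier the framework trains, not the study's dataset or results.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# RBF-kernel SVM that flags periods of high DHF morbidity from the Aedes aegypti
# female and larval infection rates. Data and labels below are synthetic; only
# the model-construction step of the framework is illustrated.

rng = np.random.default_rng(0)
n = 300
female_rate = rng.uniform(0, 0.3, n)      # fraction of infected female mosquitoes
larva_rate = rng.uniform(0, 0.3, n)       # fraction of infected larvae
risk = 2.0 * female_rate + 1.5 * larva_rate + rng.normal(0, 0.05, n)
high_morbidity = (risk > np.median(risk)).astype(int)   # made-up labeling rule

X = np.column_stack([female_rate, larva_rate])
X_tr, X_te, y_tr, y_te = train_test_split(X, high_morbidity,
                                          test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```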
On algorithms for simplicial depth
Simplicial depth is a way to measure how deep a point is among a set of points. Efficient algorithms to compute it are important to the usefulness of its applications, such as in multivariate analysis in statistics. A straightforward method takes $O(n^{d+1})$ time when the points are in $d$-dimensional space. We discuss an algorithm that takes $O(n^2)$ time when the points are in three-dimensional space, and we generalize it to four-dimensional space with a time complexity of $O(n^4)$. For spaces higher than four-dimensional, there are no known algorithms faster than the straightforward method. A simplex in $d$-dimensional Euclidean space $E^d$ is the set of points that are convex combinations of $d+1$ affinely independent points; that is, if the points are $p_1, p_2, \ldots, p_{d+1}$, they bring about the simplex $\{p : p = a_1 p_1 + a_2 p_2 + \cdots + a_{d+1} p_{d+1},\ a_i \geq 0 \text{ and } \sum a_i = 1\}$. A simplex is a line segment in $E^1$, a triangle in $E^2$, and a tetrahedron in $E^3$. It is customary to say that a set of points in $E^d$ is in general position if any $d+1$ points in the set are affinely independent. Let $P$ be a set of $n$ points in general position in $E^d$; take every $d+1$ distinct points from $P$ and they uniquely identify a simplex in $E^d$; let $S_P$ be the set of all such simplices; then $|S_P| = \binom{n}{d+1}$. Let $\delta(p, P)$ denote the simplicial depth of a point $p$ with respect to a set $P$, defined by $\delta(p, P) = |\{s \in S_P : p \in s\}|$. For example, if $p$ is outside the convex hull of $P$, then $\delta(p, P) = 0$, and if $p$ is a vertex of the convex hull of $P$, then $\delta(p, P) = \binom{n-1}{d}$. When $p$ is in $P$, the following identity holds:
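For the planar case ($d = 2$) the straightforward method mentioned above is easy to write down: count the triangles spanned by triples of points that contain the query point, an $O(n^3)$ procedure. The sketch below does exactly that; the faster 3D and 4D algorithms discussed in the paper are not reproduced here.

```python
from itertools import combinations

# Brute-force O(n^3) simplicial depth in the plane (d = 2): count the triangles
# spanned by triples of data points that strictly contain the query point.
# General position is assumed, as in the paper.

def orient(a, b, c):
    """Sign of the signed area of triangle abc."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def simplicial_depth(p, points):
    depth = 0
    for a, b, c in combinations(points, 3):
        s1, s2, s3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
        if s1 == s2 == s3 and s1 != 0:       # p strictly inside triangle abc
            depth += 1
    return depth

pts = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 1), (1, 3), (3, 3)]
print("depth of (2, 2):", simplicial_depth((2, 2), pts))
```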
Quality of life in advanced prostate cancer: results of a randomized therapeutic trial.
BACKGROUND For patients with metastatic prostate cancer, treatment is primarily palliative, relying mainly on the suppression of systemic androgen hormone levels. To help document the achievement of palliation and to characterize positive and negative effects of treatment, we evaluated quality-of-life (QOL) parameters in patients with metastatic prostate cancer who were randomly assigned to two methods of androgen deprivation. METHODS Patients (n = 739) with stage M1 (bone or soft tissue metastasis) prostate cancer were enrolled in a QOL protocol that was a companion to Southwest Oncology Group INT-0105, a randomized double-blind trial comparing treatment with bilateral orchiectomy (surgical castration) plus either flutamide or placebo. Patients completed a comprehensive battery of QOL questionnaires at random assignment to treatment and at 1, 3, and 6 months later. Data were collected on three treatment-specific symptoms (diarrhea, gas pain, and body image), on physical functioning, and on emotional functioning. All P values are two-sided. RESULTS Questionnaire return rates for this study never dropped below 80%; only 2% of the patients did not submit baseline QOL assessments. Cross-sectional analyses (corrected for multiple testing) identified statistically significant differences that favored orchiectomy plus placebo for two of the five primary QOL parameters as follows: patients receiving flutamide reported more diarrhea at 3 months (P = .001) and worse emotional functioning at 3 and 6 months (both P<.003). Longitudinal analyses replicated these findings. Other analyzed QOL parameters favored the group receiving placebo but were not statistically significant after adjustment for multiple testing. CONCLUSIONS We found a consistent pattern of better QOL outcomes at each follow-up assessment during the first 6 months of treatment for orchiectomized patients with metastatic prostate cancer who received placebo versus flutamide. Improvement over time was evident in both treatment groups but more so for patients receiving placebo.
The Epipolar Geometry Toolbox : multiple view geometry and visual servoing for MATLAB
The Epipolar Geometry Toolbox (EGT) was realized to provide a MATLAB user with an extensible framework for the creation and visualization of multi-camera scenarios and the manipulation of the visual information and the geometry between them. Functions provided, for both pin-hole and panoramic vision sensors, include camera placement and visualization, computation and estimation of epipolar geometry entities, and many others. The compatibility of EGT with the Robotics Toolbox [7] makes it possible to address general vision-based control issues. Two applications of EGT to visual servoing tasks are provided here. This article introduces the Toolbox in tutorial form. Examples are provided to show its capabilities. The complete toolbox, the detailed manual and demo examples are freely available on the EGT web site [21]. I. INTRODUCTION The Epipolar Geometry Toolbox (EGT) is a toolbox designed for MATLAB [29]. MATLAB is a software environment, available for a wide range of platforms, designed around linear algebra principles and graphical presentations, also for large datasets. Its core functionalities are extended by the use of many additional toolboxes. Combined with the interactive MATLAB environment and advanced graphical functions, EGT provides a wide set of functions to approach computer vision problems, especially with multiple views. The Epipolar Geometry Toolbox makes it possible to design vision-based control systems for both pin-hole and central panoramic cameras. EGT is fully compatible with the well-known Robotics Toolbox by Corke [7]. The increasing interest in robotic visual servoing for both 6-DOF kinematic chains and mobile robots equipped with pin-hole or panoramic cameras fixed to the workspace or to the robot motivated the development of EGT. Several authors, such as [4], [9], [18], [20], [24], have proposed new visual servoing strategies based on the geometry relating multiple views acquired from different camera configurations, i.e. the epipolar geometry [14]. In recent years we have observed the need for a software environment that could help researchers to rapidly create a multiple-camera setup, use visual data and design new visual servoing algorithms. With EGT we provide a wide set of easy-to-use and completely customizable functions to design general multi-camera scenarios and manipulate the visual information between them. Let us emphasize that EGT can also be successfully employed in many other contexts where single and multiple view geometry is involved, for example in visual odometry and structure from motion applications [23], [22]. In the first of these works, an interesting “visual odometry” approach for robot SLAM is proposed in which multiple view geometry is used to estimate the camera motion from pairs of images without requiring knowledge of the observed scene. EGT, like the Robotics Toolbox, is a simulation environment, but the EGT functions can be easily embedded by the user in Simulink models. In this way, thanks to the MATLAB Real-Time Workshop, the user can generate and execute stand-alone C code for many off-line and real-time applications. A distinguishing feature of EGT is that it can be used to create and manipulate visual data provided by both pin-hole and panoramic cameras. Catadioptric cameras, due to their wide field of view, have recently been applied in visual servoing [32].
The second motivation that led to the development of EGT was the increasing distribution of “free” software in recent years, following the principles of the Free Software Foundation [10]. In this way users are allowed, and encouraged, to adapt and improve the program as dictated by their needs. Examples of programs that follow these principles include the Robotics Toolbox [7], for the creation of simulations in robotics, and Intel’s OpenCV C++ libraries for the implementation of computer vision algorithms, such as image processing and object recognition [1]. The third important motivation for EGT was the availability and increasing sophistication of MATLAB. EGT could have been written in other languages, such as C or C++, which would have freed it from dependency on other software; however, these low-level languages are not as conducive to rapid program development as MATLAB. This tutorial assumes the reader is familiar with MATLAB and presents the basic EGT functions, after brief theory recalls, together with intuitive examples. In this tutorial we also present two applications of EGT to visual servoing. Section 2 presents the basic vector notation in EGT, while Section 3 presents the pin-hole and omnidirectional camera models together with the basic EGT functions. In Section 4 we present the setup for multiple camera geometry (Epipolar
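Independently of EGT's own MATLAB functions (whose names are not reproduced here), the two-view geometry the Toolbox manipulates can be sketched in a few lines of NumPy: composing the fundamental matrix $F = K_2^{-\top}[t]_{\times} R K_1^{-1}$ from two calibrated pin-hole cameras and verifying the epipolar constraint $x_2^{\top} F x_1 = 0$ for a projected world point. The calibration matrices, relative pose and world point below are hypothetical values chosen for the sketch.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical calibration and relative pose (assumptions of this sketch).
K1 = K2 = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])   # small rotation about the y axis
t = np.array([0.5, 0.0, 0.1])                          # baseline between the two cameras

# Fundamental matrix from calibrated two-view geometry: F = K2^{-T} [t]_x R K1^{-1}.
F = np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)

# Project a world point with both cameras and check the epipolar constraint.
X = np.array([0.2, -0.1, 4.0])       # point expressed in camera-1 coordinates
x1 = K1 @ X
x1 /= x1[2]
x2 = K2 @ (R @ X + t)
x2 /= x2[2]
print(x2 @ F @ x1)                   # ~0 up to floating-point error
```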
A study of user behavior on an online dating site
Online dating sites have become popular platforms for people to look for potential romantic partners. It is important to understand users' dating preferences in order to make better recommendations on potential dates. The message sending and replying actions of a user are strong indicators for what he/she is looking for in a potential date and reflect the user's actual dating preferences. We study how users' online dating behaviors correlate with various user attributes using a real-world dataset from a major online dating site in China. Our study provides a firsthand account of user online dating behaviors in China, a country with a large population and unique culture. The results can provide valuable guidelines for the design of recommendation engines for potential dates.
A self-powered bidirectional DC solid state circuit breaker using two normally-on SiC JFETs
This paper reports self-powered, autonomously operated bidirectional solid state circuit breakers (SSCBs) with two back-to-back connected normally-on SiC JFETs as the main static switch for DC power systems. The SSCBs detect short circuit faults by sensing the sudden voltage rise between their two power terminals in either direction, and draw power from the fault condition itself to turn off and hold off the SiC JFETs. The two-terminal SSCB can be directly placed in a circuit branch without requiring any external power supply or extra wiring. A low-power, fast-starting, isolated DC/DC converter is designed and optimized to activate the SSCB in response to a short circuit fault. The SSCB prototypes are experimentally demonstrated to interrupt fault currents up to 150 amperes at a DC bus voltage of 400 volts within 0.7 microseconds.
The genetic toolbox for Acidovorax temperans.
Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we describe the use of the replicon from a small, cryptic plasmid indigenous to Acidovorax temperans strain CB2 to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique and established green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.
Millimeter-Wave Mobile Communications Microstrip Antenna for 5G - A Future Antenna
In the present scenario, cellular service providers face a bandwidth shortage in conventional cellular systems while delivering high-quality, low-latency video and multimedia applications over the 3G systems currently deployed. The 4th generation cellular networks are expected to be implemented in the next few years. Here, we present a motivational approach to a millimeter-wave mobile communication antenna for next-generation micro- and pico-cellular wireless networks (5th generation). Millimeter-wave mobile communication operates at 28 GHz and 38 GHz by employing steerable directional antennas (high-dimensional antenna arrays) at base stations and mobile devices [1]. This paper describes a future antenna for 5G mobile communication. The antenna consists of two rectangular patch elements on a single-layer RT/Duroid 5880 substrate with a transformer-coupled impedance matching network, which provides a high gain of 9.0583 dB and an efficiency of 83.308%. The antenna shows good performance in terms of gain, directivity, return loss, VSWR, characteristic impedance, bandwidth and efficiency at the centre frequency of 38 GHz.
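For orientation only, the standard transmission-line design relations for a rectangular microstrip patch (textbook formulas, not results reported in this paper) indicate the scale of such an element. With RT/Duroid 5880 ($\varepsilon_r \approx 2.2$, substrate height $h$) at the stated centre frequency $f_r = 38$ GHz, the patch width is approximately

$$W = \frac{c}{2 f_r}\sqrt{\frac{2}{\varepsilon_r + 1}} = \frac{3\times 10^{8}}{2\,(38\times 10^{9})}\sqrt{\frac{2}{3.2}} \approx 3.1\ \text{mm},$$

and the resonant length follows from the effective permittivity and the usual fringing-field correction $\Delta L$,

$$\varepsilon_{\mathrm{eff}} = \frac{\varepsilon_r+1}{2} + \frac{\varepsilon_r-1}{2}\left(1 + 12\,\frac{h}{W}\right)^{-1/2}, \qquad L = \frac{c}{2 f_r \sqrt{\varepsilon_{\mathrm{eff}}}} - 2\,\Delta L.$$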
Linguistics, anthropology and philosophy in the French Enlightenment : language theory and ideology
Linguistics, Anthropology and Philosophy in the French Enlightenment treats the development of linguistic thought from Descartes to Degerando as both a part of and a determining factor in the emergence of modern consciousness. Through his careful analyses of works by the most influential thinkers of the time, Ulrich Ricken demonstrates that the central significance of language in the philosophy of the Enlightenment both reflected and acted upon contemporary understandings of humanity as a whole. The author discusses contemporary developments in England, Germany and Italy and covers an unusually broad range of writers and ideas including Leibniz, Wolff, Herder and Humboldt. This study places the history of language philosophy within the broader context of the history of ideas, aesthetics and historical anthropology and will be of interest to scholars working in these disciplines.
Energy-Efficient Query Processing on Embedded CPU-GPU Architectures
Energy efficiency is a major design and optimization factor for query co-processing of databases in embedded devices. Recently, GPUs of new-generation embedded devices have evolved with the programmability and computational capability for general-purpose applications. Such CPU-GPU architectures offer us opportunities to revisit GPU query co-processing in embedded environments for energy efficiency. In this paper, we experimentally evaluate and analyze the performance and energy consumption of a GPU query co-processor on such hybrid embedded architectures. Specifically, we study four major database operators as micro-benchmarks and evaluate TPC-H queries on CARMA, which has a quad-core ARM Cortex-A9 CPU and a NVIDIA Quadro 1000M GPU. We observe that the CPU delivers both better performance and lower energy consumption than the GPU for simple operators such as selection and aggregation. However, the GPU outperforms the CPU for sort and hash join in terms of both performance and energy consumption. We further show that CPU-GPU query co-processing can be an effective means of energy-efficient query co-processing in embedded systems with proper tuning and optimizations.
A corporate strategy for the control of information processing.
Although the use of information processing has become widespread, many organizations have developed systems that are basically independent of the firm's strategy. However, the authors in this article argue that the greatest benefits come when information technology is merged with strategy formulation. The article includes examples of how this has been done and presents a framework for top management direction and control of information processing.
An early warning on early warning systems!
Efficacy and safety of enzalutamide versus bicalutamide for patients with metastatic prostate cancer (TERRAIN): a randomised, double-blind, phase 2 study.
BACKGROUND Enzalutamide is an oral androgen-receptor inhibitor that has been shown to improve survival in two placebo-controlled phase 3 trials, and is approved for patients with metastatic castration-resistant prostate cancer. The objective of the TERRAIN study was to compare the efficacy and safety of enzalutamide with bicalutamide in patients with metastatic castration-resistant prostate cancer. METHODS TERRAIN was a double-blind, randomised phase 2 study that recruited asymptomatic or minimally symptomatic men with prostate cancer progression on androgen-deprivation therapy (ADT) from academic, community, and private health-care provision sites across North America and Europe. Eligible patients were randomly assigned (1:1) via an interactive voice response system to receive enzalutamide 160 mg/day or bicalutamide 50 mg/day, both taken orally, in addition to ADT, until disease progression. Patients were stratified by a permuted block method (block size of four), by whether bilateral orchiectomy or receipt of luteinising hormone-releasing hormone agonist or antagonist therapy started before or after the diagnosis of metastases, and by study site. Participants, investigators, and those assessing outcomes were masked to group assignment. The primary endpoint was progression-free survival, analysed in all randomised patients. Safety outcomes were analysed in all patients who received at least one dose of study drug. The open-label period of the trial is in progress, wherein patients still on treatment at the end of the double-blind treatment period were offered open-label enzalutamide at the discretion of the patient and study investigator. This trial is registered with ClinicalTrials.gov, number NCT01288911. FINDINGS Between March 22, 2011, and July 11, 2013, 375 patients were randomly assigned, 184 to enzalutamide and 191 to bicalutamide. 126 (68%) and 168 (88%) patients, respectively, discontinued their assigned treatment before study end, mainly due to progressive disease. Median follow-up time was 20·0 months (IQR 15·0-25·6) in the enzalutamide group and 16·7 months (10·2-21·9) in the bicalutamide group. Patients in the enzalutamide group had significantly improved median progression-free survival (15·7 months [95% CI 11·5-19·4]) compared with patients in the bicalutamide group (5·8 months [4·8-8·1]; hazard ratio 0·44 [95% CI 0·34-0·57]; p<0·0001). Of the most common adverse events, those occurring more frequently with enzalutamide than with bicalutamide were fatigue (51 [28%] of 183 patients in the enzalutamide group vs 38 [20%] of 189 in the bicalutamide group), back pain (35 [19%] vs 34 [18%]), and hot flush (27 [15%] vs 21 [11%]); those occurring more frequently with bicalutamide were nausea (26 [14%] vs 33 [17%]), constipation (23 [13%] vs 25 [13%]), and arthralgia (18 [10%] vs 30 [16%]). The most common grade 3 or worse adverse events in the enzalutamide or bicalutamide treatment groups, respectively, were hypertension (13 [7%] vs eight [4%]), hydronephrosis (three [2%] vs seven [4%]), back pain (five [3%] vs three [2%]), pathological fracture (five [3%] vs two [1%]), dyspnoea (four [2%] vs one [1%]), bone pain (one [1%] vs four [2%]), congestive cardiac failure (four [2%] vs two [1%]), myocardial infarction (five [3%] vs none), and anaemia (four [2%] vs none). Serious adverse events were reported by 57 (31%) of 183 patients and 44 (23%) of 189 patients in the enzalutamide and bicalutamide groups, respectively.
One of the nine deaths in the enzalutamide group was thought to be possibly related to treatment (due to systemic inflammatory response syndrome) compared with none of the three deaths in the bicalutamide group. INTERPRETATION The data from the TERRAIN trial support the use of enzalutamide rather than bicalutamide in patients with asymptomatic or mildly symptomatic metastatic castration-resistant prostate cancer. FUNDING Astellas Pharma, Inc and Medivation, Inc.
18F-Fluoride PET/CT is highly effective for excluding bone metastases even in patients with equivocal bone scintigraphy
Bone scintigraphy (BS) has been used extensively for many years for the diagnosis of bone metastases despite its low specificity and significant rate of equivocal lesions. 18F-Fluoride PET/CT has been proven to have a high sensitivity and specificity in the detection of malignant bone lesions, but its effectiveness in patients with inconclusive lesions on BS is not well documented. This study evaluated the ability of 18F-fluoride PET/CT to exclude bone metastases in patients with various malignant primary tumours and nonspecific findings on BS. We prospectively studied 42 patients (34–88 years of age, 26 women) with different types of tumour. All patients had BS performed for staging or restaging purposes but with inconclusive findings. All patients underwent 18F-fluoride PET/CT. All abnormalities identified on BS images were visually compared with their appearance on the PET/CT images. All the 96 inconclusive lesions found on BS images of the 42 patients were identified on PET/CT images. 18F-Fluoride PET/CT correctly excluded bone metastases in 23 patients (68 lesions). Of 19 patients (28 lesions) classified by PET/CT as having metastases, 3 (5 lesions) were finally classified as free of bone metastases on follow-up. The sensitivity, specificity, and positive and negative predictive values of 18F-fluoride PET/CT were, respectively, 100 %, 88 %, 84 % and 100 % for the identification of patients with metastases (patient analysis) and 100 %, 82 % and 100 % for the identification of metastatic lesions (lesion analysis). The factors that make BS inconclusive do not affect 18F-fluoride PET/CT which shows a high sensitivity and negative predictive value for excluding bone metastases even in patients with inconclusive conventional BS.
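The reported patient-level accuracy figures follow directly from the counts given in the abstract; the short check below makes the arithmetic explicit (the 2×2 tallies are inferred from the numbers quoted, not stated separately in the source).

```python
# Patient-level 2x2 counts inferred from the abstract (42 patients in total):
# 19 classified as metastatic by PET/CT, of whom 3 were metastasis-free on follow-up;
# 23 patients correctly excluded; no metastatic patient was missed (100% sensitivity/NPV).
tp, fp, tn, fn = 16, 3, 23, 0

sensitivity = tp / (tp + fn)   # 1.00  -> 100%
specificity = tn / (tn + fp)   # 0.885 -> ~88%
ppv = tp / (tp + fp)           # 0.842 -> ~84%
npv = tn / (tn + fn)           # 1.00  -> 100%
print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%} PPV={ppv:.0%} NPV={npv:.0%}")
```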
Transferring heterogeneous links across location-based social networks
Location-based social networks (LBSNs) are a kind of online social network offering geographic services and have been attracting much attention in recent years. LBSNs usually have complex structures, involving heterogeneous nodes and links. Many recommendation services in LBSNs (e.g., friend and location recommendation) can be cast as link prediction problems (e.g., social link and location link prediction). Traditional link prediction research on LBSNs mostly focuses on predicting either social links or location links, assuming the prediction tasks of different types of links to be independent. However, in many real-world LBSNs, the prediction tasks for social links and location links are strongly correlated and mutually influential. Another key challenge in link prediction on LBSNs is the data sparsity problem (i.e., the "new network" problem), which can be encountered when LBSNs branch into new geographic areas or social groups. Actually, nowadays, many users are involved in multiple networks simultaneously, and users who just join one LBSN may have been using other LBSNs for a long time. In this paper, we study the problem of predicting multiple types of links simultaneously for a new LBSN across partially aligned LBSNs and propose a novel method TRAIL (TRAnsfer heterogeneous lInks across LBSNs). TRAIL can accumulate information for locations from online posts and extract heterogeneous features for both social links and location links. TRAIL can predict multiple types of links simultaneously. In addition, TRAIL can transfer information from other aligned networks to the new network to solve the problem of lacking information. Extensive experiments conducted on two real-world aligned LBSNs show that TRAIL can achieve very good performance and substantially outperform the baseline methods.
Visual search of emotional faces. Eye-movement assessment of component processes.
In a visual search task using photographs of real faces, a target emotional face was presented in an array of six neutral faces. Eye movements were monitored to assess attentional orienting and detection efficiency. Target faces with happy, surprised, and disgusted expressions were: (a) responded to more quickly and accurately, (b) localized and fixated earlier, and (c) detected as different faster and with fewer fixations, in comparison with fearful, angry, and sad target faces. This reveals a happy, surprised, and disgusted-face advantage in visual search, with earlier attentional orienting and more efficient detection. The pattern of findings remained equivalent across upright and inverted presentation conditions, which suggests that the search advantage involves processing of featural rather than configural information. Detection responses occurred generally after having fixated the target, which implies that detection of all facial expressions is post- rather than preattentional.
Context Aware Document Embedding
Recently, doc2vec has achieved excellent results in different tasks (Lau and Baldwin, 2016). In this paper, we present a context aware variant of doc2vec. We introduce a novel weight estimating mechanism that generates weights for each word occurrence according to its contribution in the context, using deep neural networks. Our context aware model achieves results similar to doc2vec initialized with Wikipedia-trained vectors, while being much more efficient and free from reliance on a heavy external corpus. Analysis of the context aware weights shows they are a kind of enhanced IDF weights that capture sub-topic-level keywords in documents. They might result from deep neural networks that learn hidden representations with the least entropy.
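A minimal sketch of the general idea of weighting word occurrences when forming a document embedding is given below. This is an illustration of weighted averaging with an IDF-style weight standing in for the learned context weights; it is not the paper's network or training procedure, and the vocabulary, vectors and corpus statistics are hypothetical.

```python
import numpy as np

# Hypothetical pre-trained word vectors and corpus statistics (assumptions of this sketch).
word_vectors = {"neural": np.array([0.2, 0.7]), "networks": np.array([0.1, 0.6]),
                "learn": np.array([0.5, 0.1]), "representations": np.array([0.4, 0.3])}
doc_freq = {"neural": 120, "networks": 300, "learn": 800, "representations": 90}
n_docs = 1000

def idf(word):
    """Smoothed inverse document frequency, a simple stand-in for learned context weights."""
    return np.log((1 + n_docs) / (1 + doc_freq.get(word, 0))) + 1.0

def doc_embedding(tokens):
    """Weighted average of word vectors; the paper learns per-occurrence weights instead."""
    pairs = [(idf(w), word_vectors[w]) for w in tokens if w in word_vectors]
    if not pairs:
        return np.zeros(2)
    weights = np.array([w for w, _ in pairs])
    vecs = np.stack([v for _, v in pairs])
    return (weights[:, None] * vecs).sum(axis=0) / weights.sum()

print(doc_embedding(["neural", "networks", "learn", "representations"]))
```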
Liquidity Risk Management: A Comparative Study Between Islamic And Conventional Banks
This paper examines the factors that affect liquidity risk for Islamic and conventional banks in Gulf countries, using panel data for 11 Islamic banks (IBs) and 33 conventional banks (CBs) between 2006 and 2013. Our results show that return on equity, net interest margin, capital adequacy ratio and the inflation rate have a positive impact on liquidity risk for Islamic banks, while return on assets, non-performing loans, size and GDP growth have a negative impact. On the other hand, in conventional banks, size, return on equity, net interest margin, capital adequacy ratio, GDP growth and the inflation rate have a positive impact, whereas return on assets and non-performing loans have a negative impact on liquidity risk. This study examines how Islamic and conventional banks manage their liquidity in response to changes in these factors. Citation: Ghenimi A, Omri MAB (2015) Liquidity Risk Management: A Comparative Study between Islamic and Conventional Banks. Arabian J Bus Manag Review 5: 166. doi:10.4172/2223-5833.1000166
A Self-Assembled Cofacial Cobalt Porphyrin Prism for Oxygen Reduction Catalysis.
Herein we report the first study of the oxygen reduction reaction (ORR) catalyzed by a cofacial porphyrin scaffold accessed in high yield (overall 53%) using coordination-driven self-assembly with no chromatographic purification steps. The ORR activity was investigated using chemical and electrochemical techniques on monomeric cobalt(II) tetra(meso-4-pyridyl)porphyrinate (CoTPyP) and its cofacial analogue [Ru8(η6-iPrC6H4Me)8(dhbq)4(CoTPyP)2][OTf]8 (Co Prism) (dhbq = 2,5-dihydroxy-1,4-benzoquinato, OTf = triflate) as homogeneous oxygen reduction catalysts. Co Prism is obtained in one self-assembly step that organizes six total building blocks, two CoTPyP units and four arene-Ru clips, into a cofacial motif previously demonstrated with free-base, Zn(II), and Ni(II) porphyrins. Turnover frequencies (TOFs) from chemical reduction (66 vs 6 h-1) and rate constants of overall homogeneous catalysis (kobs) determined from rotating ring-disk experiments (1.1 vs 0.05 h-1) establish a cofacial enhancement upon comparison of the activities of Co Prism and CoTPyP, respectively. Cyclic voltammetry was used to initially probe the electrochemical catalytic behavior. Rotating ring-disk electrode studies were completed to probe the Faradaic efficiency and obtain an estimate of the rate constant associated with the ORR.
Semi-automated map creation for fast deployment of AGV fleets in modern logistics
Today, Automated Guided Vehicles (AGVs) still have a low market share in logistics, compared to manual forklifts. We identified one of the main bottlenecks as the rather long deployment time, which involves precise 2D mapping of the plant, 3D geo-referencing of pick-up/drop positions and the manual design of the roadmap. The long deployment time has various causes: in state-of-the-art plant installations, designated infrastructure is still necessary for localization; the mapping process requires highly skilled personnel; and in many cases unavailable or inappropriate position information of drop points for goods must be corrected on site. Finally, the design of the roadmap, performed by expert technicians, is manually optimised in a tedious process to achieve maximum flow of goods for the plant operator. In total, the setup of a plant to be ready for AGV operation takes several months, binding highly skilled technicians and involving very time-consuming and costly on-site procedures. Therefore, we present an approach to AGV deployment which aims to drastically reduce the time, cost and involved personnel. First, we propose the employment of a novel, industry-ready SICK 3D laser scanning technology in order to build an accurate and consistent virtual representation of the whole warehouse environment. By utilizing suitable segmentation and processing, a semantic map is generated that contains 3D geo-referenced positions as well as a 2D localization map, eliminating the need for a dedicated solution to 2D mapping. Second, the semantic map provides a free-space map which is used as a basis for automatic roadmap creation in order to achieve optimal flow. Thus, this paper proposes an innovative methodology for obtaining, in a semi-automated manner and with less time and cost, the map of an industrial environment where a system of multiple AGVs will be installed. © 2016 Elsevier B.V. All rights reserved.
Inventing polemic: religion, print, and literary culture in early modern England
Introduction: The disorder of books 1. Foxe's Books of Martyrs: printing and popularizing the Actes and Monuments 2. Martin Marprelate and the fugitive text 3. 'Whole Hamlets': Q1, Q2, and the work of distinction 4. Printing Donne: poetry and polemic in the early seventeenth century 5. Areopagitica and 'The True Warfaring Christian' 6. Institutionalizing polemic: the rise and fall of Chelsea College Epilogue: Polite learning.
Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming
Given a linear system in a real or complex domain, linear regression aims to recover the model parameters from a set of observations. Recent studies in compressive sensing have successfully shown that under certain conditions, a linear program, namely ℓ1-minimization, guarantees recovery of sparse parameter signals even when the system is underdetermined. In this paper, we consider a more challenging problem: when the phase of the output measurements from a linear system is omitted. Using a lifting technique, we show that even though the phase information is missing, the sparse signal can be recovered exactly by solving a simple semidefinite program when the sampling rate is sufficiently high, even though the exact solutions to both sparse signal recovery and phase retrieval are combinatorial. The results extend the type of applications to which compressive sensing can be applied to include those where only output magnitudes can be observed. We demonstrate the accuracy of the algorithms through theoretical analysis, extensive simulations and a practical experiment.
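For reference, the lifting step described above replaces the unknown vector $x$ with the rank-one matrix $X = xx^{*}$, so that each squared output measurement becomes linear in $X$. A common way to write the resulting sparse phase retrieval program (a generic formulation consistent with the abstract, not necessarily the exact program and weighting used in the paper) is

$$\min_{X \succeq 0}\ \operatorname{tr}(X) + \lambda \lVert X \rVert_{1} \quad \text{subject to} \quad b_i = a_i^{*} X a_i = \operatorname{tr}\!\left(a_i a_i^{*} X\right), \quad i = 1,\dots,m,$$

where $b_i = \lvert \langle a_i, x \rangle \rvert^{2}$ are the phaseless measurements, the trace term promotes a low-rank solution, the $\ell_1$ term promotes sparsity, and a rank-one minimizer $X = \hat{x}\hat{x}^{*}$ recovers $x$ up to a global phase.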
Gyroscope Technology and Applications: A Review in the Industrial Perspective
This paper is an overview of current gyroscopes and their roles based on their applications. The considered gyroscopes include mechanical gyroscopes and optical gyroscopes at macro- and micro-scale. Particularly, gyroscope technologies commercially available, such as Mechanical Gyroscopes, silicon MEMS Gyroscopes, Ring Laser Gyroscopes (RLGs) and Fiber-Optic Gyroscopes (FOGs), are discussed. The main features of these gyroscopes and their technologies are linked to their performance.
Mining Educational Data to Analyze Students' Performance
The main objective of higher education institutions is to provide quality education to their students. One way to achieve the highest level of quality in a higher education system is by discovering knowledge for prediction regarding students' enrolment in a particular course, alienation from the traditional classroom teaching model, detection of unfair means used in online examinations, detection of abnormal values in students' result sheets, prediction of students' performance, and so on. This knowledge is hidden in the educational data set and can be extracted through data mining techniques. The present paper is designed to demonstrate the capabilities of data mining techniques in the context of higher education by offering a data mining model for a university's higher education system. In this research, the classification task is used to evaluate students' performance; since there are many approaches to data classification, the decision tree method is used here. Through this task we extract knowledge that describes students' performance in the end-semester examination. It helps in identifying dropouts and students who need special attention early, and allows the teacher to provide appropriate advising/counseling. Keywords: Educational Data Mining (EDM); Classification; Knowledge Discovery in Databases (KDD); ID3 Algorithm.
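As a minimal illustration of the kind of decision-tree classification described above (the student features, toy records and use of scikit-learn are assumptions of this sketch; the paper uses ID3, which scikit-learn approximates with its entropy splitting criterion):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical student records: [attendance %, internal marks %, assignments submitted]
X = [[85, 70, 8], [60, 45, 3], [92, 88, 10], [55, 40, 2],
     [75, 65, 7], [40, 35, 1], [88, 80, 9], [65, 50, 4]]
y = ["pass", "fail", "pass", "fail", "pass", "fail", "pass", "fail"]

# Entropy-based splitting mirrors the information-gain criterion used by ID3.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
clf.fit(X, y)

print(export_text(clf, feature_names=["attendance", "internal_marks", "assignments"]))
print(clf.predict([[70, 55, 5]]))   # predicted end-semester outcome for a new student
```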
Short-Term Effects of Kefir-Fermented Milk Consumption on Bone Mineral Density and Bone Metabolism in a Randomized Clinical Trial of Osteoporotic Patients
Milk products are good sources of calcium that may reduce bone resorption and help prevent bone loss as well as promote bone remodeling and increase bone formation. Kefir is a product made by kefir grains that degrade milk proteins into various peptides with health-promoting effects, including antithrombotic, antimicrobial and calcium-absorption-enhancing bioactivities. In a controlled, parallel, double-blind intervention study over 6 months, we investigated the effects of kefir-fermented milk (1,600 mg) supplemented with calcium carbonate (CaCO3, 1,500 mg) on bone metabolism in 40 osteoporosis patients, and compared them with CaCO3 alone without kefir supplements. Bone turnover markers were measured in fasting blood samples collected before therapy and at 1, 3, and 6 months. Bone mineral density (BMD) values at the spine, total hip, and hip femoral neck were assessed by dual-energy x-ray absorptiometry (DXA) at baseline and at 6 months. Among patients treated with kefir-fermented milk, the relationships between baseline turnover and 6-month changes in DXA-determined BMD were significantly improved. Serum β C-terminal telopeptide of type I collagen (β-CTX) in patients with T-scores > -1 decreased significantly after three months of treatment. The formation marker serum osteocalcin (OC) turned from negative to positive after 6 months, representing the effect of kefir treatment. Serum parathyroid hormone (PTH) increased significantly after treatment with kefir, but decreased significantly in the control group. PTH may promote bone remodeling after treatment with kefir for 6 months. In this pilot study, we concluded that kefir-fermented milk therapy was associated with short-term changes in turnover and greater 6-month increases in hip BMD among osteoporotic patients. TRIAL REGISTRATION ClinicalTrials.gov NCT02361372.
Emerging Principles of Gene Expression Programs and Their Regulation.
Many mechanisms contribute to regulation of gene expression to ensure coordinated cellular behaviors and fate decisions. Transcriptional responses to external signals can consist of many hundreds of genes that can be parsed into different categories based on kinetics of induction, cell-type and signal specificity, and duration of the response. Here we discuss the structure of transcription programs and suggest a basic framework to categorize gene expression programs based on characteristics related to their control mechanisms. We also discuss possible evolutionary implications of this framework.
Neurostimulation for Parkinson ’ s Disease with Early Motor Complications
W.M.M. Schuepbach, J. Rau, K. Knudsen, J. Volkmann, P. Krack, L. Timmermann, T.D. Hälbig, H. Hesekamp, S.M. Navarro, N. Meier, D. Falk, M. Mehdorn, S. Paschen, M. Maarouf, M.T. Barbe, G.R. Fink, A. Kupsch, D. Gruber, G.-H. Schneider, E. Seigneuret, A. Kistner, P. Chaynes, F. Ory-Magne, C. Brefel Courbon, J. Vesper, A. Schnitzler, L. Wojtecki, J.-L. Houeto, B. Bataille, D. Maltête, P. Damier, S. Raoul, F. Sixel-Doering, D. Hellwig, A. Gharabaghi, R. Krüger, M.O. Pinsker, F. Amtage, J.-M. Régis, T. Witjas, S. Thobois, P. Mertens, M. Kloss, A. Hartmann, W.H. Oertel, B. Post, H. Speelman, Y. Agid, C. Schade-Brittinger, and G. Deuschl, for the EARLYSTIM Study Group*