Multipath Estimation in the Global Positioning System for Multicorrelator Receivers
In urban areas, multipath (MP) is one of the main error sources when tracking signals used in global navigation satellite systems. The received signals subjected to MP are the sum of several delayed replicas, leading to biased estimations. This paper studies a sequential Monte Carlo (SMC) algorithm which mitigates MP effects. The proposed algorithm is based on a state-space model associated with a multicorrelator GPS receiver and on a Rao-Blackwellized technique which makes it possible to achieve good performance.
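The abstract does not spell out the state-space model, so the following is only a minimal Python sketch of a Rao-Blackwellized particle filter in that spirit: the delay is treated as the nonlinear state sampled by particles, while a conditionally linear-Gaussian amplitude is marginalized by a per-particle Kalman filter. The triangular correlation function, noise levels, and the use of a single (prompt) correlator output instead of a multicorrelator bank are all illustrative assumptions, not the receiver model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def corr(tau):
    """Idealized triangular code-correlation value at delay error tau (chips)."""
    return max(0.0, 1.0 - abs(tau))

def rbpf_step(taus, weights, a_mean, a_var, y, q_tau=1e-3, q_a=1e-3, r=0.05):
    """One Rao-Blackwellized step: sample delays, Kalman-update amplitudes.

    taus, weights, a_mean, a_var are length-N arrays (one entry per particle);
    y is a single (prompt) correlator output at this epoch.
    """
    n = len(taus)
    taus = taus + rng.normal(0.0, np.sqrt(q_tau), size=n)   # propagate delays
    for i in range(n):
        h = corr(taus[i])                      # linear observation coefficient
        m_pred, p_pred = a_mean[i], a_var[i] + q_a
        s = h * h * p_pred + r                 # innovation variance
        innov = y - h * m_pred
        # weight by the marginal likelihood p(y | tau), amplitude integrated out
        weights[i] *= np.exp(-0.5 * innov**2 / s) / np.sqrt(2 * np.pi * s)
        k = p_pred * h / s                     # Kalman gain
        a_mean[i] = m_pred + k * innov
        a_var[i] = (1.0 - k * h) * p_pred
    weights /= weights.sum()
    # systematic-style resampling when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        taus, a_mean, a_var = taus[idx], a_mean[idx], a_var[idx]
        weights = np.full(n, 1.0 / n)
    return taus, weights, a_mean, a_var

# toy usage: track a constant delay of 0.3 chips with unit amplitude
n = 500
taus, weights = rng.uniform(-0.5, 0.5, n), np.full(n, 1.0 / n)
a_mean, a_var = np.zeros(n), np.ones(n)
for _ in range(50):
    y = 1.0 * corr(0.3) + rng.normal(0.0, np.sqrt(0.05))
    taus, weights, a_mean, a_var = rbpf_step(taus, weights, a_mean, a_var, y)
print("estimated delay:", float(np.sum(weights * taus)))
```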
Epidural analgesia is superior to local infiltration analgesia in children with cerebral palsy undergoing unilateral hip reconstruction
BACKGROUND AND PURPOSE Treatment of postoperative pain in children with cerebral palsy (CP) is a major challenge. We investigated the effect of epidural analgesia, high-volume local infiltration analgesia (LIA), and an approximated placebo control on early postoperative pain in children with CP who were undergoing unilateral hip reconstruction. PATIENTS AND METHODS Between 2009 and 2014, we included 18 children with CP. The first part of the study was a randomized double-blind trial with allocation to either LIA or placebo for postoperative pain management, in addition to intravenous or oral analgesia. In the second part of the study, the children were consecutively included for postoperative pain management with epidural analgesia in addition to intravenous or oral analgesia. The primary outcome was postoperative pain 4 h postoperatively using 2 pain assessment tools (r-FLACC and VAS-OBS) ranging from 0 to 10. The secondary outcome was opioid consumption over the 21-h study period. RESULTS The mean level of pain 4 h postoperatively was lower in the epidural group (r-FLACC: 0.7; VAS-OBS: 0.6) than in both the LIA group (r-FLACC: 4.8, p = 0.01; VAS-OBS: 5.2, p = 0.02) and the placebo group (r-FLACC: 5.2, p = 0.01; VAS-OBS: 6.5, p < 0.001). Corrected for body weight, the mean opioid consumption was lower in the epidural group than in the LIA group and the placebo group (both p < 0.001). INTERPRETATION Epidural analgesia is superior to local infiltration analgesia for early postoperative pain management in children with cerebral palsy who undergo unilateral hip reconstruction.
Network security as public good: A mean-field-type game theory approach
We investigate dynamic public good games in networks consisting of strategic users with interdependent network security. The strategic users can choose their investment strategies to contribute to the basic security of the network. Mimicking the behavior of infection propagation over multi-hop networks, which depends on the average degree of the network, we propose a mean-field-type model to capture the effect of the other users' control actions on the security state. Using linear-quadratic differential mean-field-type games, we propose and analyze two different regimes, examining the equilibria and global optima of each. We show that, generically, each user has a unique best-response strategy for investing in security. Closed-form expressions are obtained using recent developments in mean-field-type game theory.
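The abstract does not reproduce the model, so the following is only an illustrative generic linear-quadratic mean-field-type formulation, in which each user's security investment $u_i$ drives a state $x_i$ whose dynamics and cost involve the population mean $\mathbb{E}[x_i]$; the specific coefficients and the form of the coupling are assumptions, not necessarily those analyzed in the paper.

\begin{align}
  dx_i(t) &= \bigl(a\,x_i(t) + \bar{a}\,\mathbb{E}[x_i(t)] + b\,u_i(t)\bigr)\,dt + \sigma\,dW_i(t),\\
  J_i(u_i) &= \mathbb{E}\left[\int_0^T \Bigl(q\,x_i(t)^2 + \bar{q}\,\bigl(x_i(t)-\mathbb{E}[x_i(t)]\bigr)^2 + r\,u_i(t)^2\Bigr)\,dt + q_T\,x_i(T)^2\right].
\end{align}

The defining feature of the mean-field-type setting is that the mean (and, more generally, higher moments) of a player's own state enters that player's dynamics and cost, which is what yields the closed-form, Riccati-type best responses mentioned in the abstract.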
Bayesian regression and Bitcoin
In this paper, we discuss the method of Bayesian regression and its efficacy for predicting price variation of Bitcoin, a recently popularized virtual, cryptographic currency. Bayesian regression refers to utilizing empirical data as a proxy to perform Bayesian inference. We utilize Bayesian regression for the so-called “latent source model”. The Bayesian regression for the “latent source model” was introduced and discussed by Chen, Nikolov and Shah [1] and Bresler, Chen and Shah [2] for the purpose of binary classification. They established the theoretical as well as empirical efficacy of the method for the setting of binary classification. In this paper, we instead utilize it for predicting a real-valued quantity, the price of Bitcoin. Based on this price prediction method, we devise a simple strategy for trading Bitcoin. The strategy is able to nearly double the investment in less than a 60-day period when run against a real data trace.
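As a minimal illustration of "Bayesian regression with empirical data as a proxy" in the latent source spirit, the sketch below predicts the next price change as a similarity-weighted average of the outcomes that followed historical price patterns; the exponential similarity kernel, the window length, and the constant c are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def latent_source_predict(history, window=20, c=2.0):
    """Predict the next price change from a price series `history`.

    Builds a library of past windows and the change that followed each,
    then returns a similarity-weighted average of those changes.
    """
    prices = np.asarray(history, dtype=float)
    patterns, outcomes = [], []
    for t in range(len(prices) - window - 1):
        w = prices[t:t + window]
        patterns.append((w - w.mean()) / (w.std() + 1e-9))   # normalize each window
        outcomes.append(prices[t + window] - prices[t + window - 1])
    patterns, outcomes = np.array(patterns), np.array(outcomes)

    query = prices[-window:]
    query = (query - query.mean()) / (query.std() + 1e-9)
    sim = np.exp(c * patterns @ query / window)              # empirical "posterior" weights
    return float(np.sum(sim * outcomes) / np.sum(sim))

# toy usage on a synthetic noisy sine-wave price series
rng = np.random.default_rng(1)
t = np.arange(400)
prices = 100 + np.sin(t / 15.0) + 0.05 * rng.standard_normal(400)
print("predicted next change:", latent_source_predict(prices))
```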
Sorokin of the 2000s: In the Space of Myths about National Identity
The article considers Vladimir Sorokin's prose of the 2000s, which opened a new stage in the writer's evolution. Den' oprichnika, Sakharniy Kreml, Metel', Monoklon and other texts of this period present models of the history of Russia constructed out of topical political theories, trends in contemporary historiography, and mythologies of mass consciousness (the civilizational approach, neo-Eurasianism, the imperial myth, etc.). In Sorokin's literary works, the semantics, symbolism and emblematics of national identity are built through deconstruction and parody, which the writer combines with chthonic symbolism and the myths of mass culture.
CHR for Social Responsibility
Publicly traded corporations often operate against the public's interest, serving a very limited group of stakeholders. This is counter-intuitive, since the public as a whole owns these corporations through direct investment in the stock market, as well as indirect investment in mutual, index, and pension funds. Interestingly, the public's role in the proxy voting process, which allows shareholders to influence their company's direction and decisions, is essentially ignored by individual investors. We speculate that a prime reason for this lack of participation is information overload, and the disproportionate effort required for an investor to make an informed decision. In this paper we propose a CHR-based model that significantly simplifies the decision-making process, allowing users to set general guidelines that can be applied to every company they own to produce voting recommendations. The use of CHR here is particularly advantageous as it allows users to easily trace back the most relevant data that was used to formulate the decision, without the user having to go through large amounts of irrelevant information. Finally, we describe a simplified algorithm that could be used as part of this model.
Beamspace Channel Estimation for Wideband Millimeter-Wave MIMO with Lens Antenna Array
Beamspace channel estimation is essential for wideband millimeter-wave (mmWave) MIMO with a lens antenna array to achieve a substantial increase in data rates with a considerably reduced number of radio-frequency chains. However, most existing beamspace channel estimation schemes are designed for narrowband mmWave systems, while the rather scarce wideband schemes idealistically assume that the beamspace channel enjoys a common support in the frequency domain. In this paper, inspired by the classical successive interference cancellation for multi-user detection, we propose an efficient successive support detection (SSD) based scheme without the common-support assumption. Specifically, we first demonstrate that each path component of the wideband beamspace channel exhibits a unique frequency-varying sparse structure. Based on this, we then successively estimate all sparse path components. For each path component, its supports at different frequencies are jointly estimated to improve the estimation accuracy, and then its influence is removed in order to estimate the remaining path components. Once all path components have been estimated, the wideband beamspace channel can be recovered at low complexity. Simulation results verify that the proposed SSD-based beamspace channel estimation scheme achieves higher accuracy than existing wideband schemes.
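As a structural illustration of the successive-cancellation idea (not the actual SSD algorithm, which exploits the known frequency-varying support structure rather than per-subcarrier peak picking), a minimal Python sketch might look as follows; the channel dimensions, support size, and noise level are illustrative assumptions.

```python
import numpy as np

def successive_support_detection(Y, num_paths, support_size=3):
    """Illustrative successive-cancellation estimate of a sparse beamspace channel.

    Y: (num_beams, num_subcarriers) noisy beamspace observation.
    For each assumed path, the strongest `support_size` beams are detected
    at every subcarrier in the residual, kept as that path's contribution,
    and subtracted before the next path is estimated.
    """
    residual = Y.astype(complex).copy()
    estimate = np.zeros_like(residual)
    for _ in range(num_paths):
        component = np.zeros_like(residual)
        for k in range(residual.shape[1]):                  # each subcarrier
            idx = np.argsort(np.abs(residual[:, k]))[-support_size:]
            component[idx, k] = residual[idx, k]            # keep detected support
        estimate += component
        residual -= component                               # interference cancellation
    return estimate

# toy usage: two synthetic "paths" plus noise in a 32-beam, 8-subcarrier channel
rng = np.random.default_rng(0)
H = np.zeros((32, 8), dtype=complex)
H[5, :] = 1.0
H[20, :] = 0.5
Y = H + 0.05 * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape))
H_hat = successive_support_detection(Y, num_paths=2)
print("NMSE:", float(np.linalg.norm(H_hat - H) ** 2 / np.linalg.norm(H) ** 2))
```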
Exploiting Hierarchical Activations of Neural Network for Image Retrieval
Convolutional Neural Networks (CNNs) have achieved breakthroughs on several image retrieval benchmarks. Most previous works re-formulate CNNs as global feature extractors used for linear scan. This paper proposes a Multi-layer Orderless Fusion (MOF) approach to integrate the activations of a CNN in the Bag-of-Words (BoW) framework. Specifically, through only one forward pass through the network, we extract multi-layer CNN activations of local patches. Activations from each layer are aggregated in one BoW model, and the several BoW models are combined with late fusion. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method.
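To make the aggregation step concrete, here is a minimal Python sketch of building one BoW histogram per layer and fusing them; the random arrays standing in for patch activations, the codebook sizes, and the use of concatenation in place of score-level late fusion are all illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def bow_histogram(features, codebook):
    """Hard-assign local features to their nearest codewords and count them."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / (hist.sum() + 1e-12)

def mof_signature(per_layer_feats, per_layer_codebooks):
    """Multi-layer orderless fusion sketch: one BoW histogram per layer,
    concatenated (concatenation stands in for the late fusion in the paper)."""
    return np.concatenate([bow_histogram(f, c)
                           for f, c in zip(per_layer_feats, per_layer_codebooks)])

# toy usage with random stand-ins for patch activations from two CNN layers
rng = np.random.default_rng(0)
feats = [rng.standard_normal((50, 64)), rng.standard_normal((50, 128))]   # 50 patches/layer
books = [rng.standard_normal((32, 64)), rng.standard_normal((32, 128))]   # 32 codewords/layer
print("signature length:", mof_signature(feats, books).shape[0])
```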
Evaluation of a hierarchical reinforcement learning spoken dialogue system
We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with ‘Precision-Recall’. The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi-)automatic design of optimized dialogue behaviours in larger-scale systems. © 2009 Elsevier Ltd. All rights reserved.
Motivation and barriers to participation in virtual knowledge-sharing communities of practice
Alexander Ardichvili is an assistant professor at the department of Human Resource Education, University of Illinois at Urbana/Champaign. He received his MBA and Ph.D. from the University of Minnesota, and Ph.D. from the University of Moscow. Dr. Ardichvili has published peer-reviewed articles and book chapters in the areas of human resource development, entrepreneurship, and knowledge management ([email protected]). Vaughn Page is a doctoral student in Human Resource Education at the University of Illinois at Urbana-Champaign. His research interest centers on knowledge management and communities of practice. Page earned a Bachelor’s degree in Career and Organizational Studies and a Master’s in Training & Development from Eastern Illinois University, Charleston, Illinois ([email protected]). Tim Wentling is a professor in the Department of Library and Information Science and a Senior Research Scientist in the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. Professor Wentling holds a PhD in Education and an MBA from the University of Illinois and a Master of Science in Educational Psychology from the University of Wisconsin. Professor Wentling is the leader of the Knowledge and Learning Systems Group at NCSA where he heads a team of cross-disciplinary faculty, postdoctoral researchers, and graduate students ([email protected]). Abstract This paper reports the results of a qualitative study of motivation and barriers to employee participation in virtual knowledge-sharing communities of practice at Caterpillar Inc., a Fortune 100, multinational corporation. The study indicates that, when employees view knowledge as a public good belonging to the whole organization, knowledge flows easily. However, even when individuals give the highest priority to the interests of the organization and of their community, they tend to shy away from contributing knowledge for a variety of reasons. Specifically, employees hesitate to contribute out of fear of criticism, or of misleading the community members (not being sure that their contributions are important, or completely accurate, or relevant to a specific discussion). To remove the identified barriers, there is a need for developing various types of trust, ranging from knowledge-based to institution-based trust. Future research directions and implications for KM practitioners are formulated.
Is Corporate Governance Different for Bank Holding Companies?
In the wake of the recent corporate scandals, corporate governance practices have received heightened attention. Shareholders, creditors, regulators, and academics are examining the decision-making process in corporations and other organizations and are proposing changes in governance structures to enhance accountability and efficiency. To the extent that these proposals are based on academic research, they generally draw upon a large body of studies on the governance of firms in unregulated, nonfinancial industries. Financial institutions, however, are very different from firms in unregulated industries, such as manufacturing firms. Thus, the question arises as to whether these proposals and reforms can also be effective at enhancing the governance of financial institutions, and, in particular, banking firms. The question is a difficult one to answer, though, given the little research on the governance of banking firms. Therefore, in order to evaluate reforms to the governance structures of banking firms, it is important to understand current governance practices as well as how governance differs between banking and unregulated firms. Otherwise, governance proposals cannot be fine-tuned. Significantly, uniformly designed proposals that do not take into account industry differences at the very least may be ineffective in improving the governance of financial institutions, and at worst may have unintended negative consequences. Accordingly, this article examines corporate governance in banking firms. In particular, we study corporate governance variables identified as relevant by academics and practitioners and describe their differences and similarities vis-à-vis banking firms and manufacturing firms. Because public information on governance characteristics is generally available only for publicly traded bank holding companies (BHCs), we examine the governance of BHCs and not banks. We also discuss the effect of regulation—such as supervisory and regulatory requirements at the state and Office of the Comptroller of the Currency (OCC) levels—prior to 2000 on banking firm behavior. Many typical external governance mechanisms, such as the threat of hostile takeovers in the industry, are absent in the case of banking firms; therefore, we focus primarily on internal governance structures and shareholder block ownership. Our goal is to provide useful information and a road map for thinking about the governance of financial institutions, in terms of reform as well as research. We discuss the potential benefits and costs associated with some of the corporate governance variables for an average firm. However, we stress that all of these variables are ultimately part of a simultaneous system that determines the corporation’s value and the allocation of such value among claimants. Also, different governance mechanisms may be substitutes for one another. For example, certain executive pay packages can vary across firms, even in the same business environment, for good reason. Firms with more effective boards may have more
Gated Bayesian networks for algorithmic trading
This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. In this paper we also present an algorithm for semi-automatic learning of GBNs. We use the algorithm to learn GBNs that output buy and sell decisions for use in algorithmic trading systems. We show how the learnt GBNs can substantially lower risk towards invested capital, while they at the same time generate similar or better rewards, compared to the benchmark investment strategy buy-and-hold. We also explore some differences and similarities between GBNs and other related formalisms.
Multi-Beam Multi-Layer Leaky-Wave SIW Pillbox Antenna for Millimeter-Wave Applications
This work proposes a novel multi-beam leaky-wave pillbox antenna. The antenna system is based on three main parts: feeding part (integrated horns), quasi-optical system and radiating part. The radiating and input parts are placed in two different stacked substrates connected by an optimized quasi-optical system. In contrast to conventional pillbox antennas, the quasi-optical system is made by a pin-made integrated parabola and several coupling slots whose sizes and positions are used to efficiently transfer the energy coming from the input part to the radiating part. The latter consists of a printed leaky-wave antenna, namely an array of slots etched on the uppermost metal layer. Seven pin-made integrated horns are placed in the focal plane of the integrated parabola to radiate seven beams in the far field. Each part of the antenna structure can be optimized independently, thus facilitating and speeding up the complete antenna design. The antenna concept has been validated by measurements (around 24 GHz) showing a scanning capability over ±30° in azimuth and more than 20° in elevation thanks to the frequency scanning behavior of the leaky-wave radiating part. The proposed antenna is well suited to low-cost printed circuit board fabrication process, and its low profile and compactness make it a very promising solution for applications in the millimeter-wave range.
Time series feature extraction for data mining using DWT and DFT
A new method of dimensionality reduction for time series data mining is proposed. Each time series is compressed with a wavelet or Fourier decomposition. Instead of using only the first coefficients, a new method of choosing the best coefficients for a set of time series is presented. A criterion function is evaluated using all values of a coefficient position to determine a good set of coefficients. The optimal criterion function with respect to energy preservation is given. For many real-life data sets, much more energy can be preserved, which is advantageous for data mining tasks. All time series to be mined, or at least a representative subset, need to be available a priori.
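A minimal sketch of the coefficient-selection idea (under the assumption that preserved energy is the criterion, as the abstract states): compute the DFT of every series in the set, rank coefficient positions by their mean energy across the whole set, and keep the same best positions for every series.

```python
import numpy as np

def select_dft_features(series_set, k=8):
    """Pick the k DFT coefficient positions with the largest mean energy
    over a set of equal-length time series, and return the reduced
    representation (the chosen positions and the coefficients at them)."""
    X = np.asarray(series_set, dtype=float)        # shape (num_series, length)
    coeffs = np.fft.rfft(X, axis=1)                # one-sided spectrum per series
    energy = np.mean(np.abs(coeffs) ** 2, axis=0)  # mean energy per position
    best = np.argsort(energy)[-k:]                 # best positions for the whole set
    return best, coeffs[:, best]

# toy usage: the chosen positions preserve most of the set's spectral energy
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128, endpoint=False)
data = [np.sin(2 * np.pi * 3 * t + p) + 0.1 * rng.standard_normal(128)
        for p in rng.uniform(0, 2 * np.pi, 50)]
best, reduced = select_dft_features(data, k=8)
full = np.fft.rfft(np.asarray(data), axis=1)
print("preserved energy fraction:",
      float(np.sum(np.abs(reduced) ** 2) / np.sum(np.abs(full) ** 2)))
```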
Sentiment expression via emoticons on social media
Emoticons (e.g., :) and :( ) have been widely used in sentiment analysis and other NLP tasks, as features for machine learning algorithms or as entries of sentiment lexicons. In this paper, we argue that while emoticons are strong and common signals of sentiment expression on social media, the relationship between emoticons and sentiment polarity is not always clear. Thus, any algorithm that deals with sentiment polarity should take emoticons into account, but extreme caution should be exercised in choosing which emoticons to depend on. First, to demonstrate the prevalence of emoticons on social media, we analyzed the frequency of emoticons in a large recent Twitter data set. Then we carried out four analyses to examine the relationship between emoticons and sentiment polarity as well as the contexts in which emoticons are used. The first analysis surveyed a group of participants for their perceived sentiment polarity of the most frequent emoticons. The second analysis examined clustering of words and emoticons to better understand the meaning conveyed by the emoticons. The third analysis compared the sentiment polarity of microblog posts before and after emoticons were removed from the text. The last analysis tested the hypothesis that removing emoticons from text hurts sentiment classification by training two models, with and without emoticons in the text, respectively. The results confirm the arguments that: 1) a few emoticons are strong and reliable signals of sentiment polarity, and one should take advantage of them in any sentiment analysis; 2) a large group of emoticons conveys complicated sentiments, and hence they should be treated with extreme caution.
Is there a safe plateau pressure in ARDS? The right heart only knows
Airway pressure limitation is now a largely accepted strategy in adult respiratory distress syndrome (ARDS) patients; however, some debate persists about the exact level of plateau pressure which can be safely used. The objective of the present study was to examine whether the echocardiographic evaluation of right ventricular function performed in ARDS may help to answer this question. For more than 20 years, we have regularly monitored right ventricular function by echocardiography in ARDS patients, during two different periods: a first (1980–1992) in which airway pressure was not limited, and a second (1993–2006) in which airway pressure was limited. By pooling our data, we can observe the effect of a large range of plateau pressures upon mortality rate and the incidence of acute cor pulmonale. In this whole group of 352 ARDS patients, mortality rate and incidence of cor pulmonale were 80 and 56%, respectively, when plateau pressure was > 35 cmH2O; 42 and 32%, respectively, when plateau pressure was between 27 and 35 cmH2O; and 30 and 13%, respectively, when plateau pressure was < 27 cmH2O. Moreover, a clear interaction between plateau pressure and cor pulmonale was evidenced: whereas the odds ratio of dying for an increase in plateau pressure from 18–26 to 27–35 cmH2O in patients without cor pulmonale was 1.05 (p = 0.635), it was 3.32 in patients with cor pulmonale (p < 0.034). We hypothesize that monitoring of right ventricular function by echocardiography at the bedside might help to control the safety of the plateau pressure used in ARDS.
Performance of existing risk scores in screening for undiagnosed diabetes: an external validation study.
AIM To compare the performance of nine published strategies for the selection of individuals prior to screening for undiagnosed diabetes. METHODS We conducted a validation study, based on a cross-sectional analysis of 6990 participants of the Whitehall II study, an occupational cohort of civil servants in London. We calculated sensitivity, specificity and the area under the receiver operating characteristic (ROC) curve, indicative of the ability of a risk score to correctly identify those with undiagnosed diabetes. RESULTS The prevalence of unknown diabetes was 2.0%. At a set level of sensitivity (0.70), the specificity of the different scores ranged between 0.41 and 0.57. A reference model, based solely on age and body mass index had an area under the ROC curve of 0.67 [95% confidence interval (CI): 0.62, 0.72]. Four scores had a lower area under the ROC curve (lowest ROC AUC: 0.62; 95% CI: 0.58, 0.67) compared with the reference model, while the other five scores had similar areas (highest ROC AUC: 0.68; 95% CI: 0.63, 0.72). All ROC curve areas were lower than those reported in the original publications and validation studies. CONCLUSIONS Existing risk scores for the detection of undiagnosed diabetes perform less well in a large validation cohort compared with previous validation studies. Our study indicates that non-invasive risk scores require further refinement and testing before they can be used as the first step in a diabetes screening programme.
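For concreteness, the snippet below shows, on purely synthetic data, how the two reported quantities, specificity at a fixed sensitivity of 0.70 and the area under the ROC curve, can be computed from a risk score and the true diabetes status; the score distribution and prevalence used here are illustrative assumptions only.

```python
import numpy as np

def specificity_at_sensitivity(scores, labels, target_sens=0.70):
    """Specificity at the strictest threshold whose sensitivity reaches the target."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    for thr in np.unique(scores)[::-1]:           # from strictest to loosest
        pred = scores >= thr
        if np.mean(pred[labels == 1]) >= target_sens:
            return float(np.mean(~pred[labels == 0]))
    return 0.0

def roc_auc(scores, labels):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()   # P(case scores higher than non-case)
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

# illustrative synthetic data: 2% prevalence, modestly informative score
rng = np.random.default_rng(0)
labels = rng.random(7000) < 0.02
scores = rng.normal(0, 1, 7000) + 0.7 * labels
print("AUC:", roc_auc(scores, labels))
print("specificity at 70% sensitivity:", specificity_at_sensitivity(scores, labels))
```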
Structure and Evolution Characteristics of the Global Sedimentary Basins: Evidence from the N-S Long Profiles
Basin formation and structure record basin tectonic evolution, which reflects the tectonic settings of different periods. With the development of oil and gas exploration, a large number of seismic sections have been accumulated, providing basic data for comparing the characteristics of basins. Materials from hundreds of important sedimentary basins were collected to study basin tectonic evolution and its background systematically on the basis of global tectonic theory. Two large-scale profiles (thousands of km in length) were drawn: an India-Siberia-North America-South America profile and an Africa-Mediterranean-Europe-Arctic-Siberia-Australia profile. The structures and evolution characteristics of the global sedimentary basins show that the structure and evolution of different types of basins are controlled by plate tectonic evolution. Different types of sedimentary basins are arranged in an orderly manner parallel to the continents, such as ocean basins, trenches, forearc basins, back-arc foreland basins, cratonic basins, rift basins, and passive continental margin basins. The basins in the Eurasian plate were affected by the Alpine orogeny (Himalayan movement). The evolution and structure of basin groups in Asia were controlled by the ancient Asian continent, the Tethys, and the west Pacific tectonic zone. Owing to the effects of plate boundaries, different basin groups in the same continent are closely related in structure, sedimentation, and tectonic events. The basins containing the richest oil and gas are mainly located within plates, away from compressional plate boundaries. These basins generally have a huge cumulative deposition thickness, weak interior faulting, and stable sedimentation and tectonic subsidence.
Fungal bioremediation of copper, chromium and boron treated wood as studied by electron paramagnetic resonance
In future years, problems concerning the disposal of waste copper/chromium-treated wood will increase significantly. One of the environmentally friendly options of dealing with such treated wood is through bioremediation with copper-tolerant wood decay fungi in order to recycle both the wood fibers and the heavy metals. To study changes during the bioremediation process, Norway spruce (Picea abies) samples were vacuum impregnated with 5% CCB solution. Some samples were also impregnated with copper or chromium solution of the same concentration as in the CCB preservative. Following conditioning of the samples, they were then exposed to two copper-tolerant brown rot fungi (Antrodia vaillantii, Leucogyrophana pinastri) and two copper-sensitive brown rot fungi (Gloeophyllum trabeum, Poria monticola) for a period of 4–8 weeks. After exposure, the samples were cleaned of the mycelia and leached with water or 1.25% ammonia solution for 4 days. The concentrations of Cr and Cu in the leachates were determined. After the leaching process, the samples were studied using electron paramagnetic resonance (EPR). The results obtained showed the important role oxalic acid produced by the decay fungi plays during leaching of the metals from the treated wood. Furthermore, it was also found that though excretion of oxalic acid is necessary for the leaching of metals, it does not fully explain fungal ability to decay copper preserved wood. © 2003 Elsevier Ltd. All rights reserved.
LI-RADS (Liver Imaging Reporting and Data System): summary, discussion, and consensus of the LI-RADS Management Working Group and future directions.
To improve standardization and consensus regarding the performance, interpretation, and reporting of computed tomography (CT) and magnetic resonance imaging (MRI) examinations of the liver in patients at risk for hepatocellular carcinoma (HCC), LI-RADS (Liver Imaging Reporting and Data System) was launched in March 2011 and adopted by many clinical practices throughout the world. LI-RADS categorizes nodules recognized at CT or MRI, in patients at high risk of HCC, as definitely benign, probably benign, of intermediate probability of being HCC, probably HCC, and definitely HCC (corresponding to LI-RADS categories 1-5). The LI-RADS Management Working Group, consisting of internationally recognized medical and surgical experts on HCC management, as well as radiologists involved in the development of LI-RADS, was convened to evaluate management implications related to radiological categorization of the estimated probability that a lesion will ultimately be diagnosed as HCC. In this commentary, we briefly review LI-RADS and the initial consensus of the LI-RADS Management Working Group reached during its deliberations in 2013. We then focus on the initial discordance of LI-RADS with the American Association for the Study of Liver Diseases and Organ Procurement Transplant Network guidelines, the basis for these differences, and how they are being addressed going forward to optimize reporting of CT and MRI findings in patients at risk for HCC and to increase consensus throughout the international community of physicians involved in the diagnosis and treatment of HCC.
Asynchronous frameless event-based optical flow
This paper introduces a process to compute optical flow using an asynchronous event-based retina at high speed and low computational load. A new generation of artificial vision sensors has now started to rely on biologically inspired designs for light acquisition. Biological retinas, and their artificial counterparts, are totally asynchronous and data driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework for processing visual data using asynchronous event-based acquisition, providing a method for the evaluation of optical flow. The paper shows that current limitations of optical flow computation can be overcome by using event-based visual acquisition, where high data sparseness and high temporal resolution permit the computation of optical flow with micro-second accuracy and at very low computational cost.
BIRDS OF A FEATHER: Homophily in Social Networks
Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people’s personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people’s social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localized positions) within social space. We argue for more research on: (a) the basic ecological processes that link organizations, associations, cultural communities, social movements, and many other social forms; (b) the impact of multiplex ties on the patterns of homophily; and (c) the dynamics of network change over time through which networks and other social entities co-evolve.
Backstepping control design of a DC-DC converter - DC machine association
This paper deals with the synthesis of a speed control strategy for a DC motor drive based on an output feedback backstepping controller. The backstepping method takes the nonlinearities of the system into account in the design of the control law and leads to a system that is asymptotically stable in the sense of Lyapunov theory. Simulation results are presented to validate the feasibility and effectiveness of the proposed strategy.
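The abstract does not reproduce the control law, so the following is only a generic illustration of the backstepping construction on a double integrator (not the actual DC-DC converter / DC machine model), showing how the Lyapunov function certifies asymptotic stability:

\begin{align}
  &\dot{x}_1 = x_2, \qquad \dot{x}_2 = u, \qquad z_1 = x_1 - x_{1d},\\
  &\alpha = \dot{x}_{1d} - k_1 z_1 \quad (\text{virtual control for } x_2), \qquad z_2 = x_2 - \alpha,\\
  &u = \dot{\alpha} - z_1 - k_2 z_2, \qquad
  V = \tfrac{1}{2} z_1^2 + \tfrac{1}{2} z_2^2
  \;\Longrightarrow\; \dot{V} = -k_1 z_1^2 - k_2 z_2^2 \le 0.
\end{align}

With gains $k_1, k_2 > 0$ the closed loop is asymptotically stable; the paper applies the same recursive construction to the nonlinear converter-machine dynamics.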
Myotonic Dystrophy Type 1 (DM1): From the Genetics to Molecular Mechanisms
For a long time, the human genome was considered an intrinsically stable entity; however, it is currently known that our genome contains many unstable elements consisting of tandem repeat elements, mainly short tandem repeats (STR), also known as microsatellites or simple sequence repeats (SSR) (Ellegren, 2000). These sequences involve a repetitive unit of 1-6 bp, forming series with lengths from two to several thousand nucleotides. STR are widely found in pro- and eukaryotes, including humans. They appear scattered more or less evenly throughout the human genome, accounting for ca. 3% of the entire genome (Sharma et al., 2007). STR are polymorphic but stable in the general population; however, repeats can become unstable during DNA replication, resulting in mitotic or meiotic contractions or expansions. STR instability is an important and unique form of mutation that is linked to >40 neurological, neurodegenerative, and neuromuscular disorders (Pearson et al., 2005). In particular, abnormal expansion of the trinucleotide repeats (CTG)n, (CGG)n, (CCG)n, (GAA)n, and (CAG)n has been associated with different diseases such as fragile X syndrome, Huntington disease (HD), dentatorubral-pallidoluysian atrophy (DRPLA), Friedreich ataxia (FA), diverse spinocerebellar ataxias (SCA), and myotonic dystrophy type 1 (DM1).
A LabVIEW-Based GPS Receiver Development and Testing Platform with DSP Peripherals: Case study with the C6713 DSK
The modernization of Global Positioning Systems (GPS) and the availability of more complex signals and modulation schemes boost the development of civil and military applications, while the accuracy and coverage of receivers continually improve. Recently, software-defined receiver solutions have gained attention for flexible multimode operation. For these, developers turn to algorithmic and hardware accelerators, or hybrids of the two, for fast prototyping and testing of high-performance receivers under various conditions. This paper presents a new fast-prototyping concept that exploits digital signal processor (DSP) peripherals and the benefits of the host environment using the National Instruments (NI) LabVIEW platform. With a reasonable distribution of tasks between the host hardware and the reconfigurable peripherals, higher performance is achieved. As a case study, the Texas Instruments (TI) TMS320C6713 DSP is used along with a Real Time Data Exchange (RTDX) communication link to compare with similar Simulink-based solutions. The GPS signal for the proposed testbed is created using the NI PXI signal generator and the NI GPS Simulation Toolkit.
Complex spectral minutiae representation for fingerprint recognition
The spectral minutiae representation is designed for combining fingerprint recognition with template protection. This puts several constraints on the fingerprint recognition system: first, no relative alignment of two fingerprints is allowed due to the encrypted storage; second, a fixed-length feature vector is required as input to template protection schemes. The spectral minutiae representation represents a minutiae set as a fixed-length feature vector, which is invariant to translation, rotation and scaling. These characteristics enable the combination of fingerprint recognition systems with template protection schemes and allow for fast minutiae-based matching as well. In this paper, we introduce the complex spectral minutiae representation (SMC): a spectral representation of a minutiae set, like the location-based and the orientation-based spectral minutiae representations (SML and SMO), but one that encodes minutiae orientations differently. SMC improves the recognition accuracy, expressed in terms of the Equal Error Rate, by a factor of about 2–4 compared with SML and SMO. In addition, the paper presents two feature reduction algorithms: the Column-PCA and the Line-DFT feature reductions, which achieve a template size reduction of around 90% and result in a 10–15 times higher matching speed (with 125,000 comparisons per second).
Oblivious Neural Network Computing via Homomorphic Encryption
The problem of secure data processing by means of a neural network (NN) is addressed. Secure processing refers to the possibility that the NN owner does not get any knowledge about the processed data since they are provided to him in encrypted format. At the same time, the NN itself is protected, given that its owner may not be willing to disclose the knowledge embedded within it. The considered level of protection ensures that the data provided to the network and the network weights and activation functions are kept secret. Particular attention is given to prevent any disclosure of information that could bring a malevolent user to get access to the NN secrets by properly inputting fake data to any point of the proposed protocol. With respect to previous works in this field, the interaction between the user and the NN owner is kept to a minimum with no resort to multiparty computation protocols.
Promoting self-regulation through school-based martial arts training
The impact of school-based Tae Kwon Do training on self-regulatory abilities was examined. A self-regulation framework including three domains (cognitive, affective, and physical) was presented. Children (N = 207) from kindergarten through Grade 5 were randomly assigned by homeroom class to either the intervention (martial arts) group or a comparison (traditional physical education) group. Outcomes were assessed using multidimensional, multimodal assessments. After a 3-month intervention, results indicated that the martial arts group demonstrated greater improvements than the comparison group in areas of cognitive self-regulation, affective self-regulation, prosocial behavior, classroom conduct, and performance on a mental math test. A significant Group × Gender interaction was found for cognitive self-regulation and classroom conduct, with boys showing greater improvements than girls. Possible explanations of this interaction as well as implications for components of martial arts training for the development of self-regulation in school-age children are discussed. © 2004 Elsevier Inc. All rights reserved.
Differential variational inequalities
This paper introduces and studies the class of differential variational inequalities (DVIs) in a finite-dimensional Euclidean space. The DVI provides a powerful modeling paradigm for many applied problems in which dynamics, inequalities, and discontinuities are present; examples of such problems include constrained time-dependent physical systems with unilateral constraints, differential Nash games, and hybrid engineering systems with variable structures. The DVI unifies several mathematical problem classes that include ordinary differential equations (ODEs) with smooth and discontinuous right-hand sides, differential algebraic equations (DAEs), dynamic complementarity systems, and evolutionary variational inequalities. Conditions are presented under which the DVI can be converted, either locally or globally, to an equivalent ODE with a Lipschitz continuous right-hand function. For DVIs that cannot be so converted, we consider their numerical resolution via an Euler time-stepping procedure, which involves the solution of a sequence of finite-dimensional variational inequalities. Borrowing results from differential inclusions (DIs) with upper semicontinuous, closed and convex valued multifunctions, we establish the convergence of such a procedure for solving initial-value DVIs. We also present a class of DVIs for which the theory of DIs is not directly applicable, and yet similar convergence can be established. Finally, we extend the method to a boundary-value DVI and provide conditions for the convergence of the method. The results in this paper pertain exclusively to systems with “index” not exceeding two and which have absolutely continuous solutions.
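In the notation suggested by the abstract, an initial-value DVI and an explicit Euler time-stepping scheme for it can be sketched as follows; this is only an illustrative form, and the precise scheme analyzed in the paper may evaluate the algebraic variable differently.

\begin{align}
  &\dot{x}(t) = f\bigl(t, x(t), u(t)\bigr), \qquad x(0) = x_0,\\
  &u(t) \in \operatorname{SOL}\bigl(K, F(t, x(t), \cdot)\bigr)
    \iff \bigl(v - u(t)\bigr)^{\top} F\bigl(t, x(t), u(t)\bigr) \ge 0 \quad \forall\, v \in K,\\
  &\text{Euler step with } h = T/N: \quad
    u^{k} \in \operatorname{SOL}\bigl(K, F(t_k, x^{k}, \cdot)\bigr), \qquad
    x^{k+1} = x^{k} + h\, f\bigl(t_k, x^{k}, u^{k}\bigr).
\end{align}

Each time step therefore requires solving one finite-dimensional variational inequality, which is the "sequence of finite-dimensional variational inequalities" referred to above.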
The adoption of electronic tax filing systems: an empirical study
This paper discusses the factors affecting the adoption of electronic tax-filing systems. Using the technology acceptance model (TAM) as a theoretical framework, this study introduces “perceived credibility” as a new factor that reflects the user’s intrinsic belief in the electronic tax-filing systems, and examines the effect of computer self-efficacy on the intention to use an electronic tax-filing system. Based on a sample of 260 users from a telephone interview, the results strongly support the extended TAM in predicting the intention of users to adopt electronic tax-filing systems. The results also demonstrate the significant effect that computer self-efficacy has on behavioral intention through perceived ease of use, perceived usefulness, and perceived credibility. Based on the findings of this study, implications for electronic tax filing in particular and for e-government services in general are discussed. Finally, this paper concludes by discussing limitations that could be addressed in future studies. © 2002 Elsevier Inc. All rights reserved.
Tablet-level origin of toughening in abalone shells and translation to synthetic composite materials.
Nacre, the iridescent material in seashells, is one of many natural materials employing hierarchical structures to achieve high strength and toughness from relatively weak constituents. Incorporating these structures into composites is appealing as conventional engineering materials often sacrifice strength to improve toughness. Researchers hypothesize that nacre's toughness originates within its brick-and-mortar-like microstructure. Under loading, bricks slide relative to each other, propagating inelastic deformation over millimeter length scales. This leads to orders-of-magnitude increase in toughness. Here, we use in situ atomic force microscopy fracture experiments and digital image correlation to quantitatively prove that brick morphology (waviness) leads to transverse dilation and subsequent interfacial hardening during sliding, a previously hypothesized dominant toughening mechanism in nacre. By replicating this mechanism in a scaled-up model synthetic material, we find that it indeed leads to major improvements in energy dissipation. Ultimately, lessons from this investigation may be key to realizing the immense potential of widely pursued nanocomposites.
Optimal loop unrolling for GPGPU programs
Graphics Processing Units (GPUs) are massively parallel, many-core processors with tremendous computational power and very high memory bandwidth. With the advent of general purpose programming models such as NVIDIA's CUDA and the new standard OpenCL, general purpose programming using GPUs (GPGPU) has become very popular. However, the GPU architecture and programming model have brought along with it many new challenges and opportunities for compiler optimizations. One such classical optimization is loop unrolling. Current GPU compilers perform limited loop unrolling. In this paper, we attempt to understand the impact of loop unrolling on GPGPU programs. We develop a semi-automatic, compile-time approach for identifying optimal unroll factors for suitable loops in GPGPU programs. In addition, we propose techniques for reducing the number of unroll factors evaluated, based on the characteristics of the program being compiled and the device being compiled to. We use these techniques to evaluate the effect of loop unrolling on a range of GPGPU programs and show that we correctly identify the optimal unroll factors. The optimized versions run up to 70% faster than the unoptimized versions.
Security and Privacy in Decentralized Energy Trading Through Multi-Signatures, Blockchain and Anonymous Messaging Streams
Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof of concept for a decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.
A piece of the action: Modulation of sensory-motor regions by action idioms and metaphors
The idea that the conceptual system draws on sensory and motor systems has received considerable experimental support in recent years. Whether the tight coupling between sensory-motor and conceptual systems is modulated by factors such as context or task demands is a matter of controversy. Here, we tested the context sensitivity of this coupling by using action verbs in three different types of sentences in an fMRI study: literal action, apt but non-idiomatic action metaphors, and action idioms. Abstract sentences served as a baseline. The result showed involvement of sensory-motor areas for literal and metaphoric action sentences, but not for idiomatic ones. A trend of increasing sensory-motor activation from abstract to idiomatic to metaphoric to literal sentences was seen. These results support a gradual abstraction process whereby the reliance on sensory-motor systems is reduced as the abstractness of meaning as well as conventionalization is increased, highlighting the context sensitive nature of semantic processing.
Low-Rank Common Subspace for Multi-view Learning
Multi-view data is very popular in real-world applications, as different view-points and various types of sensors help to better represent data when fused across views or modalities. Samples from different views of the same class are less similar than those with the same view but different class. We consider a more general case that prior view information of testing data is inaccessible in multi-view learning. Traditional multi-view learning algorithms were designed to obtain multiple view-specific linear projections and would fail without this prior information available. That was because they assumed the probe and gallery views were known in advance, so the correct view-specific projections were to be applied in order to better learn low-dimensional features. To address this, we propose a Low-Rank Common Subspace (LRCS) for multi-view data analysis, which seeks a common low-rank linear projection to mitigate the semantic gap among different views. The low-rank common projection is able to capture compatible intrinsic information across different views and also well-align the within-class samples from different views. Furthermore, with a low-rank constraint on the view-specific projected data and that transformed by the common subspace, the within-class samples from multiple views would concentrate together. Different from the traditional supervised multi-view algorithms, our LRCS works in a weakly supervised way, where only the view information gets observed. Such a common projection can make our model more flexible when dealing with the problem of lacking prior view information of testing data. Two scenarios of experiments, robust subspace learning and transfer learning, are conducted to evaluate our algorithm. Experimental results on several multi-view datasets reveal that our proposed method outperforms state-of-the-art, even when compared with some supervised learning methods.
Computational models in the debate over language learnability
Computational models have played a central role in the debate over language learnability. This article discusses how they have been used in different “stances”, from generative views to more recently introduced explanatory frameworks based on embodiment, cognitive development and cultural evolution. By digging into the details of certain specific models, we show how they organize, transform and rephrase defining questions about what makes language learning possible for children. Finally, we present a tentative synthesis to recast the debate using the notion of learning bias.
From patches to pixels in Non-Local methods: Weighted-average reprojection
Since their introduction in denoising, the family of non-local methods, of which Non-Local Means (NL-Means) is the most famous member, has proved its ability to challenge other powerful methods such as wavelet-based approaches and variational techniques. Though simple to implement and efficient in practice, the classical NL-Means suffers from ringing artifacts around edges. In this paper, we present an easy-to-implement and time-efficient modification of NL-Means based on a better reprojection from the patch space to the original (image) pixel space. We illustrate the performance of our method on a toy example and on some classical images.
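To make the reprojection idea concrete, the following is a deliberately slow, minimal Python sketch: every reference patch is denoised by a similarity-weighted average of the patches in its search window, and each pixel then averages the estimates it receives from all patches covering it. The uniform reprojection average, the filter parameter h, and the window sizes are illustrative simplifications of the weighted reprojection proposed in the paper.

```python
import numpy as np

def nl_means_reproject(img, patch=3, search=5, h=0.15):
    """Toy NL-Means with patch-to-pixel reprojection (slow, for tiny images)."""
    img = np.asarray(img, dtype=float)
    half, out, count = patch // 2, np.zeros_like(img), np.zeros_like(img)
    H, W = img.shape
    for i in range(half, H - half):
        for j in range(half, W - half):
            ref = img[i - half:i + half + 1, j - half:j + half + 1]
            acc, wsum = np.zeros_like(ref), 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if half <= ii < H - half and half <= jj < W - half:
                        cand = img[ii - half:ii + half + 1, jj - half:jj + half + 1]
                        w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                        acc += w * cand
                        wsum += w
            est = acc / wsum                       # denoised estimate of this patch
            out[i - half:i + half + 1, j - half:j + half + 1] += est
            count[i - half:i + half + 1, j - half:j + half + 1] += 1.0
    mask = count > 0
    out[mask] /= count[mask]
    out[~mask] = img[~mask]                        # untouched border pixels
    return out

# toy usage on a small noisy step image
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = nl_means_reproject(noisy)
print("MSE noisy:", float(np.mean((noisy - clean) ** 2)),
      "MSE denoised:", float(np.mean((den - clean) ** 2)))
```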
Wireless Sensor Network Operating System Design Rules Based on Real-World Deployment Survey
Wireless sensor networks (WSNs) have been a widely researched field since the beginning of the 21st century. The field is already maturing, and TinyOS has established itself as the de facto standard WSN Operating System (OS). However, the WSN researcher community is still active in building more flexible, efficient and user-friendly WSN operating systems. Often, WSN OS design is based either on practical requirements of a particular research project or research group’s needs or on theoretical assumptions spread in the WSN community. The goal of this paper is to propose WSN OS design rules that are based on a thorough survey of 40 WSN deployments. The survey unveils trends of WSN applications and provides empirical substantiation to support widely usable and flexible WSN operating system design.
The use of processed allograft dermal matrix for intraoral resurfacing: an alternative to split-thickness skin grafts.
BACKGROUND The standard reconstruction of significant mucosal defects in head and neck surgery has been split-thickness skin grafting (STSG). OBJECTIVE To examine the use of a commercially available acellular dermal matrix as an alternative to STSG to reduce the scarring and contracture inherent to meshed split-thickness autografting and avoid the additional donor site morbidity. PATIENTS AND METHODS Twenty-nine patients with full-thickness defects of the oral cavity were included in this retrospective chart review. Candidate patients had their operative procedure performed at a tertiary care center during a 24-month period. Allograft dermal matrix, an acellular tissue-processed biomaterial, was applied to these intraoral defects. The defects were reconstructed with an acellular dermal graft matrix in the same technical fashion as with an autologous skin graft. Patients were evaluated for rate of "take," functional return time to reepithelialization, average surface area of graft, associated pain and discomfort, evidence of restrictive graft contracture, patient diagnosis, and graft location within the oral cavity. Any evidence of incomplete graft reepithelialization was considered grounds for graft failure, either complete or incomplete. Epithelialization and contracture were assessed during outpatient clinical examinations. Patient complaints with regard to discomfort at the graft bed were considered evidence of pain. RESULTS Graft locations included 9 in the tongue (32%), 5 in the maxillary oral vestibule (17%), 4 in the mandible (14%), 4 in the floor of mouth (14%), 3 in the hard and/or soft palate (10%), 3 in the tonsil (10%), and 1 in the lip (3%). The overall rate of take was 90% with complete epithelialization noted on clinical evaluation within 4 weeks. Patients were followed up for an average of 8.6 months. The average grafted surface area was 25 cm2. Pain or discomfort was noted in 3 patients (12%). One patient (4%) was noted to have clinical evidence of graft contracture. CONCLUSIONS Allograft dermal matrix was successful as a substitute to autologous STSG for resurfacing of intraoral defects. Allograft dermal matrix may be considered a useful reconstructive option for patients with oral mucosal defects.
A Neural Attention Model for Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
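As a minimal numeric illustration of the attention idea (not the paper's exact parameterization), the context vector used for the next-word decision is a softmax-weighted sum of input word representations, with weights conditioned on the summary generated so far; the random embeddings below are stand-ins, not learned parameters.

```python
import numpy as np

def attention_context(input_vecs, query_vec):
    """Softmax-weighted sum of input word vectors given a decoder query."""
    scores = input_vecs @ query_vec                      # one score per input word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ input_vecs                 # attention map and context

# toy usage with random embeddings for a 6-word input sentence
rng = np.random.default_rng(0)
inputs = rng.standard_normal((6, 8))      # 6 input words, 8-dim embeddings
query = rng.standard_normal(8)            # representation of the summary so far
weights, context = attention_context(inputs, query)
print("attention weights:", np.round(weights, 3))
```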
On Becoming a Strategic Partner: The Role of Human Resources in Gaining Competitive Advantage
Although managers cite human resources as a firm's most important asset, many organizational decisions do not reflect this belief. This paper uses the VRIO (value, rareness, imitability, and organization) framework to examine the role that the Human Resource (HR) function plays in developing a sustainable competitive advantage. We discuss why some popularly cited sources of sustainable competitive advantage are not, and what aspects of a firm's human resources can provide a source of sustainable competitive advantage. We also examine the role of the HR executive as a strategic partner in developing and maintaining competitive advantage within the firm.
Characterising Semantic Relatedness using Interpretable Directions in Conceptual Spaces
Various applications, such as critique-based recommendation systems and analogical classifiers, rely on knowledge of how different entities relate. In this paper, we present a methodology for identifying such semantic relationships, by interpreting them as qualitative spatial relations in a conceptual space. In particular, we use multi-dimensional scaling to induce a conceptual space from a relevant text corpus and then identify directions that correspond to relative properties such as “more violent than” in an entirely unsupervised way. We also show how a variant of FOIL is able to learn natural categories from such qualitative representations, by simulating a fortiori inference, an important pattern of commonsense reasoning.
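As an illustration of what a direction in a conceptual space is, the sketch below embeds entities with classical MDS and then fits a direction along which a relative property such as "more violent than" increases; fitting the direction from assumed property scores is only for illustration, since the paper identifies such directions in an unsupervised way, and the toy dissimilarities and scores are made up.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed entities from a dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

def property_direction(coords, strengths):
    """Least-squares direction along which a property (e.g. 'violent') increases."""
    X = np.column_stack([coords, np.ones(len(coords))])
    w, *_ = np.linalg.lstsq(X, strengths, rcond=None)
    d = w[:-1]
    return d / np.linalg.norm(d)

# toy usage: 5 entities with hand-made dissimilarities and property scores
D = np.array([[0, 1, 2, 3, 4],
              [1, 0, 1, 2, 3],
              [2, 1, 0, 1, 2],
              [3, 2, 1, 0, 1],
              [4, 3, 2, 1, 0]], dtype=float)
coords = classical_mds(D)
strengths = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # e.g. relative frequency of "violent"
direction = property_direction(coords, strengths)
print("ranking along the direction:", np.argsort(coords @ direction))
```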
The EPIC-Norfolk Eye Study: rationale, methods and a cross-sectional analysis of visual impairment in a population-based cohort
OBJECTIVES To summarise the methods of the European Prospective Investigation of Cancer (EPIC)-Norfolk Eye Study, and to present data on the prevalence of visual impairment and associations with visual impairment in the participants. DESIGN A population-based cross-sectional study nested within an on-going prospective cohort study (EPIC). SETTING East England population (the city of Norwich and its surrounding small towns and rural areas). PARTICIPANTS A total of 8623 participants aged 48-92 years attended the Eye Study and underwent assessment of visual acuity, autorefraction, biometry, tonometry, corneal biomechanical measures, scanning laser polarimetry, confocal scanning laser ophthalmoscopy, fundal photography and automated perimetry. OUTCOME MEASURES Visual impairment was defined according to the WHO classification and the UK driving standard, and was based on presenting visual acuity. Summary measures of other ophthalmic measurements are also presented. RESULTS The prevalence (95% CI) of WHO-defined moderate-to-severe visual impairment and blindness was 0.74% (0.55% to 0.92%). The prevalence (95% CI) of presenting visual acuity worse than the UK driving standard was 5.87% (5.38% to 6.37%). Older age was significantly associated with visual impairment or blindness (p<0.001). Presenting visual acuity worse than UK driving standard was associated with older age (p<0.001), female sex (p=0.005) and lower educational level (p=0.022). CONCLUSIONS The prevalence of blindness and visual impairment in this selected population was low. Visual impairment was more likely in older participants, women and those with a lower educational level.
Object Co-segmentation via Graph Optimized-Flexible Manifold Ranking
Aiming at automatically discovering the common objects contained in a set of relevant images and segmenting them as foreground simultaneously, object co-segmentation has become an active research topic in recent years. Although a number of approaches have been proposed to address this problem, many of them are designed with the misleading assumption, unscalable prior, or low flexibility and thus still suffer from certain limitations, which reduces their capability in the real-world scenarios. To alleviate these limitations, we propose a novel two-stage co-segmentation framework, which introduces the weak background prior to establish a globally close-loop graph to represent the common object and union background separately. Then a novel graph optimized-flexible manifold ranking algorithm is proposed to flexibly optimize the graph connection and node labels to co-segment the common objects. Experiments on three image datasets demonstrate that our method outperforms other state-of-the-art methods.
Generative Attention Model with Adversarial Self-learning for Visual Question Answering
Visual question answering (VQA) is arguably one of the most challenging multimodal understanding problems as it requires reasoning and deep understanding of the image, the question, and their semantic relationship. Existing VQA methods heavily rely on attention mechanisms to semantically relate the question words with the image contents for answering the related questions. However, most of the attention models are simplified as a linear transformation, over the multimodal representation, which we argue is insufficient for capturing the complex nature of the multimodal data. In this paper we propose a novel generative attention model obtained by adversarial self-learning. The proposed adversarial attention produces more diverse visual attention maps and it is able to generalize the attention better to new questions. The experiments show the proposed adversarial attention leads to a state-of-the-art VQA model on the two VQA benchmark datasets, VQA v1.0 and v2.0.
People on the move in a changing climate: the regional impact of environmental change on migration
1: Regional Perspectives on Migration, the Environment and Climate Change: Frank Laczko, Etienne Piguet.- 2: Migration and Environmental Change in Asia: Graeme Hugo and Douglas K. Bardsley.- 3: Environmental Change and Migration between Europe and its Neighbours: Mark Mulligan, Sophia Burke and Caitlin Douglas.- 4: Environmental Change and Human Migration in Sub-Saharan Africa: James Morrissey.- 5: Climate Change, Extreme Weather Events, and Migration: Review of the Literature for Five Arab Countries: Quentin Wodon, Nicholas Burger, Audra Grant, George Joseph, Andrea Liverani and Olesya Tkacheva.- 6: Migration and Environmental Change in North America (USA and Canada): Susana B. Adamo and Alexander M. de Sherbinin.- 7: Migration and Climate Change in Latin America and the Caribbean: Raoul Kaenzig and Etienne Piguet.- 8: Migration and Climate Change in Oceania: Richard Bedford and John Campbell.- 9: The Changing Hindu Kush Himalayas: Environmental Change and Migration: Soumyadeep Banerjee, Richard Black, Dominic Kniveton, Michael Kollmair.- 10: Regional Policy Perspectives: Karoline Popp.
Cyclic AMP imaging in adult cardiac myocytes reveals far-reaching beta1-adrenergic but locally confined beta2-adrenergic receptor-mediated signaling.
Beta(1)- and beta(2)-adrenergic receptors (betaARs) are known to differentially regulate cardiomyocyte contraction and growth. We tested the hypothesis that these differences are attributable to spatial compartmentation of the second messenger cAMP. Using a fluorescent resonance energy transfer (FRET)-based approach, we directly monitored the spatial and temporal distribution of cAMP in adult cardiomyocytes. We developed a new cAMP-FRET sensor (termed HCN2-camps) based on a single cAMP binding domain of the hyperpolarization activated cyclic nucleotide-gated potassium channel 2 (HCN2). Its cytosolic distribution, high dynamic range, and sensitivity make HCN2-camps particularly well suited to monitor subcellular localization of cardiomyocyte cAMP. We generated HCN2-camps transgenic mice and performed single-cell FRET imaging on freshly isolated cardiomyocytes. Whole-cell superfusion with isoproterenol showed a moderate elevation of cAMP. Application of various phosphodiesterase (PDE) inhibitors revealed stringent control of cAMP through PDE4>PDE2>PDE3. The beta(1)AR-mediated cAMP signals were entirely dependent on PDE4 activity, whereas beta(2)AR-mediated cAMP was under control of multiple PDE isoforms. beta(1)AR subtype-specific stimulation yielded approximately 2-fold greater cAMP responses compared with selective beta(2)-subtype stimulation, even on treatment with the nonselective PDE inhibitor 3-isobutyl-1-methylxanthine (IBMX) (DeltaFRET, 17.3+/-1.3% [beta(1)AR] versus 8.8+/-0.4% [beta(2)AR]). Treatment with pertussis toxin to inactivate G(i) did not affect cAMP production. Localized beta(1)AR stimulation generated a cAMP gradient propagating throughout the cell, whereas local beta(2)AR stimulation did not elicit marked cAMP diffusion. Our data reveal that in adult cardiac myocytes, beta(1)ARs induce far-reaching cAMP signals, whereas beta(2)AR-induced cAMP remains locally confined.
Compliance of blood sampling procedures with the CLSI H3-A6 guidelines: An observational study by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) working group for the preanalytical phase (WG-PRE).
BACKGROUND An observational study was conducted in 12 European countries by the European Federation of Clinical Chemistry and Laboratory Medicine Working Group for the Preanalytical Phase (EFLM WG-PRE) to assess the level of compliance with the CLSI H3-A6 guidelines. METHODS A structured checklist including 29 items was created to assess the compliance of European phlebotomy procedures with the CLSI H3-A6 guideline. A risk occurrence chart of individual phlebotomy steps was created from the observed error frequency and severity of harm of each guideline key issue. The severity of errors occurring during phlebotomy was graded using the risk occurrence chart. RESULTS Twelve European countries participated with a median of 33 (18-36) audits per country, and a total of 336 audits. The median error rate for the total phlebotomy procedure was 26.9 % (10.6-43.8), indicating a low overall compliance with the recommended CLSI guideline. Patient identification and test tube labelling were identified as the key guideline issues with the highest combination of probability and potential risk of harm. Administrative staff did not adhere to patient identification procedures during phlebotomy, whereas physicians did not adhere to test tube labelling policy. CONCLUSIONS The level of compliance of phlebotomy procedures with the CLSI H3-A6 guidelines in 12 European countries was found to be unacceptably low. The most critical steps in need of immediate attention in the investigated countries are patient identification and tube labelling.
Design on mixed-voltage-tolerant I/O interface with novel tracking circuits in a 0.13-µm CMOS technology
This paper presents a 1.2V/2.5V tolerant I/O buffer design that uses only thin gate-oxide devices. Novel floating N-well and gate-tracking circuits are proposed for the mixed-voltage I/O buffer to overcome the leakage-current problem that occurs in conventional CMOS I/O buffers when they are used in mixed-voltage I/O interfaces. The proposed 1.2V/2.5V tolerant I/O buffer design has been successfully verified in a 0.13-μm salicided CMOS process and can also be applied in other CMOS processes to serve different mixed-voltage I/O interfaces.
The equipoise of perioperative anticoagulation management: a Canadian cross-sectional survey
Warfarin anticoagulation is indicated for the treatment of venous thromboembolism (VTE). Anticoagulation management around the time of surgery is uncertain, and while perioperative anticoagulation guidelines exist, there is little evidence to support an optimal management strategy in patients with VTE [1]. Furthermore, rivaroxaban, an oral direct factor Xa inhibitor, is increasingly being prescribed for VTE treatment [2]. The only guides for perioperative rivaroxaban management are expert opinion reviews largely based on drug pharmacokinetic profiles [3, 4]. Given the lack of evidence for perioperative strategies, we set out to assess general practices among Canadian specialists who manage perioperative anticoagulation. In January 2013, we completed a cross-sectional Canadian survey among hematologists and general internal medicine (GIM) specialists. The survey was distributed online to 480 individuals using SurveyMonkey, with an invitation to the survey hyperlink distributed via e-mail. Specifically, we contacted the members of the Thrombosis Interest Group of Canada (TIGC), a group of 72 healthcare professionals with an interest in venous and arterial thromboembolic disease. Without the availability of a centralized database for GIM physicians, we contacted GIM division directors at Canadian academic centers, who then distributed the survey to general internists regularly involved in perioperative care. A second reminder e-mail to complete the survey was sent. The survey assessed perioperative warfarin management using four hypothetical scenarios involving a 70-year-old man with a recurrent idiopathic DVT/PE diagnosed 5 months ago, undergoing (1) an outpatient prostate biopsy, (2) a laparoscopic cholecystectomy (overnight stay), (3) an inpatient colostomy reversal for colon cancer, and (4) an inpatient right knee replacement. The management options included a temporary cessation of warfarin 5 days before the procedure with the option to (a) resume warfarin postoperatively with no low-molecular-weight heparin (LMWH) use, (b) prophylactic-dose LMWH given before and after the procedure, (c) prophylactic-dose LMWH only after the procedure, (d) therapeutic-dose LMWH before and after the procedure, or (e) therapeutic-dose LMWH only after the procedure. We included a sixth open response option so participants could describe other management strategies. Two additional questions assessed the perioperative management of rivaroxaban. The survey was pre-tested (M.K. and A.L.L.). The data were collected online using SurveyMonkey and entered into a database for analysis. 95% confidence intervals for proportions were calculated using Wilson's score method (OpenEpi Version 3) [5]. Proportions were calculated using the total number of respondents for each question. Of the 480 surveys that were distributed, 76 individuals responded (16%). There were 52 general internists (68.4%), 19 hematologists (26.3%), and 5 respondents who classified themselves as 'other' (identified as: 1 thrombosis physician, 2 cardiologists with a thrombosis interest and 2 anticoagulation clinic pharmacists). The responses to the four clinical scenarios involving perioperative warfarin management are summarized in Table 1.
Survey on Collaborative Filtering, Content-based Filtering and Hybrid Recommendation System
Recommender systems, or recommendation systems, are a subset of information filtering systems used to anticipate the 'evaluation' or 'preference' that a user would give to an item. In recent years E-commerce applications have widely adopted recommender systems. The most popular application domains are music, news, books, research articles, and products. Recommender systems are also available for business experts, jokes, restaurants, financial services, life insurance and Twitter followers. Recommender systems have developed in parallel with the web. Initially, recommender systems were based on demographic, content-based and collaborative filtering. Currently, these systems incorporate social information to enhance the quality of the recommendation process. To improve the recommendation process further, future recommender systems will use personal, implicit and local information from the Internet. This paper provides an overview of recommender systems, covering collaborative filtering, content-based filtering and the hybrid approach.
On the Cognitive Processes of Human Perception with Emotions, Motivations, and Attitudes
An interactive motivation-attitude theory is developed based on the Layered Reference Model of the Brain (LRMB) and the object-attribute-relation (OAR) model. This paper presents a rigorous model of human perceptual processes such as emotions, motivations, and attitudes. A set of mathematical models and formal cognitive processes of perception is developed. Interactions and relationships between motivation and attitude are formally described in real-time process algebra (RTPA). Applications of the mathematical models of motivations and attitudes in software engineering are demonstrated. This work is a part of the formalization of LRMB, which provides a comprehensive model for explaining the fundamental cognitive processes of the brain and their interactions. This work demonstrates that the complicated human emotional and perceptual phenomena can be rigorously modeled and formally treated based on cognitive informatics theories and denotational mathematics.
Monitoring for Precision Agriculture using Wireless Sensor Network - A Review
This paper explores the potential of WSNs in the area of agriculture in India. Targeting the sugarcane crop, a multi-parameter monitoring system is designed based on low-power ZigBee wireless communication technology for system automation and monitoring. Real-time data are collected by wireless sensor nodes and transmitted to the base station using ZigBee. Data are received, saved and displayed at the base station to monitor soil temperature, soil moisture and humidity. The data are continuously monitored at the base station, and if a reading exceeds the desired limit, a message is sent to the farmer's mobile phone through the GSM network so that controlling actions can be taken. The implementation of the system software and hardware is described, including the design of the wireless node and the implementation principle of the data transmission and communication modules. The system overcomes the limitations of wired sensor networks and has the advantages of flexible networking for monitoring equipment, convenient installation and removal of equipment, low cost, reliable nodes and high capacity.
Doing It Now or Later
We examine self-control problems — modeled as time-inconsistent, presentbiased preferences—in a model where a person must do an activity exactly once. We emphasize two distinctions: Do activities involve immediate costs or immediate rewards, and are people sophisticated or naive about future self-control problems? Naive people procrastinate immediate-cost activities and preproperate—do too soon—immediate-reward activities. Sophistication mitigates procrastination, but exacerbates preproperation. Moreover, with immediate costs, a small present bias can severely harm only naive people, whereas with immediate rewards it can severely harm only sophisticated people. Lessons for savings, addiction, and elsewhere are discussed. (JEL A12, B49, C70, D11, D60, D74, D91, E21)
A hybrid parallel Delaunay image-to-mesh conversion algorithm scalable on distributed-memory clusters
In this paper, we present a scalable three-dimensional hybrid MPI+Threads parallel Delaunay image-to-mesh conversion algorithm. A nested master-worker communication model for parallel mesh generation is implemented which simultaneously exploits process-level and thread-level parallelization: inter-node communication using MPI and inter-core communication inside one node using threads. In order to overlap the communication (task request and data movement) and computation (parallel mesh refinement), the inter-node MPI communication and intra-node local mesh refinement are separated. The master thread that initializes the MPI environment is in charge of the inter-node MPI communication, while the worker threads of each process are only responsible for the local mesh refinement within the node. We conducted a set of experiments to test the performance of the algorithm on Turing, a distributed-memory cluster at the Old Dominion University High Performance Computing Center, and observed that the granularity of coarse-level data decomposition, which affects the coarse-level concurrency, has a significant influence on the performance of the algorithm. With the proper value of granularity, the algorithm shows impressive performance potential and is scalable to 30 distributed-memory compute nodes with 20 cores each (the maximum number of nodes available for us in the experiments).
Support Vectors Machine-based identification of heart valve diseases using heart sounds
Taking into account that heart auscultation remains the dominant method for heart examination in the small health centers of the rural areas and generally in primary healthcare set-ups, the enhancement of this technique would aid significantly in the diagnosis of heart diseases. In this context, the present paper initially surveys the research that has been conducted concerning the exploitation of heart sound signals for automated and semi-automated detection of pathological heart conditions. Then it proposes an automated diagnosis system for the identification of heart valve diseases based on the Support Vector Machines (SVM) classification of heart sounds. This system performs a highly difficult diagnostic task (even for experienced physicians), much more difficult than the basic diagnosis of the existence or not of a heart valve disease (i.e. the classification of a heart sound as 'healthy' or 'having a heart valve disease'): it identifies the particular heart valve disease. The system was applied in a representative global dataset of 198 heart sound signals, which come both from healthy medical cases and from cases suffering from the four most usual heart valve diseases: aortic stenosis (AS), aortic regurgitation (AR), mitral stenosis (MS) and mitral regurgitation (MR). Initially the heart sounds were successfully categorized using a SVM classifier as normal or disease-related and then the corresponding murmurs in the unhealthy cases were classified as systolic or diastolic. For the heart sounds diagnosed as having systolic murmur we used a SVM classifier for performing a more detailed classification of them as having aortic stenosis or mitral regurgitation. Similarly for the heart sounds diagnosed as having diastolic murmur we used a SVM classifier for classifying them as having aortic regurgitation or mitral stenosis. Alternative classifiers have been applied to the same data for comparison (i.e. back-propagation neural networks, k-nearest-neighbour and naïve Bayes classifiers), however their performance for the same diagnostic problems was lower than the SVM classifiers proposed in this work.
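As a rough illustration of the staged diagnosis described above (not the authors' implementation), the following Python sketch chains binary SVM classifiers: healthy vs. murmur, systolic vs. diastolic, and then the specific valve disease. The random features, labels and kernel choice are placeholder assumptions; in practice the feature vectors would be extracted from the heart sound recordings.

# Illustrative sketch only: a cascade of SVM classifiers mirroring the staged
# diagnosis described in the abstract. All data below are placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(198, 32))          # placeholder features for 198 recordings
murmur = rng.integers(0, 2, 198)        # 0 = normal, 1 = murmur present
timing = rng.integers(0, 2, 198)        # 0 = systolic, 1 = diastolic
disease = rng.integers(0, 2, 198)       # e.g. AS vs MR, or AR vs MS

stage1 = SVC(kernel="rbf").fit(X, murmur)                              # healthy vs murmur
stage2 = SVC(kernel="rbf").fit(X[murmur == 1], timing[murmur == 1])    # systolic vs diastolic
sys_idx = (murmur == 1) & (timing == 0)
dia_idx = (murmur == 1) & (timing == 1)
stage3a = SVC(kernel="rbf").fit(X[sys_idx], disease[sys_idx])          # AS vs MR
stage3b = SVC(kernel="rbf").fit(X[dia_idx], disease[dia_idx])          # AR vs MS

def diagnose(x):
    """Run one feature vector through the cascade of classifiers."""
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return "normal"
    if stage2.predict(x)[0] == 0:
        return "aortic stenosis" if stage3a.predict(x)[0] == 0 else "mitral regurgitation"
    return "aortic regurgitation" if stage3b.predict(x)[0] == 0 else "mitral stenosis"

print(diagnose(X[0]))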
Teaching UAVs to Race Using UE4Sim
Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in the recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of data for training. In this paper, we develop a photo-realistic simulator that can afford the generation of large amounts of training data (both images rendered from the UAV camera and its controls) to teach a UAV to autonomously race through challenging tracks. We train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing. Training is done through imitation learning enabled by data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.
A Data Quality in Use model for Big Data
Beyond the hype of Big Data, something within business intelligence projects is indeed changing. This is mainly because Big Data is not only about data, but also about a complete conceptual and technological stack including raw and processed data, storage, ways of managing data, processing and analytics. A challenge that becomes even trickier is the management of data quality in Big Data environments. More than ever before, the need to assess Quality-in-Use gains importance, since the real contribution (business value) of data can only be estimated in its context of use. Although different Data Quality models exist for assessing the quality of regular data, none of them has been adapted to Big Data. To fill this gap, we propose the '3As Data Quality-in-Use model', which is composed of three Data Quality characteristics for assessing the levels of Data Quality-in-Use in Big Data projects: Contextual Adequacy, Operational Adequacy and Temporal Adequacy. The model can be integrated into any sort of Big Data project, as it is independent of any pre-conditions or technologies. The paper shows how to use the model with a working example. The model addresses the challenges of a Data Quality program aimed at Big Data. The main conclusion is that the model can be used as an appropriate way to obtain the Quality-in-Use levels of the input data of a Big Data analysis, and those levels can be understood as indicators of the trustworthiness and soundness of the results of the analysis.
Tortoise and Hares Consensus: the Meshcash Framework for Incentive-Compatible, Scalable Cryptocurrencies
We propose Meshcash, a new framework for cryptocurrency protocols that combines a novel, proof-of-work based, permissionless byzantine consensus protocol (the tortoise) that guarantees eventual consensus and irreversibility, with a possibly-faulty but quick consensus protocol (the hare). The construction is modular, allowing any suitable "hare" protocol to be plugged in. The combined protocol enjoys best-of-both-worlds properties: consensus is quick if the hare protocol succeeds, but guaranteed even if it is faulty. Unlike most existing proof-of-work based consensus protocols, our tortoise protocol does not rely on leader election (e.g., the single miner who managed to extend the longest chain). Rather, we use ideas from asynchronous byzantine agreement protocols to gradually converge to a consensus. Meshcash is designed to be race-free: there is no "race" to generate the next block, hence honestly-generated blocks are always rewarded. This property, which we define formally as a game-theoretic notion, turns out to be useful in analyzing rational miners' behavior: we prove (using a generalization of the blockchain mining games of Kiayias et al.) that race-free blockchain protocols are incentive-compatible and satisfy linearity of rewards (i.e., a party receives rewards proportional to its computational power). Because Meshcash can tolerate a high block rate regardless of network propagation delays (which will only affect latency), it allows us to lower both the variance and the expected time between blocks for honest miners; together with linearity of rewards, this makes pooled mining far less attractive. Moreover, race-free protocols scale more easily (in terms of transaction rate). This is because the race-free property implies that network propagation delays are not a factor in terms of rewards, which removes the main impediment to accommodating a larger volume of transactions. We formally prove that all of our guarantees hold in the asynchronous communication model of Pass, Seeman and shelat, and against a constant fraction of byzantine (malicious) miners, not just rational ones.
Binary artificial algae algorithm for multidimensional knapsack problems
The multidimensional knapsack problem (MKP) is a well-known NP-hard optimization problem. Various meta-heuristic methods have been dedicated to solving this problem in the literature. Recently a new meta-heuristic algorithm, called the artificial algae algorithm (AAA), was presented and has been successfully applied to various continuous optimization problems. However, due to its continuous nature, AAA cannot be applied directly to discrete problems such as MKP. In view of this, this paper proposes a binary artificial algae algorithm (BAAA) to efficiently solve MKP. The algorithm is composed of a discretization process, repair operators and an elite local search. In the discretization process, two logistic functions with different curve coefficients are studied to achieve good discretization results. Repair operators are applied to make solutions feasible and increase efficiency. Finally, an elite local search is introduced to improve the quality of solutions. To demonstrate the efficiency of the proposed algorithm, simulations and evaluations are carried out on a total of 94 benchmark problems and compared with other bio-inspired state-of-the-art algorithms from recent years, including MBPSO, BPSOTVAC, CBPSOTVAC, GADS, bAFSA, and IbAFSA. The results show the superiority of BAAA over many existing algorithms.
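The discretization and repair steps described above can be sketched as follows. This is a hedged illustration rather than the published BAAA code: the logistic coefficient, the random profit/weight data and the greedy repair heuristic are assumptions made for the example.

# Minimal sketch of a logistic discretization step plus a greedy repair
# operator for MKP feasibility; not the published BAAA implementation.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_dims = 10, 3
profit = rng.uniform(10, 100, n_items)
weight = rng.uniform(1, 20, (n_dims, n_items))
capacity = weight.sum(axis=1) * 0.5            # knapsack capacities (assumed)

def binarize(position, c=2.0):
    """Map a continuous algae position to a 0/1 vector via a logistic curve."""
    prob = 1.0 / (1.0 + np.exp(-c * position))
    return (rng.uniform(size=position.shape) < prob).astype(int)

def repair(x):
    """Drop items with the worst profit-per-weight ratio until feasible,
    then greedily add items back while the solution stays feasible."""
    ratio = profit / weight.sum(axis=0)
    for i in np.argsort(ratio):                # drop phase
        if np.all(weight @ x <= capacity):
            break
        x[i] = 0
    for i in np.argsort(-ratio):               # add phase
        if x[i] == 0:
            x[i] = 1
            if np.any(weight @ x > capacity):
                x[i] = 0
    return x

pos = rng.normal(size=n_items)                 # one continuous "algae" position
x = repair(binarize(pos))
print("feasible:", bool(np.all(weight @ x <= capacity)), "profit:", float(profit @ x))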
Multi-document English Text Summarization using Latent Semantic Analysis
In today's busy schedule, everybody expects to get information in a short but meaningful manner; long documents take more time to read, which calls for document summarization. Work has been done on single-document summarization, but multi-document summarization is increasingly needed. Existing methods for multi-document summaries, such as cluster-based, graph-based and fuzzy-based approaches, are improving. The statistical approach based on algebraic methods is still a topic of research, and it demands improvement that takes into account the limitations of Latent Semantic Analysis (LSA). First, LSA reads only the input text and does not use world knowledge; for example, it does not recognize 'women' and 'lady' as synonyms. Second, it does not consider word order; for example, 'I will deliver to you tomorrow', 'deliver I will to you' and 'tomorrow I will deliver to you' are treated alike, so different clauses may wrongly be taken to convey the same meaning in different parts of a document. Experimental results address these limitations and show that LSA with tf-idf performs better than KNN with tf-idf.
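To make the algebraic pipeline concrete, here is a minimal tf-idf plus LSA extraction sketch. The toy sentences, the number of latent topics and the sentence-selection rule are illustrative assumptions, not details taken from the paper.

# Hedged sketch of a tf-idf + LSA extractive summarizer over a small
# collection of sentences; all data and parameters are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Each entry stands in for one candidate sentence drawn from the document set.
sentences = [
    "The company reported strong quarterly earnings and raised guidance.",
    "Quarterly profits rose sharply, beating analyst expectations.",
    "The firm also announced a new product line for next year.",
    "Analysts expect the new products to drive revenue growth.",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(sentences)             # sentences x terms

svd = TruncatedSVD(n_components=2, random_state=0)
S = svd.fit_transform(X)                       # sentences x latent topics

# Pick the sentence with the largest projection on each latent topic.
summary_idx = sorted({int(np.argmax(np.abs(S[:, k]))) for k in range(S.shape[1])})
print("\n".join(sentences[i] for i in summary_idx))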
Non-inverting buck-boost power-factor-correction converter with wide input-voltage-range applications
This paper presents a non-inverting buck-boost based power-factor-correction (PFC) converter operating in the boundary-conduction-mode (BCM) for the wide input-voltage-range applications. Unlike other conventional PFC converters, the proposed non-inverting buck-boost based PFC converter has both step-up and step-down conversion functionalities to provide positive DC output-voltage. In order to reduce the turn-on switching-loss in high frequency applications, the BCM current control is employed to achieve zero current turn-on for the power switches. Besides, the relationships of the power factor versus the voltage conversion ratio between the BCM boost PFC converter and the proposed BCM non-inverting buck-boost PFC converter are also provided. Finally, the 70-watt prototype circuit of the proposed BCM buck-boost based PFC converter is built for the verification of the high frequency and wide input-voltage-range.
Exploring Topic Discriminating Power of Words in Latent Dirichlet Allocation
Latent Dirichlet Allocation (LDA) and its variants have been widely used to discover latent topics in textual documents. However, some of the topics generated by LDA may be noisy, with irrelevant words scattering across these topics. We name this kind of words topic-indiscriminate words, which tend to make topics more ambiguous and less interpretable by humans. In our work, we propose a new topic model named TWLDA, which assigns low weights to words with low topic discriminating power (ability). Our experimental results show that the proposed approach, which effectively reduces the number of topic-indiscriminate words in discovered topics, improves the effectiveness of LDA.
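One plausible way to realize the word-weighting idea is sketched below. The entropy-based score is an assumption made for illustration and may differ from the exact TWLDA weighting; the topic-word matrix is randomly generated.

# Illustrative only: score each word's topic-discriminating power from a
# fitted topic-word matrix. Words spread evenly across topics score low.
import numpy as np

rng = np.random.default_rng(0)
n_topics, vocab_size = 5, 12
phi = rng.dirichlet(np.ones(vocab_size), size=n_topics)   # topic-word probabilities (toy)

# Normalise each word's column over topics, then use (1 - normalised entropy):
# a word concentrated in one topic scores near 1, a scattered word near 0.
p_topic_given_word = phi / phi.sum(axis=0, keepdims=True)
entropy = -(p_topic_given_word * np.log(p_topic_given_word + 1e-12)).sum(axis=0)
discrim_power = 1.0 - entropy / np.log(n_topics)

low_power_words = np.argsort(discrim_power)[:3]
print("least discriminative word ids:", low_power_words)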
Designing products with added emotional value: development and application of an approach for research through design
In this paper a design approach is introduced for designing products with added emotional value. First, the approach is established, based on a theoretical framework and a non-verbal instrument to measure emotional responses. Second, the value of the design approach was assessed by applying it to the design of mobile telephones. Mobile telephones were found to be useful vehicles as people appear to have strong feelings about this product. A user study was conducted which resulted in the identification of two user groups ('trend followers' and 'security seekers') and an emotional profile for each group. Four telephones were designed which, for each group, either were intended to or were not intended to match the group's profile. Finally, these designs were evaluated with the non-verbal instrument. The results indicate that the approach is appropriate for designing products with added emotional value. The advantages and disadvantages of the approach are discussed and further research directions are indicated.
Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning
Zero-shot methods in language, vision and other domains rely on a cross-space mapping function that projects vectors from the relevant feature space (e.g., visual-feature-based image representations) to a large semantic word space (induced in an unsupervised way from corpus data), where the entities of interest (e.g., objects images depict) are labeled with the words associated to the nearest neighbours of the mapped vectors. Zero-shot cross-space mapping methods hold great promise as a way to scale up annotation tasks well beyond the labels in the training data (e.g., recognizing objects that were never seen in training). However, the current performance of cross-space mapping functions is still quite low, so that the strategy is not yet usable in practical applications. In this paper, we explore some general properties, both theoretical and empirical, of the cross-space mapping function, and we build on them to propose better methods to estimate it. In this way, we attain large improvements over the state of the art, both in cross-linguistic (word translation) and cross-modal (image labeling) zero-shot experiments.
Jackie Robinson: Race, Sports and the American Dream
With these words, President Clinton contributed to Long Island University's three-day celebration of that momentous event in American history when Robinson became the first African American to play major league baseball. This new book includes presentations from that celebration, especially chosen for their fresh perspectives and illuminating insights. A heady mix of journalism, scholarship, and memory offers a presentation that far transcends the retelling of just another sports story. Readers get a true sense of the social conditions prior to Robinson's arrival in the major leagues and the ripple effect his breakthrough had on the nation. Anecdotes enliven the story and offer more than the usual "larger than life" portrait of Robinson. A melange of contributors from the sports world, academia, and journalism, some of Robinson's contemporaries, Dodger fans, and historians of the era, all sharing a passion for baseball, reflect on issues of sports, race, and the dramatic transformation of the American social and political scene in the last fifty years. In addition to the editors, the list of authors includes Peter Golenbock, one of America's preeminent sports biographers and author of Bums: The Brooklyn Dodgers, 1947-1957; Tom Hawkins, the first African-American to star in basketball at Notre Dame and currently Vice-President for Communications of the Los Angeles Dodgers; Bill Mardo, a former writer for the New York Daily Worker; Roger Rosenblatt, teacher at the Southampton Campus of Long Island University and author of numerous articles, plays, and books; Peter Williams, author of a study of sports myth, The Sports Immortals; and Samuel Regalado, author of Viva Baseball!: Latin Major Leaguers and Their Special Hunger.
A stretchable and screen-printed electrochemical sensor for glucose determination in human perspiration.
Here we present two types of all-printable, highly stretchable, and inexpensive devices based on platinum (Pt)-decorated graphite for glucose determination in physiological fluids. Said devices are: a non-enzymatic sensor and an enzymatic biosensor, the latter showing promising results. Glucose has been quantified by measuring hydrogen peroxide (H2O2) reduction by chronoamperometry at -0.35V (vs pseudo-Ag/AgCl) using glucose oxidase immobilized on Pt-decorated graphite. The sensor performs well for the quantification of glucose in phosphate buffer solution (0.25M PBS, pH 7.0), with a linear range between 0 mM and 0.9mM, high sensitivity and selectivity, and a low limit of detection (LOD). Thus, it provides an alternative non-invasive and on-body quantification of glucose levels in human perspiration. This biosensor has been successfully applied on real human perspiration samples and results also show a significant correlation between glucose concentration in perspiration and glucose concentration in blood measured by a commercial glucose meter.
Applicability of current diagnostic algorithms in geriatric patients suspected of new, slow onset heart failure.
BACKGROUND referral for echocardiography for all geriatric outpatients suspected of heart failure (HF) is not feasible. Diagnostic algorithms could be helpful. OBJECTIVE to investigate whether available diagnostic algorithms accurately identify (older) patients (aged 70 years or over) eligible for echocardiography, with acceptable numbers of false-negatives. METHODS algorithms (European Society of Cardiology (ESC)) guideline, National Institute for Health and Clinical Excellence (NICE) guideline, multidisciplinary guideline the Netherlands (NL) and algorithm by Mant et al. were validated in 203 geriatric patients (mean age 82 ± 6 years, 30% men) suspected of new, slow onset HF. HF was adjudicated by an outcome panel. Applicability of algorithms was evaluated by calculating proportion of patients (i) referred for echocardiography, (ii) with HF among referred patients and (iii) without HF in the non-referred. RESULTS ninety-two (45%) patients had HF. Applying algorithms resulted in referral for echocardiography in 52% (normal NT-proBNP; ESC), 72% (normal ECG; ESC), 56% (NICE), 93% (NL) and 70% (Mant) of all patients, diagnosing HF in 78, 56, 76, 49 and 62% of those referred, respectively. In patients not referred for echocardiography HF was absent in 90, 82, 93, 100 and 95%, respectively. CONCLUSION the ESC NT-proBNP (<400 pg/ml)-based algorithm combines the lowest number of referrals for echocardiography (of whom 78% has HF) with a limited number (10%) of false negatives in the non-referred.
ADANA: Active Name Disambiguation
Name ambiguity has long been viewed as a challenging problem in many applications, such as scientific literature management, people search, and social network analysis. When we search a person name in these systems, many documents (e.g., papers, web pages) containing that person's name may be returned. It is hard to determine which documents are about the person we care about. Although much research has been conducted, the problem remains largely unsolved, especially with the rapid growth of the people information available on the Web. In this paper, we try to study this problem from a new perspective and propose an ADANA method for disambiguating person names via active user interactions. In ADANA, we first introduce a pairwise factor graph (PFG) model for person name disambiguation. The model is flexible and can be easily extended by incorporating various features. Based on the PFG model, we propose an active name disambiguation algorithm, aiming to improve the disambiguation performance by maximizing the utility of the user's correction. Experimental results on three different genres of data sets show that with only a few user corrections, the error rate of name disambiguation can be reduced to 3.1%. A real system has been developed based on the proposed method and is available online.
A Hand Gesture Recognition Framework and Wearable Gesture-Based Interaction Prototype for Mobile Devices
An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition. It includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. A Bayes linear classifier and an improved dynamic time-warping algorithm are utilized in the framework. In addition, a prototype system, including a wearable gesture sensing device (embedded with a three-axis accelerometer and four SEMG sensors) and an application program with the proposed algorithmic framework for a mobile phone, is developed to realize gesture-based real-time interaction. With the device worn on the forearm, the user is able to manipulate a mobile phone using 19 predefined gestures or even personalized ones. Results suggest that the developed prototype responded to each gesture instruction within 300 ms on the mobile phone, with the average accuracy of 95.0% in user-dependent testing and 89.6% in user-independent testing. Such performance during the interaction testing, along with positive user experience questionnaire feedback, demonstrates the utility of the framework.
Multiplex CRISPR/Cas9-based genome engineering from a single lentiviral vector
Engineered DNA-binding proteins that manipulate the human genome and transcriptome have enabled rapid advances in biomedical research. In particular, the RNA-guided CRISPR/Cas9 system has recently been engineered to create site-specific double-strand breaks for genome editing or to direct targeted transcriptional regulation. A unique capability of the CRISPR/Cas9 system is multiplex genome engineering by delivering a single Cas9 enzyme and two or more single guide RNAs (sgRNAs) targeted to distinct genomic sites. This approach can be used to simultaneously create multiple DNA breaks or to target multiple transcriptional activators to a single promoter for synergistic enhancement of gene induction. To address the need for uniform and sustained delivery of multiplex CRISPR/Cas9-based genome engineering tools, we developed a single lentiviral system to express a Cas9 variant, a reporter gene and up to four sgRNAs from independent RNA polymerase III promoters that are incorporated into the vector by a convenient Golden Gate cloning method. Each sgRNA is efficiently expressed and can mediate multiplex gene editing and sustained transcriptional activation in immortalized and primary human cells. This delivery system will be significant to enabling the potential of CRISPR/Cas9-based multiplex genome engineering in diverse cell types.
Critical Success Factors for ERP Implementation: A Classification
In recent years research on ERP implementation has gained prominence because ERP, in most cases, has resulted in improving the efficiency and productivity of the user company. However, the decision to implement ERP alone is not a recipe for success; much depends on the planning and quality of the implementation. Several factors are found to have an important role in the success of ERP implementation. In the ERP literature these factors are commonly described as Critical Success Factors (CSFs). Several researchers have contributed to the body of literature pertaining to these CSFs. However, the research in this area is still fragmented and unorganized. Hence, there is a need to identify and classify the relevant success factors, which is paramount for the success of ERP implementation. In this work we classify the various CSFs identified from the literature into seven broad categories. This may help practitioners and researchers gain a quick understanding of these CSFs.
Concordance of performance metrics among U.S. trauma centers caring for injured children.
BACKGROUND Several indicators of quality pediatric trauma care have been proposed including low in-hospital mortality, nonoperative management of blunt splenic injury, use of intracranial pressure monitors after severe traumatic brain injury, and craniotomy for children with severe subdural or epidural hematomas. It is not known if center-level performance is consistent in each of these metrics. We evaluated whether center performance in one area of quality predicted similar performance in other areas of quality. METHODS We reviewed patients 18 years or younger who were hospitalized with an injury Abbreviated Injury Scale (AIS) score of 2 or greater from 2010 to 2011 at trauma centers (n = 150) participating in the Trauma Quality Improvement Program. Random-intercept multilevel modeling was used to generate center-specific adjusted odds ratios for each quality indicator. We evaluated correlations between center-specific adjusted odds ratios of each quality indicator and mortality using Pearson correlation coefficients. Weighted κ statistics were used to test multiple pairwise agreements between indicators and the overall agreement across all four indicators. RESULTS Among 84,880 children identified for analysis, 3,603 had blunt splenic injury, 3,503 had severe traumatic brain injury, and 1,286 had an epidural or subdural hematoma. A negative correlation between center-specific odds of mortality and craniotomy was present (Pearson correlation coefficient, -0.18; p = 0.03). There were no significant correlations between other indicators. Although κ statistics showed slight agreement for the pairwise comparison of odds of mortality and craniotomy (0.17, 0.02-0.32), there was no agreement for all other pairwise comparisons or the overall comparison of all four indicators (-0.01, -0.07 to 0.06). CONCLUSION Our findings demonstrate a lack of concordance in center-level performance across the four pediatric trauma quality indicators we evaluated. These findings should be considered by pediatric trauma quality improvement initiatives to allow for comprehensive measurement of hospital quality as opposed to benchmarking using a single indicator.
Neural correlates of dispositional mindfulness during affect labeling.
OBJECTIVE Mindfulness is a process whereby one is aware and receptive to present moment experiences. Although mindfulness-enhancing interventions reduce pathological mental and physical health symptoms across a wide variety of conditions and diseases, the mechanisms underlying these effects remain unknown. Converging evidence from the mindfulness and neuroscience literature suggests that labeling affect may be one mechanism for these effects. METHODS Participants (n = 27) indicated trait levels of mindfulness and then completed an affect labeling task while undergoing functional magnetic resonance imaging. The labeling task consisted of matching facial expressions to appropriate affect words (affect labeling) or to gender-appropriate names (gender labeling control task). RESULTS After controlling for multiple individual difference measures, dispositional mindfulness was associated with greater widespread prefrontal cortical activation, and reduced bilateral amygdala activity during affect labeling, compared with the gender labeling control task. Further, strong negative associations were found between areas of prefrontal cortex and right amygdala responses in participants high in mindfulness but not in participants low in mindfulness. CONCLUSIONS The present findings with a dispositional measure of mindfulness suggest one potential neurocognitive mechanism for understanding how mindfulness meditation interventions reduce negative affect and improve health outcomes, showing that mindfulness is associated with enhanced prefrontal cortical regulation of affect through labeling of negative affective stimuli.
Predicting Problematic Internet Use in Men and Women: The Contributions of Psychological Distress, Coping Style, and Body Esteem
Problematic Internet use (PIU) is becoming a prevalent and serious problem among college students. Rates of PIU are higher in men, which may be due to psychological variables, such as comorbid psychological disorders and beliefs about one's body. We examined the ability of psychological distress, coping style, and body esteem to predict levels of PIU in men and women in a sample of 425 undergraduate students (46.8 percent male; mean age = 19.0, SD = 1.7). For men, phobic anxiety, wishful thinking, and overweight preoccupation were significant predictors of increased PIU. For women, depression, keeping to oneself, and decreased tension reduction were associated with increased PIU. The findings suggest that men and women may have different psychological reasons for excessive Internet use, including different types of psychological distress and coping styles. Unlike women, men may use the Internet because of weight concerns.
An industrial study on the risk of software changes
Modelling and understanding bugs has been the focus of much of the Software Engineering research today. However, organizations are interested in more than just bugs. In particular, they are more concerned about managing risk, i.e., the likelihood that a code or design change will cause a negative impact on their products and processes, regardless of whether or not it introduces a bug. In this paper, we conduct a year-long study involving more than 450 developers of a large enterprise, spanning more than 60 teams, to better understand risky changes, i.e., changes for which developers believe that additional attention is needed in the form of careful code or design reviewing and/or more testing. Our findings show that different developers and different teams have their own criteria for determining risky changes. Using factors extracted from the changes and the history of the files modified by the changes, we are able to accurately identify risky changes with a recall of more than 67%, and a precision improvement of 87% (using developer specific models) and 37% (using team specific models), over a random model. We find that the number of lines and chunks of code added by the change, the bugginess of the files being changed, the number of bug reports linked to a change and the developer experience are the best indicators of change risk. In addition, we find that when a change has many related changes, the reliability of developers in marking risky changes is negatively affected. Our findings and models are being used today in practice to manage the risk of software projects.
Strength and power training: physiological mechanisms of adaptation.
Adaptations in resistance training are focused on the development and maintenance of the neuromuscular unit needed for force production [97, 136]. The effects of training, when using this system, affect many other physiological systems of the body (e.g., the connective tissue, cardiovascular, and endocrine systems) [16, 18, 37, 77, 83]. Training programs are highly specific to the types of adaptation that occur. Activation of specific patterns of motor units in training dictate what tissue and how other physiological systems will be affected by the exercise training. The time course of the development of the neuromuscular system appears to be dominated in the early phase by neural factors with associated changes in the types of contractile proteins. In the later adaptation phase, muscle protein increases, and the contractile unit begins to contribute the most to the changes in performance capabilities. A host of other factors can affect the adaptations, such as functional capabilities of the individual, age, nutritional status, and behavioral factors (e.g., sleep and health habits). Optimal adaptation appears to be related to the use of specific resistance training programs to meet individual training objectives.
Variational Reasoning for Question Answering With Knowledge Graph
Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.
Parallel Concatenated Trellis Coded Modulation
In this paper, we propose a new solution to parallel concatenation of trellis codes with multilevel amplitude/phase modulations and a suitable bit-by-bit iterative decoding structure. Examples are given for throughputs of 2 and 4 bits/sec/Hz with 8PSK, 16QAM, and 64QAM modulations. For the parallel concatenated trellis codes in the examples, rate 2/3 and 4/5, 8- and 16-state binary convolutional codes with Ungerboeck mapping by set partitioning (natural mapping), a reordered mapping, and Gray code mapping are used. The performance of these codes is within 1 dB of the Shannon limit at a bit error probability of 10^-7 for a given throughput, which outperforms all codes reported in the past for the same throughput.
A Generative Approach to Audio-Visual Person Tracking
This paper focuses on the integration of acoustic and visual information for people tracking. The system presented relies on a probabilistic framework within which information from multiple sources is integrated at an intermediate stage. An advantage of the proposed method is that it uses a generative approach, which supports easy and robust integration of multi-source information by means of sampled projection instead of triangulation. The system described has been developed within the research activities of the EU-funded CHIL project. Experimental results from the CLEAR evaluation workshop are reported.
TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service
Machine learning methods are widely used for a variety of prediction problems. Prediction as a service is a paradigm in which service providers with technological expertise and computational resources may perform predictions for clients. However, data privacy severely restricts the applicability of such services, unless measures to keep client data private (even from the service provider) are designed. Equally important is to minimize the amount of computation and communication required between client and server. Fully homomorphic encryption offers a possible way out, whereby clients may encrypt their data and the server may perform arithmetic computations on it. The main drawback of using fully homomorphic encryption is the amount of time required to evaluate large machine learning models on encrypted data. We combine ideas from the machine learning literature, particularly work on binarization and sparsification of neural networks, together with algorithmic tools to speed up and parallelize computation on encrypted data.
Discriminative feature learning for efficient RGB-D object recognition
This paper presents an efficient approach to recognize objects captured with an RGB-D sensor. The proposed approach uses a Bag-of-Words (BOW) model to learn feature representations from raw RGB-D point clouds in a weakly supervised manner. To this end, we introduce a novel method based on randomized clustering trees to learn visual vocabularies which are fast to compute and more discriminative compared to the vocabularies generated by classical methods such as k-means. We show that, when combined with standard spatial pooling strategies, our proposed approach yields a powerful feature representation for RGB-D object recognition. Our extensive experimental evaluation on two challenging RGB-D object datasets and live video streams from Kinect shows that our learned features result in superior object recognition accuracies compared with the state-of-the-art methods.
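A minimal bag-of-words encoding sketch is given below. It substitutes k-means for the randomized clustering trees proposed in the paper and omits spatial pooling, so it should be read only as an outline of the pipeline, with all data and parameters assumed.

# Rough sketch of a bag-of-words pipeline for local descriptors, using
# k-means as a stand-in for the randomised clustering-tree vocabulary.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Assume local descriptors (e.g. per-patch RGB-D features) were already extracted.
train_descriptors = rng.normal(size=(2000, 16))

vocab = KMeans(n_clusters=50, n_init=10, random_state=0).fit(train_descriptors)

def encode(descriptors):
    """Histogram of visual-word assignments for one object (no spatial pooling)."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)

object_descriptors = rng.normal(size=(120, 16))
print(encode(object_descriptors).shape)        # one 50-dimensional feature vector per object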
Generic Advice: On the Combination of AOP with Generative Programming in AspectC++
Besides object-orientation, generic types or templates and aspect-oriented programming (AOP) gain increasing popularity as they provide additional dimensions of decomposition. Most modern programming languages like Ada, Eiffel, and C++ already have built-in support for templates. For Java and C# similar extensions will be available in the near future. Even though promising, the combination of aspects with generic and generative programming is still a widely unexplored field. This paper presents our extensions to the AspectC++ language, an aspect-oriented C++ derivate. By these extensions aspects can now affect generic code and exploit the potentials of generic code and template metaprogramming in their implementations. This allows aspects to inject template metaprograms transparently into the component code. A case study demonstrates that this feature enables the development of highly expressive and efficient generic aspect implementations in AspectC++. A discussion whether these concepts are applicable in the context of other aspect-oriented language extensions like AspectJ rounds up our contribution.
Combining Content-based and Collaborative Filters in an Online Newspaper
The explosive growth of mailing lists, Web sites and Usenet news demands effective filtering solutions. Collaborative filtering combines the informed opinions of humans to make personalized, accurate predictions. Content-based filtering uses the speed of computers to make complete, fast predictions. In this work, we present a new filtering approach that combines the coverage and speed of content-filters with the depth of collaborative filtering. We apply our research approach to an online newspaper, an as yet untapped opportunity for filters useful to the widespread news reading populace. We present the design of our filtering system and describe the results from preliminary experiments that suggest merits to our approach.
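A minimal sketch of blending a content-based score with a collaborative score for one reader is shown below. The toy ratings, item profiles and the equal 50/50 weighting are assumptions made for illustration, not the system described in the paper.

# Hedged sketch of a hybrid recommender: blend a content-based similarity
# score with a simple collaborative (item-popularity) score. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_terms = 20, 8, 30

ratings = rng.integers(0, 6, (n_users, n_items)).astype(float)   # 0 means unrated
item_profiles = rng.random((n_items, n_terms))                   # e.g. tf-idf vectors of articles

def content_scores(user):
    """Cosine similarity of each item to the user's mean liked-item profile."""
    liked = item_profiles[ratings[user] >= 4]
    if len(liked) == 0:
        return np.zeros(n_items)
    profile = liked.mean(axis=0)
    sims = item_profiles @ profile
    norms = np.linalg.norm(item_profiles, axis=1) * np.linalg.norm(profile)
    return sims / (norms + 1e-12)

def collaborative_scores(user):
    """Mean rating of each item among the other users (a popularity baseline)."""
    others = np.delete(ratings, user, axis=0)
    counts = (others > 0).sum(axis=0)
    return np.where(counts > 0, others.sum(axis=0) / np.maximum(counts, 1), 0.0)

user = 0
blended = 0.5 * content_scores(user) + 0.5 * collaborative_scores(user) / 5.0
print("top blended item for user 0:", int(np.argmax(blended)))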
Integration of solid state transformer with DC microgrid system
This paper investigates a Solid State Transformer (SST) based DC microgrid architecture, addressing the design and control of the multiple SST power conversion stages and the power management strategy required for its integration with other microgrid elements, such as storage devices and local distributed generation. The advantages of a SST in relation to conventional low frequency transformers are commonly listed as a substantial reduction of volume and weight, fault isolation capability, voltage regulation, harmonic filtering, reactive power compensation and power factor correction. The SST also constitutes the required infrastructure that will enable DC power distribution to homes and commercial buildings in the near future and provides a more efficient way to integrate storage devices and distributed generation into the electrical grid. In this sense the SST behaves as an energy router, which represents a key element in an intelligent power system. The behavior of the proposed architecture for different operating conditions and disturbances will be assessed through computational simulations in the software MATLAB/Simulink.
Proof-nets: the parallel syntax for proof theory
The paper is mainly concerned with the extension of proof-nets to additives, for which the best known solution is presented. It proposes two cut-elimination procedures, the lazy one being in linear time. The solution is shown to be compatible with quantifiers, and the structural rules of exponentials are also accommodated. Traditional proof-theory deals with cut-elimination; these results are usually obtained by means of sequent calculi, with the consequence that 75% of a cut-elimination proof is devoted to endless commutations of rules. It is hard to be happy with this, mainly because: (i) the structure of the proof is blurred by all these cases; (ii) whole forests have been destroyed in order to print the same routine lemmas; (iii) this is not extremely elegant. However old-fashioned proof-theory, which is concerned with the ritual question "is-that-theory-consistent?", never really cared. The situation changed when subtle algorithmic aspects of cut-elimination became prominent: typically the determinism of cut-elimination, its actual complexity, its implementation cannot be handled in terms of sequent calculus without paying a heavy price. Natural deduction could easily fix the main drawbacks of cut-elimination, but this improvement was limited to the negative fragment of intuitionistic logic. The situation changed in 1986 with the invention of linear logic: proof-nets were introduced in [G86] as a new kind of syntax for linear logic, in order to
Suicidal decapitation by guillotine: case report.
A recently widowed man constructed a guillotine in the entrance to his cellar, having previously announced his intention to decapitate himself. A neighbor who saw the device from her house alerted the police. The deceased was found completely decapitated, still holding a pair of pliers that he had used to activate the mechanism. The findings of the resulting investigation are described, and the mechanism of suicidal decapitation is reviewed.
Bitcoin - Asset or Currency? Revealing Users' Hidden Intentions
Digital currencies are a globally spreading phenomenon that is frequently and prominently addressed by media, venture capitalists, and financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic currency into a digital currency? In particular, this paper aims at giving empirical insights on whether users' interest in digital currencies is driven by their appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.
Loop gain analysis and development of high-speed high-accuracy current sensors for switching converters
Loop gain analysis for performance evaluation of current sensors for switching converters is presented. The MOS transistor scaling technique is reviewed and employed in developing high-speed, high-accuracy current sensors with offset-current cancellation. Using a standard 0.35 µm CMOS process, an integrated full-range inductor current sensor for a boost converter is designed. It operates at a supply voltage of 1.5 V with a DC loop gain of 38 dB and a unity-gain frequency of 10 MHz. The sensor works properly at a converter switching frequency of 500 kHz.
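As a rough, back-of-the-envelope reading of those figures (using the standard feedback-theory relation between loop gain and closed-loop error, which the abstract itself does not spell out), a 38 dB DC loop gain is about 79 V/V, suggesting a static sensing error on the order of 1/(1+T) ≈ 1.2%, and the 10 MHz unity-gain frequency leaves roughly a 20x margin over the 500 kHz switching frequency:

```python
import math

def db_to_linear(gain_db: float) -> float:
    """Convert a gain expressed in dB to a linear voltage/current ratio."""
    return 10 ** (gain_db / 20.0)

# Figures quoted in the abstract
dc_loop_gain_db = 38.0          # DC loop gain
unity_gain_freq_hz = 10e6       # loop unity-gain frequency
switching_freq_hz = 500e3       # converter switching frequency

T0 = db_to_linear(dc_loop_gain_db)      # ~79.4 linear
static_error = 1.0 / (1.0 + T0)         # classic unity-feedback result, ~1.2%
# Ratio of loop bandwidth to switching frequency gives a feel for how fast the
# sensor can track the switched inductor current (assumes a single-pole loop).
bandwidth_margin = unity_gain_freq_hz / switching_freq_hz   # = 20

print(f"DC loop gain: {T0:.1f} (linear)")
print(f"Approximate static sensing error: {static_error * 100:.2f} %")
print(f"Unity-gain frequency / switching frequency: {bandwidth_margin:.0f}x")
```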
Full Flow: Optical Flow Estimation By Global Optimization over Regular Grids
We present a global optimization approach to optical flow estimation. The approach optimizes a classical optical flow objective over the full space of mappings between discrete grids. No descriptor matching is used. The highly regular structure of the space of mappings enables optimizations that reduce the computational complexity of the algorithm's inner loop from quadratic to linear and support efficient matching of tens of thousands of nodes to tens of thousands of displacements. We show that one-shot global optimization of a classical Horn-Schunck-type objective over regular grids at a single resolution is sufficient to initialize continuous interpolation and achieve state-of-the-art performance on challenging modern benchmarks.
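For intuition about the size of that search space, the brute-force sketch below builds the per-pixel data cost over a regular grid of integer displacements; it is only an illustration of the quadratic-size cost volume that the paper's optimizations avoid traversing naively, not the actual solver (which also includes a regularization term and the linear-time inner loop).

```python
import numpy as np

def data_cost_volume(img1: np.ndarray, img2: np.ndarray, max_disp: int) -> np.ndarray:
    """Brute-force data term: for every pixel and every integer displacement
    (u, v) in [-max_disp, max_disp]^2, the absolute intensity difference between
    img1[y, x] and img2[y + v, x + u].  Displacements that fall outside the
    image get an 'infinite' cost.  img1 and img2 are float arrays of equal shape."""
    h, w = img1.shape
    disps = range(-max_disp, max_disp + 1)
    cost = np.full((h, w, len(disps), len(disps)), np.inf, dtype=np.float32)
    for i, v in enumerate(disps):
        for j, u in enumerate(disps):
            shifted = np.full_like(img2, np.nan)
            ys = slice(max(0, -v), min(h, h - v))
            xs = slice(max(0, -u), min(w, w - u))
            shifted[ys, xs] = img2[max(0, v):h + min(0, v), max(0, u):w + min(0, u)]
            diff = np.abs(img1 - shifted)
            cost[..., i, j] = np.where(np.isnan(diff), np.inf, diff)
    return cost

# Tiny usage example on random "images"
a = np.random.rand(32, 32).astype(np.float32)
b = np.roll(a, shift=(2, -1), axis=(0, 1))
vol = data_cost_volume(a, b, max_disp=4)
print(vol.shape)   # (32, 32, 9, 9): every pixel scored against 81 displacements
```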
A cognitive model of drug urges and drug-use behavior: role of automatic and nonautomatic processes.
Contemporary urge models assume that urges are necessary but not sufficient for the production of drug use in ongoing addicts, are responsible for the initiation of relapse in abstinent addicts, and can be indexed across 3 classes of behavior: verbal report, overt behavior, and somatovisceral response. A review of available data does not provide strong support for these assumptions. An alternative cognitive model of drug use and drug urges is proposed that hypothesizes that drug use in the addict is controlled by automatized action schemata. Urges are conceptualized as responses supported by nonautomatic cognitive processes activated in parallel with drug-use action schemata either in support of the schema or in support of attempts to block the execution of the schema. The implications of this model for the assessment of urge responding and drug-use behavior are presented.
Overview of deep learning in medical imaging.
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
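A minimal sketch of what "ML with image input" means in practice: a tiny convolutional network that maps raw pixel patches directly to class scores, with no separate segmentation or feature-extraction stage. The architecture, layer sizes, and input resolution below are arbitrary illustrative choices (written with PyTorch), not the MTANN or any specific network from the works reviewed.

```python
import torch
import torch.nn as nn

class TinyMedicalCNN(nn.Module):
    """Illustrative image-input classifier: raw pixels in, class scores out.
    No hand-crafted feature extraction precedes it, which is the point the
    overview makes about deep learning."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# A batch of 4 single-channel 64x64 patches (e.g. grayscale image crops).
scores = TinyMedicalCNN()(torch.randn(4, 1, 64, 64))
print(scores.shape)  # torch.Size([4, 2])
```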
Pancultural self-enhancement.
The culture movement challenged the universality of the self-enhancement motive by proposing that the motive is pervasive in individualistic cultures (the West) but absent in collectivistic cultures (the East). The present research posited that Westerners and Easterners use different tactics to achieve the same goal: positive self-regard. Study 1 tested participants from differing cultural backgrounds (the United States vs. Japan), and Study 2 tested participants of differing self-construals (independent vs. interdependent). Americans and independents self-enhanced on individualistic attributes, whereas Japanese and interdependents self-enhanced on collectivistic attributes. Independents regarded individualistic attributes, whereas interdependents regarded collectivistic attributes, as personally important. Attribute importance mediated self-enhancement. Regardless of cultural background or self-construal, people self-enhance on personally important dimensions. Self-enhancement is a universal human motive.
Socio-psychological attitudes as a factor determining the moral consciousness of the personality
The article deals with the methodological prerequisites for studying socio-psychological attitudes in the context of the research carried out by the Georgian school under the direction of D.N. Uznadze. The author shows the interrelationship between socio-psychological attitudes, in particular the communicative attitude in relationships, and the moral consciousness of the personality.
Detection of bladder cancer using a point-of-care proteomic assay.
CONTEXT A combination of methods is used for diagnosis of bladder cancer because no single procedure detects all malignancies. Urine tests are frequently part of an evaluation, but have either been nonspecific for cancer or required specialized analysis at a laboratory. OBJECTIVE To investigate whether a point-of-care proteomic test that measures the nuclear matrix protein NMP22 in voided urine could enhance detection of malignancy in patients with risk factors or symptoms of bladder cancer. DESIGN, SETTING, AND PATIENTS Twenty-three academic, private practice, and veterans' facilities in 10 states prospectively enrolled consecutive patients from September 2001 to May 2002. Participants included 1331 patients at elevated risk for bladder cancer due to factors such as history of smoking or symptoms including hematuria and dysuria. Patients at risk for malignancy of the urinary tract provided a voided urine sample for analysis of NMP22 protein and cytology prior to cystoscopy. MAIN OUTCOME MEASURES The diagnosis of bladder cancer, based on cystoscopy with biopsy, was accepted as the reference standard. The performance of the NMP22 test was compared with voided urine cytology as an aid to cancer detection. Testing for the NMP22 tumor marker was conducted in a blinded manner. RESULTS Bladder cancer was diagnosed in 79 patients. The NMP22 assay was positive in 44 of 79 patients with cancer (sensitivity, 55.7%; 95% confidence interval [CI], 44.1%-66.7%), whereas cytology test results were positive in 12 of 76 patients (sensitivity, 15.8%; 95% CI, 7.6%-24.0%). The specificity of the NMP22 assay was 85.7% (95% CI, 83.8%-87.6%) compared with 99.2% (95% CI, 98.7%-99.7%) for cytology. The proteomic marker detected 4 cancers that were not visualized during initial endoscopy, including 3 that were muscle invasive and 1 carcinoma in situ. CONCLUSION The noninvasive point-of-care assay for elevated urinary NMP22 protein can increase the accuracy of cystoscopy, with test results available during the patient visit.
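The reported sensitivities can be reproduced from the counts given in the abstract; the sketch below uses a normal-approximation (Wald) interval, which matches the cytology CI to the stated precision and the NMP22 CI to within rounding (the study may have used a slightly different interval method).

```python
import math

def proportion_with_wald_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, p - half, p + half

# NMP22 assay: 44 of 79 cancers detected
# (abstract reports 55.7%, 95% CI 44.1%-66.7%)
sens, lo, hi = proportion_with_wald_ci(44, 79)
print(f"NMP22 sensitivity: {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")

# Voided urine cytology: 12 of 76 cancers detected
# (abstract reports 15.8%, 95% CI 7.6%-24.0%)
sens, lo, hi = proportion_with_wald_ci(12, 76)
print(f"Cytology sensitivity: {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```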
Personalization features on business-to-consumer e-commerce: Review and future directions
Personalization in e-commerce has the potential to increase sales, customers' purchase intention and acquisition, as well as to improve customer interaction. It is understood that personalization is a controllable variable for successful e-commerce. However, previous research on personalization has proposed diverse concepts from numerous fields, leading to biased constructs for e-commerce personalization development and evaluation in academia and industry. To address this gap, a study was conducted to unravel personalization features from various perspectives. A Kitchenham systematic literature review was used to discover personalization research in Q1/Q2 journals and top conference papers published between 2012 and 2017, and a theory-driven approach was administered to extract 21 selected papers. This process classifies personalization features into four dimensions, architectural, relational, instrumental, and commercial, based on three characteristics: objective, method, and user model. The results show that instrumental and commercial personalization are the most popular dimensions in the academic literature. However, relational personalization has been consistently rising as a new topic of interest since the massive growth of social media data.
94%-Effective Policies for a Two-Stage Serial Inventory System with Stochastic Demand
A two-stage inventory system is considered where Poisson demand occurs at Stage 1, and Stage 1 replenishes its inventory from Stage 2, which in turn orders from an outside supplier with unlimited stock. Each shipment, either to Stage 2 or to Stage 1, incurs a fixed setup cost. Under the assumption that the supply leadtime at Stage 2 is zero, we characterize a simple heuristic policy whose long-run average cost is guaranteed to be within 6% of optimality, i.e., a 94%-effective policy. The paper also provides heuristic policies for more general inventory systems and reports computational results. (Multi-Echelon Inventory; Stochastic Demand; Worst Case Analysis; Heuristic Policy)
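A crude Monte Carlo sketch of the system described above, useful for comparing the long-run average cost of candidate lot sizes; the specific reorder rule, cost parameters, and the zero-leadtime-everywhere simplification are illustrative assumptions and are not the paper's 94%-effective policy.

```python
import numpy as np

def simulate_two_stage(Q1=5, Q2=20, demand_rate=1.0, K1=10.0, K2=50.0,
                       h1=2.0, h2=1.0, periods=100_000, seed=0):
    """Discrete-time simulation of the serial system in the abstract:
    Poisson demand at Stage 1; Stage 1 replenishes from Stage 2 in lots of Q1;
    Stage 2 replenishes from an unlimited outside supplier in lots of Q2 with
    zero leadtime.  K1/K2 are fixed setup costs per shipment, h1/h2 per-unit,
    per-period holding costs.  All leadtimes are taken as zero -- an
    illustrative simplification, not the paper's exact model."""
    assert Q2 >= Q1, "one supplier order must cover a Stage-1 shipment"
    rng = np.random.default_rng(seed)
    inv1, inv2, cost = Q1, Q2, 0.0
    for d in rng.poisson(demand_rate, size=periods):
        inv1 -= d
        while inv1 <= 0:                 # Stage 1 reorders from Stage 2
            if inv2 < Q1:                # Stage 2 reorders from the supplier
                inv2 += Q2
                cost += K2
            inv2 -= Q1
            inv1 += Q1
            cost += K1
        cost += h1 * inv1 + h2 * inv2    # end-of-period holding cost
    return cost / periods                # long-run average cost per period

# Compare a few candidate lot-size pairs under the same demand stream.
for Q1, Q2 in [(3, 12), (5, 20), (8, 24)]:
    print(f"Q1={Q1}, Q2={Q2}: avg cost/period = {simulate_two_stage(Q1, Q2):.2f}")
```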
Shri Shri Sadguru Sanga শ্রী শ্রী সদগুরু সঙ্গ
Internal Capital Markets and Firm-Level Compensation Incentives for Division Managers
Do multi-divisional firms structure compensation contracts for division managers to mitigate information and incentive problems in their internal capital markets? Using Compustat Segment financial data and compensation data from a proprietary survey, I find evidence that compensation and investment incentives are substitutes: firms providing a stronger link to firm performance in incentive compensation for division managers also provide weaker investment incentives through the capital budgeting process. Specifically, as the proportion of incentive pay for division managers that is based on firm performance increases, division investment is less responsive to division profitability. While these findings may be consistent with other explanations, they are generally consistent with a model of influence activities by division managers and the implied relative weights placed on imperfect, objective signals (i.e., accounting measures) versus distortable, subjective signals (i.e., manager recommendations) in inter-divisional capital allocation decisions.
Extreme reductions: contraction of disyllables into monosyllables in Taiwan Mandarin
This study investigates a severe form of segmental reduction known as contraction. In Taiwan Mandarin, a disyllabic word or phrase is often contracted into a monosyllabic unit in conversational speech, just as “do not” is often contracted into “don’t” in English. A systematic experiment was conducted to explore the underlying mechanism of such contraction. Preliminary results show evidence that contraction is not a categorical shift but a gradient undershoot of the articulatory target as a result of time pressure. Moreover, contraction seems to occur only beyond a certain duration threshold. These findings may further our understanding of the relation between duration and segmental reduction.