title (string, 8–300 chars) | abstract (string, 0–10k chars) |
---|---|
On-the-fly Table Generation | Many information needs revolve around entities, which would be better answered by summarizing results in a tabular format, rather than presenting them as a ranked list. Unlike previous work, which is limited to retrieving existing tables, we aim to answer queries by automatically compiling a table in response to a query. We introduce and address the task of on-the-fly table generation: given a query, generate a relational table that contains relevant entities (as rows) along with their key properties (as columns). This problem is decomposed into three specific subtasks: (i) core column entity ranking, (ii) schema determination, and (iii) value lookup. We employ a feature-based approach for entity ranking and schema determination, combining deep semantic features with task-specific signals. We further show that these two subtasks are not independent of each other and can assist each other in an iterative manner. For value lookup, we combine information from existing tables and a knowledge base. Using two sets of entity-oriented queries, we evaluate our approach both on the component level and on the end-to-end table generation task. |
The generalized railroad crossing: a case study in formal verification of real-time systems | A new solution to the generalized railroad crossing problem, based on timed automata, invariants and simulation mappings, is presented and evaluated. The solution shows formally the correspondence between four system descriptions: an axiomatic specification, an operational specification, a discrete system implementation, and a system implementation that works with a continuous gate model. |
An Efficient Algorithm for Learning Distances that Obey the Triangle Inequality | Semi-supervised clustering improves performance using constraints that indicate if two images belong to the same category or not. Success depends on how effectively these constraints can be propagated to the unsupervised data. Many algorithms use these constraints to learn Euclidean distances in a vector space. However, distances between images are often computed using classifiers or combinatorial algorithms that make distance learning difficult. In such a setting, we propose to use the triangle inequality to propagate constraints to unsupervised data. First, we formulate distance learning as a metric nearness problem where a brute-force Quadratic Program (QP) is used to modify the distances such that the total change in distances is minimized but the final distances obey the triangle inequality. Then we propose a much faster version of the QP that enforces only a subset of the inequalities and can be applied to real world clustering datasets. We show experimentally that this efficient QP produces stronger clustering results on face, leaf and video image datasets, outperforming state-of-the-art methods for constrained clustering. To gain insight into the effectiveness of this algorithm, we analyze a special case of the semi-supervised clustering problem, and show that the subset of constraints that we sample still preserves key properties of the distances that would be produced by enforcing all constraints. |
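For readers unfamiliar with the metric nearness formulation above, the following is a minimal Python sketch of the brute-force QP (illustrative only: the cvxpy formulation, function name, and toy matrix are assumptions rather than the authors' code, and the paper's faster variant enforces only a sampled subset of the constraints):

```python
import numpy as np
import cvxpy as cp

def metric_nearness(D):
    """Return the closest symmetric matrix to D whose entries obey all triangle inequalities."""
    n = D.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [cp.diag(X) == 0, X >= 0]
    # Brute-force version: one triangle-inequality constraint per triple (i, j, k).
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(n):
                if k != i and k != j:
                    constraints.append(X[i, j] <= X[i, k] + X[k, j])
    cp.Problem(cp.Minimize(cp.sum_squares(X - D)), constraints).solve()
    return X.value

# Toy example: entry D[0, 2] = 5 violates the triangle inequality via node 1.
D = np.array([[0., 1., 5., 2.],
              [1., 0., 1., 2.],
              [5., 1., 0., 2.],
              [2., 2., 2., 0.]])
print(np.round(metric_nearness(D), 2))
```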
LL-MAC: A low latency MAC protocol for wireless self-organised networks | This paper proposes LL-MAC, a medium access control (MAC) protocol specifically designed for wireless sensor network applications that require low data latency. Wireless sensor networks use battery-operated computing and sensing devices and their main application is environmental monitoring. In order to achieve such requirements, the whole network must work autonomously and collaborate in periodically sensing the surrounding environment and sending data to the sink. LL-MAC uses novel techniques to offer a low end-to-end data transmission latency from the furthest away nodes to the sink in a unique working cycle while offering a low duty cycle operation in a multi-hop fashion. Key features of this protocol include a synchronised sleep schedule to reduce control overhead along with a mechanism to avoid overhearing unnecessary traffic and elude collisions. Finally, control interval adjustment enables power-aware topology management in changing environments. |
Cognitive Load Estimation in the Wild | Cognitive load has been shown, over hundreds of validated studies, to be an important variable for understanding human performance. However, establishing practical, non-contact approaches for automated estimation of cognitive load under real-world conditions is far from a solved problem. Toward the goal of designing such a system, we propose two novel vision-based methods for cognitive load estimation, and evaluate them on a large-scale dataset collected under real-world driving conditions. Cognitive load is defined by which of 3 levels of a validated reference task the observed subject was performing. On this 3-class problem, our best proposed method of using 3D convolutional neural networks achieves 86.1% accuracy at predicting task-induced cognitive load in a sample of 92 subjects from video alone. This work uses the driving context as a training and evaluation dataset, but the trained network is not constrained to the driving environment as it requires no calibration and makes no assumptions about the subject's visual appearance, activity, head pose, scale, and perspective. |
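As a rough illustration of the kind of model the abstract refers to, here is a minimal 3D-CNN video classifier in PyTorch; the layer sizes, clip shape, and class count are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Toy 3D CNN mapping a video clip (C x T x H x W) to one of 3 cognitive-load levels."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips):                 # clips: (batch, 3, frames, H, W)
        x = self.features(clips).flatten(1)
        return self.classifier(x)

model = Tiny3DCNN()
clips = torch.randn(2, 3, 16, 64, 64)         # two 16-frame RGB clips
logits = model(clips)                          # (2, 3) scores over the 3 load levels
```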
Anti-inflammatory and radical scavenge effects of Arctium lappa. | The effects of Arctium lappa L. (root) on anti-inflammatory and free radical scavenger activity were investigated. Subcutaneous administration of A. lappa crude extract significantly decreased carrageenan-induced rat paw edema. When simultaneously treated with CCl4, it produced pronounced activities against CCl4-induced acute liver damage. The free radical scavenging activity of its crude extract was also examined by means of an electron spin resonance (ESR) spectrometer. The IC50 of A. lappa extract on superoxide and hydroxyl radical scavenger activity was 2.06 mg/ml and 11.8 mg/ml, respectively. These findings suggest that Arctium lappa possesses free radical scavenging activity. The inhibitory effects on carrageenan-induced paw edema and CCl4-induced hepatotoxicity could be due to the scavenging effect of A. lappa. |
Improving Distributional Similarity with Lessons Learned from Word Embeddings | Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others. |
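One example of the transferable design choices the paper discusses is context-distribution smoothing, applied below to a count-based PPMI matrix (an illustrative toy sketch; the counts and the helper name are assumptions):

```python
import numpy as np

def ppmi(counts, cds=0.75):
    """Positive PMI with context-distribution smoothing (context counts raised to `cds`)."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_w = counts.sum(axis=1) / total           # word marginals
    p_c = counts.sum(axis=0) ** cds            # smoothed context marginals
    p_c = p_c / p_c.sum()
    pmi = np.log((counts / total) / np.outer(p_w, p_c) + 1e-12)
    return np.maximum(pmi, 0.0)

cooc = np.array([[10, 2, 0], [3, 8, 1], [0, 1, 6]])   # toy word-context counts
print(ppmi(cooc).round(2))
```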
Measurement of serum total and free prostate-specific antigen in women with colorectal carcinoma | We investigated the diagnostic value and the relationship with clinicopathological features of total and free prostate-specific antigen by measuring the concentrations of these markers in the sera of 75 women with colorectal carcinoma and in 30 healthy women. Measurements were performed by immunoradiometric assay which utilizes monoclonal and polyclonal anti-prostate-specific antigen antibodies; the lowest detection level for both markers was 0.01 ng ml−1. Free prostate-specific antigen levels were significantly higher in women with colorectal carcinoma than in healthy women (P=0.006). The percentage of free prostate-specific antigen predominant (free prostate-specific antigen/total prostate-specific antigen >50%) subjects was 20% in colorectal carcinoma patients and 3.3% in healthy women (P=0.035). Cut-off values were 0.34 ng ml−1 for total prostate-specific antigen and 0.01 ng ml−1 for free prostate-specific antigen. In women with colorectal carcinoma, total prostate-specific antigen positivity was 20% and free prostate-specific antigen positivity was 34.6%. When compared to negatives, total prostate-specific antigen positive patients had a lower percentage of well-differentiated (P=0.056) and early stage (stages I and II) tumours (P=0.070). However, patients with predominant free prostate-specific antigen had a higher percentage of well-differentiated (P=0.014) and early stage tumours (P=0.090) than patients with predominant bound prostate-specific antigen. In conclusion, although the sensitivity of free prostate-specific antigen predominancy in distinguishing women with colorectal carcinoma from healthy women is low (20%), its specificity is high (96.7%). Free prostate-specific antigen predominancy tends to be present in less aggressive tumours. These findings may indicate the clinical significance of preoperative measurement of serum total and free prostate-specific antigen in women with colorectal carcinoma. |
A design method with iron powder core for a claw pole type half-wave rectified variable field flux motor | We have proposed a claw pole type half-wave rectified variable field flux motor (CP-HVFM) with a special self-excitation method. The claw pole rotor needs a 3D magnetic path core. This paper reports an FEM analysis method that uses experimental BH and loss data of the iron powder core, and presents a designed analysis model together with calculated characteristics such as torque, efficiency and loss. |
A millennium perspective on the contribution of global fallout radionuclides to ocean science. | Five decades ago, radionuclides began to enter the ocean from the fallout from atmospheric nuclear weapons tests. The start of the 21st century is an appropriate vantage point in time to reflect on the fate of this unique suite of manmade radionuclides--of which more than two-thirds arrived at the surface of the oceans of the planet. During these five decades much has been learned of the behavior and fate of these radionuclides and, through their use as unique tracers, of how they have contributed to the growth of basic knowledge of complex oceanic physical and biogeochemical processes. Some of the highlights of the ways in which fallout radionuclides have given new insights into these processes are reviewed in the historical context of technological and basic ocean science developments over this period. The review addresses major processes involved, such as physical dispersion and mixing, particle association and transport of reactive nuclides, biological interactions, and mixing and burial within ocean sediments. These processes occur over a range of scales ranging from local to global. Finally, an account is given of the present spatial distribution within the oceans of the various components of the fallout radionuclide suite. |
DISTRIBUTION EQUILIBRIUM OF LANTHANIDE(III) COMPLEXES WITH N-BENZOYL-N-PHENYLHYDROXYLAMINE IN SEVERAL INERT SOLVENT SYSTEMS | The distribution equilibrium of lanthanides(III) (Ln) with N-benzoyl-N-phenylhydroxylamine (BPHA, HL) in several inert solvent systems was studied. The representative lanthanides(III) (Yb, Eu and Pr) were all found to be extracted as self-adduct chelates of the form LnL3(HL)m (m = 1–3), containing a different number of neutral adduct molecules in the extracted species. The differences in the number of neutral adduct molecules in the extracted species for each solvent are attributed to the difference in the solvation trend for BPHA. The extraction constant and separation factor were determined in several inert solvent systems. It was found that the distribution ratio of lanthanide(III) tends to decrease with increases in the distribution constant of BPHA. The relationship between the polarity of the solvent and the extractability is also discussed. |
Dynamic Action Repetition for Deep Reinforcement Learning | One of the long standing goals of Artificial Intelligence (AI) is to build cognitive agents which can perform complex tasks from raw sensory inputs without explicit supervision (Lake et al. 2016). Recent progress in combining Reinforcement Learning objective functions and Deep Learning architectures has achieved promising results for such tasks. An important aspect of such sequential decision making problems, which has largely been neglected, is for the agent to decide on the duration of time for which to commit to actions. Such action repetition is important for computational efficiency, which is necessary for the agent to respond in real-time to events (in applications such as self-driving cars). Action Repetition arises naturally in real life as well as simulated environments. The time scale of executing an action enables an agent (both humans and AI) to decide the granularity of control during task execution. Current state of the art Deep Reinforcement Learning models, whether they are off-policy (Mnih et al. 2015; Wang et al. 2015) or on-policy (Mnih et al. 2016), consist of a framework with a static action repetition paradigm, wherein the action decided by the agent is repeated for a fixed number of time steps regardless of the contextual state while executing the task. In this paper, we propose a new framework Dynamic Action Repetition which changes Action Repetition Rate (the time scale of repeating an action) from a hyper-parameter of an algorithm to a dynamically learnable quantity. At every decision-making step, our models allow the agent to commit to an action and the time scale of executing the action. We show empirically that such a dynamic time scale mechanism improves the performance on relatively harder games in the Atari 2600 domain, independent of the underlying Deep Reinforcement Learning algorithm used. |
Salient Object Detection and Segmentation | Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object extraction algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut for high quality salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information. |
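The segmentation step can be approximated with off-the-shelf tools. The hedged OpenCV sketch below seeds GrabCut from a saliency map; it substitutes OpenCV's spectral-residual saliency for the paper's regional-contrast method, and the file name and threshold are assumptions:

```python
# Requires opencv-contrib-python for the cv2.saliency module.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                        # hypothetical input image
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, sal_map = saliency.computeSaliency(img)          # saliency values in [0, 1]

mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)
mask[sal_map > 0.5] = cv2.GC_PR_FGD                  # salient pixels: probable foreground

bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
segmentation = np.isin(mask, [cv2.GC_FGD, cv2.GC_PR_FGD]).astype(np.uint8)
```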
HAND GESTURE RECOGNITION: A LITERATURE REVIEW | Hand gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. In this paper a survey of recent hand gesture recognition systems is presented. Key issues of hand gesture recognition systems are presented along with the challenges of gesture systems. Methods of recent posture and gesture recognition systems are reviewed as well. A summary of research results on hand gesture methods and databases, and a comparison between the main gesture recognition phases, are also given. Finally, the advantages and drawbacks of the discussed systems are explained. |
The significance of Streptococcus anginosus group in intracranial complications of pediatric rhinosinusitis. | OBJECTIVE
To assess the significance of the Streptococcus anginosus group in intracranial complications of pediatric patients with rhinosinusitis.
DESIGN
Retrospective cohort study.
SETTING
Tertiary pediatric hospital.
PATIENTS
A 20-year review of medical records identified patients with intracranial complications resulting from rhinosinusitis. In the 50 cases identified, S anginosus was the most commonly implicated bacterial pathogen in 14 (28%). Documented data included demographics, cultured bacteria, immune status, sinuses involved, type of intracranial complication, otolaryngologic surgical and neurosurgical intervention, type and duration of antibiotics used, and resulting neurologic deficits. Complications and outcomes of cases of S anginosus group-associated rhinosinusitis were compared with those of other bacteria.
MAIN OUTCOME MEASURES
The severity and outcomes of intracranial complications of pediatric rhinosinusitis due to S anginosus group bacteria compared with other bacteria.
RESULTS
Infection caused by the S anginosus group resulted in more severe intracranial complications (P = .001). In addition, patients with S anginosus group-associated infections were more likely to require neurosurgical intervention (P < .001) and develop long-term neurologic deficits (P = .02). Intravenous antibiotics were administered for a longer duration (P < .001) for S anginosus group-associated infections.
CONCLUSIONS
Rhinosinusitis associated with the S anginosus group should be considered a more serious infection relative to those caused by other pathogens. Streptococcus anginosus group bacteria are significantly more likely than other bacteria to cause more severe intracranial complications and neurologic deficits and to require neurosurgical intervention. A low threshold for intervention should be used for infection caused by this pathogen. |
Regularization Theory and Neural Networks | We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one hidden layer. |
Design versus Assessment of Concrete Structures Using Stress Fields and Strut-and-Tie Models | Stress fields and strut-and-tie models are widely used for design and assessment of structural concrete members. Although they are often used in the same manner for both purposes, developing suitable stress fields and strut-and-tie models for the design of a new structure or for assessment of the strength of an existing one should not necessarily be performed following the same approach. For design, simple load-carrying models in equilibrium with the external actions can be considered. From the various possibilities, those leading to better behavior at serviceability limit state and to simple reinforcement layouts should be selected (or a combination of them). For the assessment of existing structures, however, avoiding unnecessary strengthening (or minimizing it) should be the objective. Thus, simple stress fields or strut-and-tie models are to be iteratively refined whenever the calculated strength of the member is insufficient with respect to the design actions. This can be done by accounting for kinematic considerations to calculate the highest possible strength of the member accounting for its actual geometry and available reinforcement (allowing calculation of the exact solution according to limit analysis). In this paper, the differences between the two approaches for design and assessment are clarified and explained on the basis of some examples. A number of strategies are comprehensibly presented to obtain suitable stress fields and strut-and-tie models in both cases. The results of exact solutions according to limit analysis (developed using elastic-plastic stress fields) are finally compared to 150 tests of the literature showing the consistency and generality of the presented approaches. INTRODUCTION Currently, most codes for structural concrete are oriented to design of new structures. Yet assessing the strength of existing structures (and their retrofitting, if necessary) is becoming more and more a relevant task for structural engineers. This is influencing codes of practice, which start incorporating new concepts related to life cycle design |
Fast estimation of Gaussian mixture models for image segmentation | The expectation maximization algorithm has been classically used to find the maximum likelihood estimates of parameters in probabilistic models with unobserved data, for instance, mixture models. A key issue in such problems is the choice of the model complexity. The higher the number of components in the mixture, the higher will be the data likelihood, but also the higher will be the computational burden and data overfitting. In this work, we propose a clustering method based on the expectation maximization algorithm that adapts online the number of components of a finite Gaussian mixture model from multivariate data. Our method estimates the number of components and their means and covariances sequentially, without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and sequentially splits it incrementally during expectation maximization steps. The coarse-to-fine nature of the algorithm reduces the overall number of computations to achieve a solution, which makes the method particularly suited to image segmentation applications whenever computational time is an issue. We show the effectiveness of the method in a series of experiments and compare it with a state-of-the-art alternative technique both with synthetic data and real images, including experiments with images acquired from the iCub humanoid robot. |
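The sketch below conveys the general idea of adapting the number of mixture components during fitting, using scikit-learn and a BIC stopping rule as a stand-in; the paper's actual algorithm splits components incrementally inside EM, which this toy version does not reproduce:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_adaptive_gmm(X, max_components=10):
    """Grow the number of Gaussian components until the BIC stops improving."""
    best = None
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(X)
        if best is None or gmm.bic(X) < best.bic(X):
            best = gmm
        else:
            break          # an extra component no longer pays for its complexity
    return best

X = np.vstack([np.random.randn(200, 2) + c for c in ([0, 0], [5, 5], [0, 6])])
gmm = fit_adaptive_gmm(X)
labels = gmm.predict(X)    # cluster / segmentation labels, one per sample
print(gmm.n_components)
```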
Earned scope management: a case study of scope performance using use cases as scope in a real project | Three of the major constraints on projects are scope (size), cost (money), and time (schedule). While the Earned Value Management (EVM) and Earned Schedule (ES) techniques manage cost and time constraints, they do not explicitly tackle the scope constraint. While scope management is recognized as a key success factor in software projects, there is a lack of formal techniques in the literature to manage scope. This paper proposes an Earned Scope Management (ESM) technique based on non-conventional scope units such as Use Cases, for a software project scope. |
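To make the idea concrete, here is a small arithmetic sketch of EVM-style indices extended with a use-case-based scope index; the formula for the scope index and all numbers are assumptions for illustration, not taken from the paper:

```python
# Classic EVM quantities plus a hypothetical earned-scope analogue counted in use cases.
planned_use_cases   = 40       # use cases planned to be completed by now
completed_use_cases = 32       # use cases actually completed
budgeted_cost       = 200_000  # planned value of the work scheduled so far (PV)
earned_value        = 160_000  # budgeted cost of the work actually done (EV)
actual_cost         = 180_000  # money actually spent (AC)

cpi = earned_value / actual_cost                     # cost performance index (EVM)
spi = earned_value / budgeted_cost                   # schedule performance index (EVM)
scope_pi = completed_use_cases / planned_use_cases   # assumed earned-scope index

print(f"CPI={cpi:.2f}  SPI={spi:.2f}  Scope PI={scope_pi:.2f}")
```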
A two-output series-resonant inverter for induction-heating cooking appliances | Multiple-burner induction-heating cooking appliances are suitable for using multiple-output inverters. Some common approaches use several single-output inverters or a single-output inverter multiplexing the loads along the time periodically. By specifying a two-output series-resonant high-frequency inverter, a new inverter is obtained fulfilling the requirements. The synthesized converter can be considered as a two-output extension of a full-bridge topology. It allows the control of the two outputs, simultaneously and independently, up to their rated powers, saving component count compared with the two-converter solution and providing a higher utilization of electronics. To verify theoretical predictions, the proposed converter is designed and tested experimentally in an induction-heating appliance prototype. A fixed-frequency control strategy is digitally implemented with good final performances for the application, including ZVS operation for active devices and a quick heating function. Although the work is focused on low-power induction heating, it can probably be useful for other power electronic applications. |
Bifurcation and chaos in the double well Duffing-van der Pol oscillator: Numerical and analytical studies | The behaviour of a driven double well Duffing-van der Pol (DVP) oscillator for a specific parametric choice (|α| = β) is studied. The existence of different attractors in the system parameters (f − ω) domain is examined and a detailed account of various steady states for fixed damping is presented. Transition from quasiperiodic to periodic motion through chaotic oscillations is reported. The intervening chaotic regime is further shown to possess islands of phase-locked states and periodic windows (including period doubling regions), boundary crisis, all the three classes of intermittencies, and transient chaos. We also observe the existence of local-global bifurcation of intermittent catastrophe type and global bifurcation of blue-sky catastrophe type during transition from quasiperiodic to periodic solutions. Using a perturbative periodic solution, an investigation of the various forms of instabilities allows one to predict Neimark instability in the (f − ω) plane and eventually results in the approximate predictive criteria for the chaotic region. |
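For readers who want to reproduce such dynamics numerically, the sketch below integrates one common parameterization of the driven double-well DVP oscillator with SciPy; the exact equation form and parameter values are assumptions, not those of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One common form: x'' - mu*(1 - x^2)*x' + alpha*x + beta*x^3 = f*cos(omega*t),
# with alpha = -beta < 0 giving the double-well potential (so |alpha| = beta).
mu, alpha, beta = 0.1, -1.0, 1.0
f, omega = 0.25, 0.8

def dvp(t, y):
    x, v = y
    return [v, mu * (1 - x**2) * v - alpha * x - beta * x**3 + f * np.cos(omega * t)]

sol = solve_ivp(dvp, (0.0, 500.0), [0.1, 0.0], max_step=0.05)
x, v = sol.y   # trajectories for phase portraits, Poincare sections, etc.
```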
The Pitt-Westinghouse graduate program | Recognizing that some of the highest types of graduate engineering instruction are to be found in a few of the larger research and industrial organizations, the University of Pittsburgh (Pa.) and the Westinghouse company during the past several years have cooperated in a joint program of graduate study for certain designated employees of the company. Besides being of distinct benefit to the individual employee, this work has had a highly stimulating influence upon the faculty members of the university connected with it, and the results have been most gratifying to the industry. |
Learning Salient Features for Speech Emotion Recognition Using Convolutional Neural Networks | As an essential way of human emotional behavior understanding, speech emotion recognition (SER) has attracted a great deal of attention in human-centered signal processing. Accuracy in SER heavily depends on finding good affect-related, discriminative features. In this paper, we propose to learn affect-salient features for SER using convolutional neural networks (CNN). The training of the CNN involves two stages. In the first stage, unlabeled samples are used to learn local invariant features (LIF) using a variant of sparse auto-encoder (SAE) with reconstruction penalization. In the second stage, LIF is used as the input to a feature extractor, salient discriminative feature analysis (SDFA), to learn affect-salient, discriminative features using a novel objective function that encourages feature saliency, orthogonality, and discrimination for SER. Our experimental results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and language variation, and environment distortion) and outperforms several well-established SER features. |
The effect of endurance training on muscle strength in young, healthy men in relation to hormonal status. | The objective of this study was to establish the effect of moderate intensity endurance training on muscle strength in relation to hormonal changes in the body. Fifteen young, healthy men took part in a 5-week endurance training program performed on a cycloergometer. Before and after the training program, exercise testing sessions were performed involving all participants. The training program significantly increased V(O2 max) (P<0.05) and time to fatigue at 50% of maximal voluntary isometric contraction (TTF 50% MVC), P<0.03, but it did not affect maximal voluntary isometric contraction (MVC). This was accompanied by an increase (P<0.001) in total plasma testosterone (T) and free testosterone (fT) concentrations, whereas a decrease in sex hormone-binding globulin (SHBG) (P<0.02), growth hormone (P<0.05), free triiodothyronine (P<0.001) and free thyroxine (P<0.02) concentrations was observed. No changes were found in plasma cortisol (C) and insulin-like growth factor-I (IGF-I) concentrations. Additionally, MVC was positively correlated to T/C, fT/C and IGF-I/C ratios after the training, whereas time to fatigue at 50% of MVC was closely positively correlated to the SHBG concentration, both before and after endurance training. We have concluded that moderate intensity endurance training resulting in a significant increase in V(O2 max), did not affect the MVC, but it significantly increased time to fatigue at 50% of MVC. This index of local muscular endurance was greater in subjects with a higher concentration of SHBG, both before and after the training. |
KEEPING YOUR HEADCOUNT WHEN ALL ABOUT YOU ARE LOSING THEIRS: DOWNSIZING, VOLUNTARY TURNOVER RATES, AND THE MODERATING ROLE OF HR PRACTICES | Although both downsizing and voluntary turnover have been topics of great interest in the organizational literature, little research has addressed the possible relationship between the two. Using organization-level data from multiple industries, we first investigate whether downsizing predicts voluntary turnover rates. Second, to lend support to our causal model, we examine whether this relationship is mediated by aggregated levels of organizational commitment. Third, we test whether the downsizing-turnover rate relationship: (1) is mitigated by HR practices that tend to embed employees in the organization or convey procedural fairness; and (2) is strengthened by HR practices that enhance career development. Results support the hypothesized main, mediated, and moderated effects. |
Bayesian Map Learning in Dynamic Environments | We consider the problem of learning a grid-based map using a robot with noisy sensors and actuators. We compare two approaches: online EM, where the map is treated as a fixed parameter, and Bayesian inference, where the map is a (matrix-valued) random variable. We show that even on a very simple example, online EM can get stuck in local minima, which causes the robot to get "lost" and the resulting map to be useless. By contrast, the Bayesian approach, by maintaining multiple hypotheses, is much more robust. We then introduce a method for approximating the Bayesian solution, called Rao-Blackwellised particle filtering. We show that this approximation, when coupled with an active learning strategy, is fast but accurate. |
Three-dimensional cluster analysis identifies interfaces and functional residue clusters in proteins. | Three-dimensional cluster analysis offers a method for the prediction of functional residue clusters in proteins. This method requires a representative structure and a multiple sequence alignment as input data. Individual residues are represented in terms of regional alignments that reflect both their structural environment and their evolutionary variation, as defined by the alignment of homologous sequences. From the overall (global) and the residue-specific (regional) alignments, we calculate the global and regional similarity matrices, containing scores for all pairwise sequence comparisons in the respective alignments. Comparing the matrices yields two scores for each residue. The regional conservation score (C(R)(x)) defines the conservation of each residue x and its neighbors in 3D space relative to the protein as a whole. The similarity deviation score (S(x)) detects residue clusters with sequence similarities that deviate from the similarities suggested by the full-length sequences. We evaluated 3D cluster analysis on a set of 35 families of proteins with available cocrystal structures, showing small ligand interfaces, nucleic acid interfaces and two types of protein-protein interfaces (transient and stable). We present two examples in detail: fructose-1,6-bisphosphate aldolase and the mitogen-activated protein kinase ERK2. We found that the regional conservation score (C(R)(x)) identifies functional residue clusters better than a scoring scheme that does not take 3D information into account. C(R)(x) is particularly useful for the prediction of poorly conserved, transient protein-protein interfaces. Many of the proteins studied contained residue clusters with elevated similarity deviation scores. These residue clusters correlate with specificity-conferring regions: 3D cluster analysis therefore represents an easily applied method for the prediction of functionally relevant spatial clusters of residues in proteins. |
Measurement of Quality of Life III. From the IQOL Theory to the Global, Generic SEQOL Questionnaire | The Danish Quality of Life Survey is based on the philosophy of life known as the integrative quality-of-life (IQOL) theory. It consists of eight different quality-of-life concepts, ranging from the superficially subjective via the deeply existential to the superficially objective (well being, satisfaction with life, happiness, meaning in life, biological order, realizing life potential, fulfillment of needs, and objective factors [ability of functioning and fulfilling societal norms]). This paper presents the work underlying the formulation of the theories of a good life and how these theories came to be expressed in a comprehensive, multidimensional, generic questionnaire for the evaluation of the global quality of life--SEQOL (self-evaluation of quality of life)--presented in full length in this paper. The instruments and theories on which the Quality of Life Survey was based are constantly being updated. It is an on-going process due to aspects such as human development, language, and culture. We arrived at eight rating scales for the quality of life that, guided by the IQOL theory, were combined into a global and generic quality-of-life rating scale. This was simplified to the validated QOL5 with only five questions, made for use in clinical databases. Unfortunately, the depth of human existence is to some extent lost in QOL5. We continue to aim towards greater simplicity, precision, and depth in the questions in order to explore the depths of human existence. We have not yet found a final form that enables us to fully rate the quality of life in practice. We hope that the several hundred questions we found necessary to adequately implement the theories of the Quality of Life Survey can be replaced by far fewer; ideally, only eight questions representing the eight component theories. These eight ideal questions have not yet been evaluated, and therefore they should not form the basis of a survey. However, the perspective is clear. If eight simple questions can accurately rate the quality of life as well as its depth, we have found an instrument of immense practical scope. |
Families, murder, and insanity: a psychiatric review of paternal neonaticide. | Neonaticide is the killing of a newborn within the first 24 h of life. Although relatively uncommon, numerous cases of maternal neonaticide have been reported. To date, only two cases of paternal neonaticide have appeared in the literature. The authors review neonaticide and present two new case reports of paternal neonaticide. A psychodynamic explanation of paternal neonaticide is formulated. A new definition for neonaticide, more consistent with biological and psychological determinants, is suggested. |
Patellofemoral pain in athletes: clinical perspectives | Patellofemoral pain (PFP) is a very common problem in athletes who participate in jumping, cutting and pivoting sports. Several risk factors may play a part in the pathogenesis of PFP. Overuse, trauma and intrinsic risk factors are particularly important among athletes. Physical examination has a key role in PFP diagnosis. Furthermore, common risk factors should be investigated, such as hip muscle dysfunction, poor core muscle endurance, muscular tightness, excessive foot pronation and patellar malalignment. Imaging is seldom needed, and only in special cases. Many possible interventions are recommended for PFP management. Due to the multifactorial nature of PFP, the clinical approach should be individualized, and the contribution of different factors should be considered and managed accordingly. In most cases, activity modification and rehabilitation should be tried before any surgical interventions. |
Prolonging Sensor Network Lifetime Through Wireless Charging | The emerging wireless charging technology is a promising alternative to address the power constraint problem in sensor networks. Compared to existing approaches, this technology can replenish energy in a more controllable manner and does not require accurate location of or physical alignment to sensor nodes. However, little work has been reported on designing and implementing a wireless charging system for sensor networks. In this paper, we design such a system, build a proof-of-concept prototype, conduct experiments on the prototype to evaluate its feasibility and performance in small-scale networks, and conduct extensive simulations to study its performance in large-scale networks. Experimental and simulation results demonstrate that the proposed system can utilize the wireless charging technology effectively to prolong the network lifetime through delivering energy by a robot to where it is needed. The effects of various configuration and design parameters have also been studied, which may serve as useful guidelines in actual deployment of the proposed system in practice. |
A Deep Semantic Natural Language Processing Platform | This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily. It is built on top of PyTorch, allowing for dynamic computation graphs, and provides (1) a flexible data API that handles intelligent batching and padding, (2) high-level abstractions for common operations in working with text, and (3) a modular and extensible experiment framework that makes doing good science easy. It also includes reference implementations of high quality approaches for both core semantic problems (e.g. semantic role labeling (Palmer et al., 2005)) and language understanding applications (e.g. machine comprehension (Rajpurkar et al., 2016)). AllenNLP is an ongoing open-source effort maintained by engineers and researchers at the Allen Institute for Artificial Intelligence. |
‘Public service motivation’ as an argument for government provision | A public service motivation (PSM) inclines employees to provide effort out of concern for the impact of that effort on a valued social service. Though deemed to be important in the literature on public administration, this motivation has not been formally considered by economists. When a PSM exists, this paper establishes conditions under which government bureaucracy can better obtain PSM motivated effort from employees than a standard profit maximizing firm. The model also provides an efficiency rationale for low-powered incentives in both bureaucracies and other organizations producing social services. |
Arterial Limb Microemboli during Cardiopulmonary Bypass: Observations from a Congenital Cardiac Surgery Practice. | Gaseous microemboli (GME) are known to be delivered to the arterial circulation of patients during cardiopulmonary bypass (CPB). An increased number of GME delivered during adult CPB has been associated with brain injury and postoperative cognitive dysfunction. The GME load in children exposed to CPB and its consequences are not well characterized. We sought to establish a baseline of arterial limb emboli counts during the conduct of CPB for our population of patients requiring surgery for congenital heart disease. We used the emboli detection and counting (EDAC) device to measure GME activity in 103 consecutive patients for which an EDAC machine was available. Emboli counts for GME <40 μ and >40 μ were quantified and indexed to CPB time (minutes) and body surface area (BSA) to account for the variation in patient size and CPB times. Patients of all sizes had a similar embolic burden when indexed to bypass time and BSA. Furthermore, patients of all sizes saw a three-fold increase in the <40 μ embolic burden and a five-fold increase in the >40 μ embolic burden when regular air was noted in the venous line. The use of kinetic venous-assisted drainage did not significantly increase arterial limb GME. Efforts for early identification and mitigation of venous line air are warranted to minimize GME transmission to congenital cardiac surgery patients during CPB. |
An inductorless linear optical receiver for 20Gbaud/s (40Gb/s) PAM-4 modulation using 28nm CMOS | This paper presents a linear optical receiver designed using a 28nm CMOS technology suitable for 20Gbaud/s (40Gb/s) PAM-4 modulation. The optical receiver consists of a transimpedance amplifier (gain adjustable from 40dBΩ to 56dBΩ) followed by a variable gain amplifier (gain adjustable from 6dB to 17dB). Capacitive peaking is used to achieve a bandwidth of ~10GHz, thus avoiding the use of on-chip inductors which require large die area. A robust automatic gain control loop is used to ensure a constant differential output voltage swing of ~100mV for an input dynamic range of 20μA to 500μA (peak current). Over this same range, high linearity (total harmonic distortion less than 5%, 250MHz sinewave, 10 harmonics taken into account) is obtained. The rms input referred noise current (integrated from 10MHz to 20GHz) was 2.5 μArms. The linear optical receiver consumes 56mW from a 1.5V supply voltage. |
Multiple Fault Diagnosis Method in Multistation Assembly Processes Using Orthogonal Diagonalization Analysis | Dimensional control has a significant impact on overall product quality and performance of large and complex multistation assembly systems. To date, the identification of process-related faults that cause large variations of key product characteristics (KPCs) remains one of the most critical research topics in dimensional control. This paper proposes a new approach for multiple fault diagnosis in a multistation assembly process by integrating multivariate statistical analysis with engineering models. The proposed method is based on the following steps: (i) modeling of fault patterns obtained using state space representation of process and product information that explicitly represents the relationship between process-related error sources denoted by key control characteristics (KCCs) and KPCs, and (ii) orthogonal diagonalization of measurement data using principal component analysis (PCA) to project measurement data onto the axes of an affine space formed by the predetermined fault patterns. Orthogonal diagonalization allows estimating the statistical significance of the root cause of the identified fault. A case study of fault diagnosis for a multistation assembly process illustrates and validates the proposed methodology. |
Making Sense of the Comparative Advantage Gains from Trade: Comment on Batra | This paper addresses Ravi Batra's (2002) criticism of the basic comparative advantage gains-from-trade model. While Batra's criticism is based on the selection view interpretation of real income, the gains from trade can only be properly understood from the options view interpretation of real income. I also show how a recent empirical implementation of the gains-from-trade model defies Batra's claim that "the consumption gain . . . is not subject to measurement" (2002, p. 642). |
Genetic improvement of Pacific white shrimp [Penaeus (Litopenaeus) vannamei]: perspectives for genomic selection | The uses of breeding programs for the Pacific white shrimp [Penaeus (Litopenaeus) vannamei] based on mixed linear models with pedigreed data are described. The application of these classic breeding methods yielded continuous progress of great value to increase the profitability of the shrimp industry in several countries. Recent advances in such areas as genomics in shrimp will allow for the development of new breeding programs in the near future that will increase genetic progress. In particular, these novel techniques may help increase disease resistance to specific emerging diseases, which is today a very important component of shrimp breeding programs. Thanks to increased selection accuracy, simulated genetic advance using genomic selection for survival to a disease challenge was up to 2.6 times that of phenotypic sib selection. |
Inhaled corticosteroid therapy reduces the early morning peak in cortisol and aldosterone. | 1. As mineralocorticoid and adrenocorticoid activity are both under the diurnal control of adrenocorticotropic hormone secretion, we aimed to evaluate whether the normal circadian rhythm of cortisol and aldosterone secretion was suppressed by inhaled corticosteroid therapy. 2. Ten normotensive patients with mild-moderate asthma, mean age 24.0 (S.D. 9.8) years and mean arterial pressure 90.7 (9.8) mmHg, were studied in a double-blind, randomized crossover design comparing placebo with fluticasone propionate, 1000 microgram administered twice daily at 08:00 h and 20:00 h. After 5 days of repeated dosing at steady state, measurements were made of plasma cortisol and aldosterone at midnight and 08:00 h. 3. With placebo there was a significant (P<0.05) difference between cortisol values at 08:00 h (588.6+/-83.8 nmol/l) and midnight (109.6+/-35.0 nmol/l), whereas after treatment with fluticasone propionate there was no significant difference between levels at 08:00 h (143.3+/-57.4 nmol/l) and midnight (64.3+/-22.3 nmol/l). For cortisol at 08:00 h there was also a significant (P<0.05) difference between placebo and fluticasone propionate. The same pattern was observed for aldosterone. Plasma aldosterone levels at 08:00 h after treatment with placebo (129.6+/-30.9 nmol/l) were significantly different (P<0.05) to those seen at midnight (40.4+/-6.2 nmol/l). After treatment with fluticasone propionate, there was no significant difference between levels at midnight (55.4+/-11.7 nmol/l) and 08:00 h (64.8+/-12.7 nmol/l). 4. These results show that inhaled corticosteroid therapy abolishes the circadian rhythm of aldosterone and cortisol secretion. This may have possible implications for patients taking inhaled corticosteroids in terms of the beneficial cardiac effects of suppressing early morning aldosterone. |
Reconstruction for Feature Disentanglement in Pose-invariant Face Recognition | Deep neural networks (DNNs) trained on large-scale datasets have recently achieved impressive improvements in face recognition. But a persistent challenge remains to develop methods capable of handling large pose variations that are relatively under-represented in training data. This paper presents a method for learning a feature representation that is invariant to pose, without requiring extensive pose coverage in training data. We first propose to use a synthesis network for generating non-frontal views from a single frontal image, in order to increase the diversity of training data while preserving accurate facial details that are critical for identity discrimination. Our next contribution is a multi-source multi-task DNN that seeks a rich embedding representing identity information, as well as information such as pose and landmark locations. Finally, we propose a Siamese network to explicitly disentangle identity and pose, by demanding alignment between the feature reconstructions through various combinations of identity and pose features obtained from two images of the same subject. Experiments on face datasets in both controlled and wild scenarios, such as MultiPIE, LFW and 300WLP, show that our method consistently outperforms the state-of-the-art, especially on images with large head pose variations. |
Ethanol-stimulated behaviour in mice is modulated by brain catalase activity and H2O2 rate of production | Abstract Rationale. Over the last few years, a role for the brain catalase-H2O2 enzymatic system has been suggested in the behavioural effects observed in rodents after ethanol administration. This role seems to be related to the ability of cerebral catalase to metabolise ethanol to acetaldehyde using H2O2 as a co-substrate. On the other hand, it has been shown that normobaric hyperoxia increases the rate of cerebral H2O2 production in rodents in vivo. Thus, substrate-level changes could regulate brain catalase activity, thereby modulating the behavioural effects of ethanol. Objectives. The aim of the present study was to assess if the enhancement of cerebral H2O2 production after hyperoxia exposure results in a boost of ethanol-induced locomotion in mice. Methods. CD-1 mice were exposed to air or 99.5% O2 inhalation (for 15, 30, or 45 min) and 0, 30, 60 or 120 min after this treatment, ethanol-induced locomotion was measured. The H2O2-mediated inactivation of endogenous brain catalase activity following an injection of 3-amino-1,2,4-triazole was used as a measure of the rate of cerebral H2O2 production. Results. Hyperoxia exposure (30 or 45 min) potentiated the locomotor-stimulating effects of ethanol (2.5 or 3.0 g/kg), whereas cocaine (4 mg/kg) or caffeine (15 mg/kg)-induced locomotion and blood ethanol levels were unaffected. Moreover, the results also confirmed brain H2O2 overproduction in mice. Conclusions. The present results suggest that an increase in brain H2O2 production potentiates ethanol-induced locomotion. Therefore, this study provides further support for the notion that the brain catalase-H2O2 system, and by implication centrally formed acetaldehyde, plays a key role in the mediation of ethanol's psychopharmacological effects. |
Genome-Wide Study of Gene Variants Associated with Differential Cardiovascular Event Reduction by Pravastatin Therapy | Statin therapy reduces the risk of coronary heart disease (CHD), however, the person-to-person variability in response to statin therapy is not well understood. We have investigated the effect of genetic variation on the reduction of CHD events by pravastatin. First, we conducted a genome-wide association study of 682 CHD cases from the Cholesterol and Recurrent Events (CARE) trial and 383 CHD cases from the West of Scotland Coronary Prevention Study (WOSCOPS), two randomized, placebo-controlled studies of pravastatin. In a combined case-only analysis, 79 single nucleotide polymorphisms (SNPs) were associated with differential CHD event reduction by pravastatin according to genotype (P<0.0001), and these SNPs were analyzed in a second stage that included cases as well as non-cases from CARE and WOSCOPS and patients from the PROspective Study of Pravastatin in the Elderly at Risk/PHArmacogenomic study of Statins in the Elderly at risk for cardiovascular disease (PROSPER/PHASE), a randomized placebo controlled study of pravastatin in the elderly. We found that one of these SNPs (rs13279522) was associated with differential CHD event reduction by pravastatin therapy in all 3 studies: P = 0.002 in CARE, P = 0.01 in WOSCOPS, P = 0.002 in PROSPER/PHASE. In a combined analysis of CARE, WOSCOPS, and PROSPER/PHASE, the hazard ratio for CHD when comparing pravastatin with placebo decreased by a factor of 0.63 (95% CI: 0.52 to 0.75) for each extra copy of the minor allele (P = 4.8 × 10(-7)). This SNP is located in DnaJ homolog subfamily C member 5B (DNAJC5B) and merits investigation in additional randomized studies of pravastatin and other statins. |
Validity and reliability of a novel immunosuppressive adverse effects scoring system in renal transplant recipients | BACKGROUND
After renal transplantation, many patients experience adverse effects from maintenance immunosuppressive drugs. When these adverse effects occur, patient adherence with immunosuppression may be reduced and impact allograft survival. If these adverse effects could be prospectively monitored in an objective manner and possibly prevented, adherence to immunosuppressive regimens could be optimized and allograft survival improved. Prospective, standardized clinical approaches to assess immunosuppressive adverse effects by health care providers are limited. Therefore, we developed and evaluated the application, reliability and validity of a novel adverse effects scoring system in renal transplant recipients receiving calcineurin inhibitor (cyclosporine or tacrolimus) and mycophenolic acid based immunosuppressive therapy.
METHODS
The scoring system included 18 non-renal adverse effects organized into gastrointestinal, central nervous system and aesthetic domains developed by a multidisciplinary physician group. Nephrologists employed this standardized adverse effect evaluation in stable renal transplant patients using physical exam, review of systems, recent laboratory results, and medication adherence assessment during a clinic visit. Stable renal transplant recipients in two clinical studies were evaluated and received immunosuppressive regimens comprised of either cyclosporine or tacrolimus with mycophenolic acid. Face, content, and construct validity were assessed to document these adverse effect evaluations. Inter-rater reliability was determined using the Kappa statistic and intra-class correlation.
RESULTS
A total of 58 renal transplant recipients were assessed using the adverse effects scoring system, confirming face validity. Nephrologists (subject matter experts) rated the clinical importance of the 18 adverse effects at 3.1 ± 0.75 out of a maximum of 4, verifying content validity. The adverse effects scoring system distinguished a 1.75-fold increase in gastrointestinal adverse effects (p=0.008) in renal transplant recipients receiving tacrolimus and mycophenolic acid compared to the cyclosporine regimen. This finding demonstrated construct validity. The intra-class correlation was 0.81 (95% confidence interval: 0.65-0.90) and the Kappa statistic was 0.68 ± 0.25 for all 18 adverse effects, verifying substantial inter-rater reliability.
CONCLUSIONS
This immunosuppressive adverse effects scoring system in stable renal transplant recipients was evaluated and substantiated face, content and construct validity with inter-rater reliability. The scoring system may facilitate prospective, standardized clinical monitoring of immunosuppressive adverse drug effects in stable renal transplant recipients and improve medication adherence. |
M4: A Visualization-Oriented Time Series Data Aggregation | Visual analysis of high-volume time series data is ubiquitous in many industries, including finance, banking, and discrete manufacturing. Contemporary, RDBMS-based systems for visualization of high-volume time series data have difficulty coping with the hard latency requirements and high ingestion rates of interactive visualizations. Existing solutions for lowering the volume of time series data disregard the semantics of visualizations and result in visualization errors. In this work, we introduce M4, an aggregation-based time series dimensionality reduction technique that provides error-free visualizations at high data reduction rates. Focusing on line charts, as the predominant form of time series visualization, we explain in detail the drawbacks of existing data reduction techniques and how our approach outperforms the state of the art, by respecting the process of line rasterization. We describe how to incorporate aggregation-based dimensionality reduction at the query level in a visualization-driven query rewriting system. Our approach is generic and applicable to any visualization system that uses an RDBMS as a data source. Using real world data sets from high tech manufacturing, stock markets, and sports analytics domains we demonstrate that our visualization-oriented data aggregation can reduce data volumes by up to two orders of magnitude, while preserving perfect visualizations. |
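The core of M4 can be stated in a few lines: per pixel column, keep only the rows carrying the first, last, minimum, and maximum values. The pandas sketch below illustrates this idea; it is an in-memory illustration rather than the paper's SQL query rewriting, and the column names and chart width are assumptions:

```python
import pandas as pd

def m4(df, t_col="t", v_col="v", width=800):
    """Keep, per pixel column, the rows with the first, last, min, and max values."""
    t0, t1 = df[t_col].min(), df[t_col].max()
    col = ((df[t_col] - t0) / (t1 - t0) * (width - 1)).astype(int)  # pixel column index
    groups = df.groupby(col)
    keep = pd.concat([
        groups[t_col].idxmin(), groups[t_col].idxmax(),   # first and last point
        groups[v_col].idxmin(), groups[v_col].idxmax(),   # min and max value
    ]).unique()
    return df.loc[sorted(keep)]

# Usage: reduce a million-point series to at most 4 * width rows before plotting.
# reduced = m4(raw_dataframe, width=1000)
```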
Significance of Reduced Features for Subcellular Bioimage Classification | High-throughput screening (HTS) systems can produce thousands of images containing millions of cells. An expert can categorize each cell's phenotype by visual inspection under a microscope, but this manual approach is inefficient because image acquisition systems produce massive amounts of cell image data per hour. Therefore, we propose an automated and efficient machine-learning model for phenotype detection from an HTS system. Our goal is to find the most distinctive features (using feature selection and reduction) that provide the best phenotype classification, in terms of both accuracy and validation time, from the feature pool. First, we used minimum redundancy maximum relevance (mRMR) to select the most discriminant features and evaluated their impact on model performance with a support vector machine (SVM) classifier. Second, we used principal component analysis (PCA) to reduce the feature set to the most relevant components. The main difference is that mRMR does not transform the original features, unlike PCA. We then compared the overall classification accuracy of the original features (1025 features) with the accuracies obtained after feature selection and reduction (∼30 features). The feature selection method gave higher accuracy than both the reduced and the original feature sets. We validated and evaluated our model on a well-known benchmark problem (the HeLa dataset), achieving a classification accuracy of 92.70% with a validation time of 0.41 seconds. |
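A rough scikit-learn sketch of the two routes compared above follows. mRMR is not part of scikit-learn, so the selection step here is a common greedy approximation (mutual information for relevance, mean absolute correlation for redundancy); the synthetic data, the 30-feature budget and the SVM settings are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mrmr_select(X, y, k=30):
    """Greedy mRMR-style selection: maximize relevance (MI with the label)
    minus redundancy (mean |correlation| with already selected features)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        remaining = [i for i in range(X.shape[1]) if i not in selected]
        scores = [relevance[i] - corr[i, selected].mean() for i in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

X, y = make_classification(n_samples=500, n_features=200, n_informative=25, random_state=0)
svm = SVC(kernel="rbf", gamma="scale")

sel = mrmr_select(X, y, k=30)
print("mRMR-style selection + SVM:", cross_val_score(svm, X[:, sel], y, cv=5).mean())

X_pca = PCA(n_components=30, random_state=0).fit_transform(X)
print("PCA reduction        + SVM:", cross_val_score(svm, X_pca, y, cv=5).mean())
```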
An accelerated gradient method for trace norm minimization | We consider the minimization of a smooth loss function regularized by the trace norm of the matrix variable. Such a formulation finds applications in many machine learning tasks, including multi-task learning, matrix classification, and matrix completion. The standard semidefinite programming formulation for this problem is computationally expensive. In addition, due to the non-smooth nature of the trace norm, the optimal first-order black-box method for solving this class of problems converges as O(1/√k), where k is the iteration counter. In this paper, we exploit the special structure of the trace norm, based on which we propose an extended gradient algorithm that converges as O(1/k). We further propose an accelerated gradient algorithm, which achieves the optimal convergence rate of O(1/k²) for smooth problems. Experiments on multi-task learning problems demonstrate the efficiency of the proposed algorithms. |
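The structure being exploited is that the proximal operator of the trace norm is singular value thresholding, which allows a Nesterov-style accelerated proximal gradient method to reach the O(1/k²) rate. The numpy sketch below shows that standard FISTA-style scheme on a toy matrix-completion loss; it illustrates the general recipe rather than reproducing the paper's exact algorithm or step-size rules.

```python
import numpy as np

def svt(W, tau):
    """Proximal operator of tau*||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def accelerated_trace_norm(grad_f, L, lam, shape, iters=200):
    """FISTA-style accelerated gradient for min_W f(W) + lam*||W||_*,
    assuming grad_f is Lipschitz with constant L (O(1/k^2) rate)."""
    W = Z = np.zeros(shape)
    t = 1.0
    for _ in range(iters):
        W_next = svt(Z - grad_f(Z) / L, lam / L)          # gradient step + prox
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_next + ((t - 1.0) / t_next) * (W_next - W)  # momentum / extrapolation
        W, t = W_next, t_next
    return W

# Toy matrix-completion instance: observe 40% of the entries of a rank-3 matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 60))
mask = rng.random(M.shape) < 0.4
grad = lambda W: mask * (W - M)          # gradient of 0.5*||mask*(W-M)||_F^2, L = 1
W_hat = accelerated_trace_norm(grad, L=1.0, lam=1.0, shape=M.shape)
print("relative error:", np.linalg.norm(W_hat - M) / np.linalg.norm(M))
```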
Serotonin and dopamine receptor gene polymorphisms and the risk of extrapyramidal side effects in perphenazine-treated schizophrenic patients | Perphenazine, a classical antipsychotic drug, has the potential to induce extrapyramidal side effects (EPS). Dopaminergic and serotonergic pathways are involved in the therapeutic and adverse effects of the drug. The aim was to evaluate the impact of polymorphisms in the dopamine D2 and D3 and serotonin 2A and 2C receptor genes (DRD2, DRD3, HTR2A, and HTR2C) on the short-term effects of perphenazine monotherapy in schizophrenic patients. Forty-seven Estonian inpatients were evaluated before and after 4–6 weeks of treatment using the Simpson–Angus rating scale, the Barnes scale, and the Positive and Negative Syndrome Scale. Genotyping was performed for common DRD2, DRD3, HTR2A, and HTR2C gene polymorphisms previously reported to influence receptor expression and/or function. Most of the patients (n = 37) responded to the treatment, and no significant association was observed between the polymorphisms and antipsychotic response. The 102C allele of HTR2A and the −697C and 23Ser alleles of HTR2C were more frequent among patients with EPS (n = 25) than among patients without EPS (n = 22) (p = 0.02, 0.01, and 0.02, respectively). The difference in variant allele frequencies between patients with and without EPS remained significant after multiple model analyses including age, gender, and duration of antipsychotic treatment as covariates. There was no significant association between EPS occurrence and polymorphisms in the DRD2 and DRD3 genes. An association was observed between polymorphisms in the HTR2A and HTR2C genes and the occurrence of acute EPS in schizophrenic patients treated with perphenazine monotherapy. Larger study populations are needed to confirm our findings. |
(Re)-conceptualizing design approaches for mobile language learning | An exploratory study conducted at George Brown College in Toronto, Canada between 2007 and 2009 investigated language learning with mobile devices as an approach to augmenting ESP learning by taking learning outside the classroom into the real-world context. In common with findings at other community colleges, this study identified inadequate language proficiency, particularly in speaking and listening skills, as a major barrier for ESL college learners seeking employment, or employers hiring and retaining immigrants as employees (CIITE, 2004; Palalas, 2009). As a result of these findings, language support was designed to provide English language instruction going beyond the standard 52-hour course: a hybrid English for Accounting course encompassing in-class, online and mobile-assisted ESP instruction. This paper reports on the pilot study of the mobile component of this re-designed course, which represents the first stage of an on-going Design-Based Research (DBR) study. Discussion is also offered of a new learning theory which we have called Ecological Constructivism (Hoven, 2008; Jakobsdottir, McKeown & Hoven, 2010), devised to incorporate the multiple dimensions of Ecological Linguistics and Constructivism in the situated and context-embedded learning engendered by these new uses of mobile devices. |
Switch from 200 to 350 CD4 baseline count: what it means to HIV care and treatment programs in Kenya | INTRODUCTION
With the increasing population of infected individuals in Africa and constrained resources for care and treatment, antiretroviral management continues to be an important public health challenge. Since the announcement of the World Health Organization recommendation and guidelines for initiation of antiretroviral treatment at CD4 counts below 350, many developing countries have been adopting this strategy in their country-specific guidelines for the care and treatment of HIV and AIDS. Despite the benefits of these recommendations, what does this switch from a 200 to a 350 CD4 count threshold mean for antiretroviral treatment demand?
METHODS
This was a multi-centre study involving 1376 patients in health care settings in Kenya. CD4 counts were determined by flow cytometry among HIV-infected individuals, and the results were analyzed against both the in-country and the new CD4 recommendations for initiation of antiretroviral treatment.
RESULTS
Across sites, 32% of individuals required antiretrovirals at a <200 CD4 baseline count, 40% at a <250 baseline count and 58% under the new criterion of <350 CD4 count. There were more females (68%) than males (32%). In contrast to the <200 and <250 CD4 baseline criteria, over 50% of all age groups required antiretrovirals at the 350 CD4 baseline. Age groups between 41 and 62 years showed the highest demand for ART.
CONCLUSION
With the new guidelines, demand for ARVs has more than doubled, with variations noted across regions and age groups. As a result, HIV care and treatment programs should prepare for this expansion if the benefits are to be realized. |
FDA approval: ceritinib for the treatment of metastatic anaplastic lymphoma kinase-positive non-small cell lung cancer. | On April 29, 2014, the FDA granted accelerated approval to ceritinib (ZYKADIA; Novartis Pharmaceuticals Corporation), a breakthrough therapy-designated drug, for the treatment of patients with anaplastic lymphoma kinase (ALK)-positive, metastatic non-small cell lung cancer (NSCLC) who have progressed on or are intolerant to crizotinib. The approval was based on a single-arm multicenter trial enrolling 163 patients with metastatic ALK-positive NSCLC who had disease progression on (91%) or intolerance to crizotinib. Patients received ceritinib at a starting dose of 750 mg orally once daily. The objective response rate (ORR) by a blinded independent review committee was 44% (95% CI, 36-52), and the median duration of response (DOR) was 7.1 months. The ORR by investigator assessment was similar. Safety was evaluated in 255 patients. The most common adverse reactions and laboratory abnormalities included diarrhea (86%), nausea (80%), increased alanine transaminase (80%), increased aspartate transaminase (75%), vomiting (60%), increased glucose (49%), and increased lipase (28%). Although 74% of patients required at least one dose reduction or interruption due to adverse reactions, the discontinuation rate due to adverse reactions was low (10%). With this safety profile, the benefit-risk analysis was considered favorable because of the clinically meaningful ORR and DOR. |
Ordering in cobalt-ferrous ferrites | Various cobalt-ferrous ferrites show a constricted hysteresis loop. After magnetic annealing of the samples the loop becomes rectangular. It appears that the magnetic annealing creates in each crystal a uniaxial anisotropy in a direction which is not necessarily the direction of the applied magnetic field but a crystallographic direction nearest to it. It is suggested that directional ordering is the most probable origin of the anisotropy found. |
Movement Imitation with Nonlinear Dynamical Systems in Humanoid Robots | This article presents a new approach to movement planning, on-line trajectory modification, and imitation learning by representing movement plans based on a set of nonlinear differential equations with well-defined attractor dynamics. In contrast to non-autonomous movement representations like splines, the resultant movement plan remains an autonomous set of nonlinear differential equations that forms a control policy (CP) which is robust to strong external perturbations and that can be modified on-line by additional perceptual variables. The attractor landscape of the control policy can be learned rapidly with a locally weighted regression technique with guaranteed convergence of the learning algorithm and convergence to the movement target. This property makes the system suitable for movement imitation and also for classifying demonstrated movement according to the parameters of the learning system. We evaluate the system with a humanoid robot simulation and an actual humanoid robot. Experiments are presented for the imitation of three types of movements: reaching movements with one arm, drawing movements of 2-D patterns, and tennis swings. Our results demonstrate (a) that multi-joint human movements can be encoded successfully by the CPs, (b) that a learned movement policy can readily be reused to produce robust trajectories towards different targets, (c) that a policy fitted for one particular target provides a good predictor of human reaching movements towards neighboring targets, and (d) that the parameter space which encodes a policy is suitable for measuring to which extent two trajectories are qualitatively similar. |
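A minimal sketch of the kind of attractor dynamics described above is shown below: a second-order system that pulls the state toward the goal, modulated by a phase-dependent forcing term whose basis-function weights would normally be fitted from a demonstration by locally weighted regression. The gains, basis functions and random weights are illustrative assumptions, not the article's exact formulation.

```python
import numpy as np

def dmp_rollout(g, x0, w, centers, widths, tau=1.0, dt=0.001,
                alpha=25.0, beta=6.25, alpha_s=4.0):
    """Integrate one movement primitive: a spring-damper pulling x toward goal g,
    perturbed by a learned forcing term f(s) that vanishes as the phase s decays."""
    x, v, s = x0, 0.0, 1.0
    traj = []
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)            # RBF basis on the phase
        f = (psi @ w) / (psi.sum() + 1e-10) * s * (g - x0)    # forcing term
        v += dt / tau * (alpha * (beta * (g - x) - v) + f)
        x += dt / tau * v
        s += dt / tau * (-alpha_s * s)                        # canonical (phase) system
        traj.append(x)
    return np.array(traj)

centers = np.linspace(0.0, 1.0, 10)
widths = np.full(10, 100.0)
w = np.random.default_rng(0).standard_normal(10) * 50.0  # would be fit by LWR from a demo
print(dmp_rollout(g=1.0, x0=0.0, w=w, centers=centers, widths=widths)[-1])  # ends near g
```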
Long-term nitroglycerin treatment is associated with supersensitivity to vasoconstrictors in men with stable coronary artery disease: prevention by concomitant treatment with captopril. | OBJECTIVES
We examined whether long-term nitroglycerin (NTG) treatment leads to an increase in sensitivity to vasoconstrictors. To assess a potential role of the renin-angiotensin system in mediating this phenomenon, we treated patients concomitantly with the angiotensin-converting enzyme (ACE) inhibitor captopril.
BACKGROUND
The anti-ischemic efficacy of organic nitrates is rapidly blunted by the development of nitrate tolerance. The underlying mechanisms are most likely multifactorial and may involve increased vasoconstrictor responsiveness.
METHODS
Forearm blood flow and vascular resistance were determined by using strain gauge plethysmography. The short-term responses to intraarterial angiotensin II (1, 3, 9 and 27 ng/min) and phenylephrine (an alpha-adrenergic agonist drug, 0.03, 0.1, 0.3 and 1 microg/min) were studied in 40 male patients with stable coronary artery disease. These patients were randomized into four groups receiving 48 h of treatment with NTG (0.5 microg/kg body weight per min) or placebo with or without the ACE inhibitor captopril (25 mg three times daily).
RESULTS
In patients treated with NTG alone, the maximal reductions in forearm blood flow in response to angiotensin II and phenylephrine were markedly greater (-64 +/- 3% and -53 +/- 4%, respectively) than those in patients receiving placebo (-41 +/- 2% and -42 +/- 2%, respectively). Captopril treatment completely prevented the NTG-induced hypersensitivity to angiotensin II and phenylephrine (-33 +/- 3% and -35 +/- 3%, respectively) but had no significant effect on blood flow responses in patients without NTG treatment (-34 +/- 2% and -37 +/- 3%, respectively).
CONCLUSIONS
We conclude that continuous administration of NTG is associated with an increased sensitivity to phenylephrine and angiotensin II that is prevented by concomitant treatment with captopril. The prevention of NTG-induced hypersensitivity to vasoconstrictors by ACE inhibition indicates an involvement of the renin-angiotensin system in mediating this phenomenon. |
Transport Properties of Charge Carriers in Single‐Walled Carbon Nanotubes by Flash‐Photolysis Time‐Resolved Microwave Conductivity Technique | Transport properties of charge carriers in single-walled carbon nanotubes (SWNTs) were investigated by the flash-photolysis time-resolved microwave conductivity (FP-TRMC) technique. With this technique, it is possible to monitor the change in conductivity upon pulsed laser excitation on a nanosecond timescale, without contacting the layer with electrodes. The FP-TRMC signal obtained from the SWNT sample is drastically larger than that of the catalyst alone. The dependence of φΣμ, the product of the quantum yield and the sum of the charge-carrier mobilities, on excitation wavelength was obtained, indicating variation in the transport properties of size-distributed SWNTs. |
A Survey on the Security of Hypervisors in Cloud Computing | This survey paper focuses on the security of hypervisors in the cloud. Topics covered include attacks that allow a malicious virtual machine (VM) to compromise the hypervisor, techniques used by malicious VMs to steal more than their allocated share of physical resources, and ways to bypass the isolation between VMs by using side channels to steal data. Also discussed are the security requirements and architectures that hypervisors need in order to defend successfully against such attacks. |
Audio-visual speech recognition using deep learning | Audio-visual speech recognition (AVSR) is thought to be one of the most promising solutions for reliable speech recognition, particularly when the audio is corrupted by noise. However, careful selection of sensory features is crucial for attaining high recognition performance. In the machine-learning community, deep learning approaches have recently attracted increasing attention because deep neural networks can effectively extract robust latent features that enable various recognition algorithms to demonstrate revolutionary generalization capabilities under diverse application conditions. This study introduces a connectionist-hidden Markov model (HMM) system for noise-robust AVSR. First, a deep denoising autoencoder is utilized for acquiring noise-robust audio features. By preparing the training data for the network as pairs of consecutive multiple steps of deteriorated audio features and the corresponding clean features, the network is trained to output denoised audio features from the corresponding features deteriorated by noise. Second, a convolutional neural network (CNN) is utilized to extract visual features from raw mouth area images. By preparing the training data for the CNN as pairs of raw images and the corresponding phoneme label outputs, the network is trained to predict phoneme labels from the corresponding mouth area input images. Finally, a multi-stream HMM (MSHMM) is applied for integrating the acquired audio and visual HMMs, independently trained with the respective features. By comparing the cases in which normal and denoised mel-frequency cepstral coefficients (MFCCs) are used as audio features for the HMM, our unimodal isolated word recognition results demonstrate that an approximately 65% gain in word recognition rate is attained with denoised MFCCs under 10 dB signal-to-noise ratio (SNR) conditions for the audio signal input. Moreover, our multimodal isolated word recognition results using the MSHMM with denoised MFCCs and the acquired visual features demonstrate that an additional word recognition rate gain is attained for SNR conditions below 10 dB. |
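A compact PyTorch sketch of the first component, the denoising autoencoder trained on pairs of noisy and clean feature windows, is given below. The feature dimensionality, context window and layer sizes are assumed for illustration, and the random tensors stand in for real MFCC windows.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 39-dim audio features, a context window of 11 consecutive frames.
feat_dim, context = 39, 11

class DenoisingAE(nn.Module):
    """Maps a window of noise-deteriorated audio features to the corresponding clean features."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim * context, 512), nn.ReLU(),
                                     nn.Linear(512, 128), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                                     nn.Linear(512, feat_dim * context))
    def forward(self, noisy):
        return self.decoder(self.encoder(noisy))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training pairs: features deteriorated by noise -> corresponding clean features.
noisy = torch.randn(32, feat_dim * context)   # placeholder batch of noisy windows
clean = torch.randn(32, feat_dim * context)   # placeholder batch of clean targets
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
```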
Communication Needs of Elderly at Risk of Falls and their Remote Family | The aging population experiences increased health risks, both physical and emotional. Two such risks are isolation and falling. This paper draws on HCI literature in these two independent research areas to explore the needs of family communication with elderly parents at risk of falls. We report on a study with 7 elderly parents and 3 adult children, as well as a group interview with 12 elderly people living in sheltered accommodation. Findings indicate important emotional needs on both sides: the adult children's anxiety about the wellbeing of their parents at risk of falls, and the elderly participants' need for autonomy and their appreciation of aesthetic design. We conclude with implications of these findings for designing for family communication in this challenging context. |
Impedance Control of an aerial-manipulator: Preliminary results | In this paper, an impedance control scheme for aerial robotic manipulators is proposed, with the aim of reducing the end-effector interaction forces with the environment. The proposed control has a multi-level architecture: the outer loop is composed of a trajectory generator and an impedance filter that modifies the trajectory to achieve a compliant behaviour in the end-effector space; a middle loop generates the joint-space variables through an inverse kinematics algorithm; finally, the inner loop ensures motion tracking. The proposed control architecture has been experimentally tested. |
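A minimal sketch of the outer-loop impedance filter idea follows, for a single end-effector axis: the measured contact force drives a mass-spring-damper deviation that is added to the desired trajectory, so the arm yields to contact instead of fighting it. The gains, sampling time and force profile are illustrative assumptions, not values from the paper.

```python
import numpy as np

def impedance_filter(x_des, f_ext, M=1.0, D=20.0, K=100.0, dt=0.002):
    """Modify a desired end-effector trajectory so it yields to measured contact
    forces:  M*dz'' + D*dz' + K*dz = f_ext,  x_cmd = x_des + dz."""
    dz = dz_dot = 0.0
    x_cmd = []
    for xd, f in zip(x_des, f_ext):
        dz_ddot = (f - D * dz_dot - K * dz) / M
        dz_dot += dz_ddot * dt
        dz += dz_dot * dt
        x_cmd.append(xd + dz)
    return np.array(x_cmd)

t = np.arange(0, 2, 0.002)
x_des = 0.3 * np.ones_like(t)                      # hold position 0.3 m
f_ext = np.where((t > 0.5) & (t < 1.0), 5.0, 0.0)  # 5 N contact force for 0.5 s
print(impedance_filter(x_des, f_ext).max())        # end-effector yields, then returns
```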
Spherical hashing | Many binary code encoding schemes based on hashing have been actively studied recently, since they can provide efficient similarity search, especially nearest neighbor search, and compact data representations suitable for handling large-scale image databases in many computer vision problems. Existing hashing techniques encode high-dimensional data points by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. Furthermore, we propose a new binary code distance function, spherical Hamming distance, that is tailored to our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve balanced partitioning of data points for each hash function and independence between hashing functions. Our extensive experiments show that our spherical hashing technique significantly outperforms six state-of-the-art hashing techniques based on hyperplanes across various image benchmarks of sizes ranging from one to 75 million GIST descriptors. The performance gains are consistent and large, up to 100% improvements. The excellent results confirm the unique merits of the proposed idea of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement. |
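A rough numpy sketch of hypersphere-based encoding follows: each bit records whether a point falls inside one hypersphere. The pivot and radius choice here (random pivots, median radii) is a crude stand-in for the iterative balancing optimization described above, and the distance function is written in an assumed form (differing bits divided by commonly set bits), so it should be read as an illustration rather than the paper's exact definition.

```python
import numpy as np

def spherical_encode(X, pivots, radii):
    """One bit per hypersphere: 1 if the point lies inside, 0 otherwise."""
    d = np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=-1)  # (n, c) distances
    return (d <= radii).astype(np.uint8)

def spherical_hamming_distance(b1, b2):
    """Assumed form of the sphere-tailored distance: differing bits divided by the
    number of spheres that contain both points."""
    common = np.logical_and(b1, b2).sum(axis=-1)
    differ = np.logical_xor(b1, b2).sum(axis=-1)
    return differ / np.maximum(common, 1)

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))
pivots = X[rng.choice(len(X), size=32, replace=False)]   # sphere centres (random stand-in)
radii = np.median(np.linalg.norm(X[:, None] - pivots[None], axis=-1), axis=0)  # ~balanced bits
codes = spherical_encode(X, pivots, radii)
print(spherical_hamming_distance(codes[0], codes[1:6]))
```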
Transcranial direct current stimulation (tDCS) for treatment of major depression during pregnancy: study protocol for a pilot randomized controlled trial | BACKGROUND
Women with depression in pregnancy are faced with difficult treatment decisions. Untreated, antenatal depression has serious negative implications for mothers and children. While antidepressant drug treatment is likely to improve depressive symptoms, it crosses the placenta and may pose risks to the unborn child. Transcranial direct current stimulation is a focal brain stimulation treatment that improves depressive symptoms within 3 weeks of treatment by inducing changes to brain areas involved in depression, without impacting any other brain areas, and without inducing changes to heart rate, blood pressure or core body temperature. The localized nature of transcranial direct current stimulation makes it an ideal therapeutic approach for treating depression during pregnancy, although it has never previously been evaluated in this population.
METHODS/DESIGN
We describe a pilot randomized controlled trial of transcranial direct current stimulation among women with depression in pregnancy to assess the feasibility of a larger, multicentre efficacy study. Women over 18 years of age and between 14 and 32 weeks gestation can be enrolled in the study provided they meet diagnostic criteria for a major depressive episode of at least moderate severity and have been offered but refused antidepressant medication. Participants are randomized to receive active transcranial direct current stimulation or a sham condition that is administered in 15 30-minute treatments over three weeks. Women sit upright during treatment and receive obstetrical monitoring prior to, during and after each treatment session. Depressive symptoms, treatment acceptability, and pregnancy outcomes are assessed at baseline (prior to randomization), at the end of each treatment week, every four weeks post-treatment until delivery, and at 4 and 12 weeks postpartum.
DISCUSSION
Transcranial direct current stimulation is a novel therapeutic option for treating depression during pregnancy. This protocol allows for assessment of the feasibility of, acceptability of and adherence with a clinical trial protocol to administer this treatment to pregnant women with moderate to severe depression. Results from this pilot study will guide the development of a larger multicentre trial to definitively test the efficacy and safety of transcranial direct current stimulation for pregnant women with depression.
TRIAL REGISTRATION
Clinical Trials Gov NCT02116127. |
Genomic changes during evolution of animal parasitism in eukaryotes. | Understanding how pathogens have evolved to survive in close association with their hosts is an important step in unraveling the biology of host-pathogen interactions. Comparative genomics is a powerful tool to approach this problem as an increasing number of genomes of multiple pathogen species and strains become available. The ever-growing catalog of genome sequences makes comparison of organisms easier, but it also allows us to reconstitute the evolutionary processes occurring at the genomic level that may have led to the acquisition of pathogenic or parasitic mechanisms. |
Loopy Belief Propagation: Convergence and Effects of Message Errors | Belief propagation (BP) is an increasingly popular method of performing approximate inference on arbitrary graphical models. At times, even further approximations are required, whether due to quantization of the messages or model parameters, from other simplified message or model representations, or from stochastic approximation methods. The introduction of such errors into the BP message computations has the potential to affect the solution obtained adversely. We analyze the effect resulting from message approximation under two particular measures of error, and show bounds on the accumulation of errors in the system. This analysis leads to convergence conditions for traditional BP message passing, and both strict bounds and estimates of the resulting error in systems of approximate BP message passing. |
Dysproteinemias and Glomerular Disease. | Dysproteinemia is characterized by the overproduction of an Ig by clonal expansion of cells from the B cell lineage. The resultant monoclonal protein can be composed of the entire Ig or its components. Monoclonal proteins are increasingly recognized as a contributor to kidney disease. They can cause injury in all areas of the kidney, including the glomerular, tubular, and vascular compartments. In the glomerulus, the major mechanism of injury is deposition. Examples of this include Ig amyloidosis, monoclonal Ig deposition disease, immunotactoid glomerulopathy, and cryoglobulinemic GN, specifically from types 1 and 2 cryoglobulins. Mechanisms that do not involve Ig deposition include activation of the complement system, which causes complement deposition in C3 glomerulopathy, cytokines/growth factors as seen in thrombotic microangiopathy, and precipitation, which is involved in cryoglobulinemia. It is important to recognize that nephrotoxic monoclonal proteins can be produced by clones from any of the B cell lineages and that a malignant state is not required for the development of kidney disease. The nephrotoxic clones that do not meet the requirements for a malignant condition are now termed monoclonal gammopathy of renal significance. Whether it is a malignancy or monoclonal gammopathy of renal significance, preservation of renal function requires substantial reduction of the monoclonal protein. With better understanding of the pathogenesis, clone-directed strategies, such as rituximab against CD20-expressing B cell clones and bortezomib against plasma cell clones, have been used in the treatment of these diseases. These clone-directed therapies have been found to be more effective than the immunosuppressive regimens used in non-monoclonal protein-related kidney diseases. |
Homography estimation using one ellipse correspondence and minimal additional information | In sport scenarios such as football or basketball, we often deal with central views where only the central circle and a few additional primitives, such as the central line and the central point or a touch line, are visible. In this paper we first characterize, from a mathematical point of view, the set of homographies that project a given ellipse into the unit circle; next, using minimal additional information, such as knowledge of the position in the image of the central line and central point or of a touch line, we show a method to fully determine the plane homography. We present experiments in sport scenarios to show the ability of the proposed method to properly recover the plane homography. |
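A minimal numpy sketch of the algebra behind the first step is given below, assuming the ellipse is supplied as a 3x3 conic matrix C: one homography mapping it to the unit circle can be read off an eigendecomposition, and the full set consists of that homography composed with any circle-preserving homography. This only illustrates the characterization; it is not the paper's procedure for fixing the remaining degrees of freedom from the extra primitives.

```python
import numpy as np

def ellipse_to_unit_circle(C):
    """Return one homography H mapping the ellipse x^T C x = 0 to the unit circle
    u^T diag(1,1,-1) u = 0. Any other solution has the form R @ H with R a
    homography preserving the unit circle."""
    C = np.asarray(C, dtype=float)
    C = 0.5 * (C + C.T)                      # symmetrize
    lam, V = np.linalg.eigh(C)               # eigenvalues in ascending order
    if (lam > 0).sum() == 1:                 # normalize the signature to (+, +, -)
        lam, V = -lam, V
    order = np.argsort(-lam)                 # two positive eigenvalues first
    lam, V = lam[order], V[:, order]
    H = np.diag(np.sqrt(np.abs(lam))) @ V.T
    return H                                 # satisfies H.T @ diag(1,1,-1) @ H ∝ C

# Example: axis-aligned ellipse (x/2)^2 + (y/3)^2 = 1 -> conic diag(1/4, 1/9, -1).
C = np.diag([1 / 4, 1 / 9, -1.0])
H = ellipse_to_unit_circle(C)
p = np.array([2.0, 0.0, 1.0])                # a point on the ellipse
q = H @ p
print((q[0] ** 2 + q[1] ** 2) / q[2] ** 2)   # ~1.0: its image lies on the unit circle
```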
A randomised phase 2 study combining LY2181308 sodium (survivin antisense oligonucleotide) with first-line docetaxel/prednisone in patients with castration-resistant prostate cancer. | Castration-resistant prostate cancer (CRPC) is partially characterised by overexpression of antiapoptotic proteins, such as survivin. In this phase 2 study, patients with metastatic CRPC (n=154) were randomly assigned (1:2 ratio) to receive standard first-line docetaxel/prednisone (control arm) or the combination of LY2181308 with docetaxel/prednisone (experimental arm). The primary objective was to estimate progression-free survival (PFS) for LY2181308 plus docetaxel. Secondary efficacy measures included overall survival (OS), several predefined prostate-specific antigen (PSA)-derived end points, and Brief Pain Inventory (BPI) and Functional Assessment of Cancer Therapy-Prostate (FACT-P) scores. The median PFS of treated patients for the experimental arm (n=98) was 8.64 mo (90% confidence interval [CI], 7.39-10.45) versus 9.00 mo (90% CI, 7.00-10.09) in the control arm (n=51; p=0.755). The median OS for the experimental arm was 27.04 mo (90% CI, 19.94-33.41) compared with 29.04 mo (90% CI, 20.11-39.26; p=0.838). The PSA responses (≥ 50% PSA reduction), BPI, and FACT-P scores were similar in both arms. In the experimental arm, patients had a numerically higher incidence of grades 3-4 neutropenia, anaemia, thrombocytopenia, and sensory neuropathy. In conclusion, this study failed to detect a difference in efficacy between the two treatment groups. |
Cloud-Assisted IoT-Based SCADA Systems Security: A Review of the State of the Art and Future Challenges | Industrial systems always seek to reduce their operational expenses. To support such reductions, they need solutions that are capable of providing stability, fault tolerance, and flexibility. One such solution for industrial systems is cyber physical system (CPS) integration with the Internet of Things (IoT) utilizing cloud computing services. These CPSs can be considered smart industrial systems, with their most prevalent applications in smart transportation, smart grids, smart medical and eHealthcare systems, and many more. These industrial CPSs mostly utilize supervisory control and data acquisition (SCADA) systems to control and monitor their critical infrastructure (CI). For example, WebSCADA is an application used for smart medical technologies, making improved patient monitoring and more timely decisions possible. The focus of the study presented in this paper is to highlight the security challenges that industrial SCADA systems face in an IoT-cloud environment. Classical SCADA systems already lack proper security measures; with the integration of complex new architectures for the future Internet based on the concepts of IoT, cloud computing, mobile wireless sensor networks, and so on, much more is at stake in the security and deployment of these classical systems. Therefore, the integration of these future Internet concepts needs more research effort. Along with highlighting the security challenges of these CIs, this paper also provides existing best practices and recommendations for improving and maintaining security. Finally, it briefly describes future research directions to secure these critical CPSs and help the research community identify the research gaps in this regard. |
Crowd-sourcing NLG Data: Pictures Elicit Better Data | Recent advances in corpus-based Natural Language Generation (NLG) hold the promise of being easily portable across domains, but require costly training data consisting of meaning representations (MRs) paired with Natural Language (NL) utterances. In this work, we propose a novel framework for crowdsourcing high-quality NLG training data, using automatic quality control measures and evaluating different MRs with which to elicit data. We show that pictorial MRs result in better NL data being collected than logic-based MRs: utterances elicited by pictorial MRs are judged as significantly more natural, more informative, and better phrased, with a significant increase in average quality ratings (around 0.5 points on a 6-point scale) compared to using the logical MRs. As the MR becomes more complex, the benefits of pictorial stimuli increase. The collected data will be released as part of this submission. |
Learning Policy Representations in Multiagent Systems | Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by hand-engineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a small amount of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging high-dimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning. |
Parameter design of voltage balancing circuit for series connected HV-IGBTs | Applying power semiconductor devices in series connection is a direct and effective scheme to improve the voltage rating and power rating of a power electronic converter. The key issue in device series connection is voltage balancing in both the static and dynamic switching states. The active clamping circuit senses overvoltage on the insulated gate bipolar transistor (IGBT), injects current into the gate, and thus increases the gate voltage and suppresses the overvoltage effectively. In this paper, a voltage balancing circuit for high voltage IGBT (HV-IGBT) series connection is presented. The voltage balancing circuit is composed of a static-state voltage balancing sub-circuit across collector and emitter, a dynamic-state voltage balancing sub-circuit across collector and emitter, and an active clamping sub-circuit across collector and gate. The functions and principles of the three sub-circuits are then described. Besides the topology, the parameters of the voltage balancing circuit strongly influence the voltage balance of series-connected HV-IGBTs. Based on quantitative analysis of the relationships among the parameters of the HV-IGBTs, the parameters of the voltage balancing circuit and the voltage balancing indexes of the series-connected devices, a parameter design method for the presented voltage balancing circuit is proposed. This parameter design method comprehensively considers the switching loss of the HV-IGBT, the loss of the voltage balancing circuit, the electrical stress on the HV-IGBT and the switching frequency, in order to guarantee the efficiency, reliability and performance of the system. The proposed parameter design method was applied in the development of a series connection circuit with Infineon 6500V/600A HV-IGBTs. Experimental results verify the validity and feasibility of the proposed method. |
Long-Term Complications of Polyethylene Glycol Injection to the Face | Currently, filling, smoothing, or recontouring the face through the use of injectable fillers is one of the most popular forms of cosmetic surgery. Because these materials promise a more youthful appearance without anesthesia in a noninvasive way, various fillers have been used widely in different parts of the world. However, most of these fillers have not been approved by the Food and Drug Administration, and their applications might cause unpleasant disfiguring complications. This report describes a case of foreign body granuloma in the cheeks secondary to polyethylene glycol injection and shows the possible complications associated with the use of facial fillers. |
On the Interpolation of Data with Normally Distributed Uncertainty for Visualization | In many fields of science or engineering, we are confronted with uncertain data. For that reason, the visualization of uncertainty has received a lot of attention, especially in recent years. In the majority of cases, Gaussian distributions are used to describe uncertain behavior, because they are able to model many phenomena encountered in science. Therefore, in most applications uncertain data is (or is assumed to be) Gaussian distributed. If such uncertain data is given at fixed positions, the question of interpolation arises for many visualization approaches. In this paper, we analyze the effects of the usual linear interpolation schemes for visualization of Gaussian distributed data. In addition, we demonstrate that methods known in geostatistics and machine learning have favorable properties for visualization purposes in this case. |
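To make the setting concrete, the sketch below shows the basic fact such an analysis starts from: a linear interpolation of Gaussian-distributed samples is again Gaussian, and when the samples are independent its variance dips between the sample positions. This is a generic illustration under an independence assumption, not a result taken from the paper.

```python
import numpy as np

def lerp_gaussian(mu1, var1, mu2, var2, t, cov12=0.0):
    """Linear interpolation Y = (1-t)*X1 + t*X2 of two jointly Gaussian values.
    Y is again Gaussian; its variance dips between the samples when cov12 = 0."""
    mu = (1 - t) * mu1 + t * mu2
    var = (1 - t) ** 2 * var1 + t ** 2 * var2 + 2 * t * (1 - t) * cov12
    return mu, var

for t in np.linspace(0, 1, 5):
    print(t, lerp_gaussian(mu1=0.0, var1=1.0, mu2=2.0, var2=1.0, t=t))
# At t = 0.5 the interpolated variance is only 0.5: the "overconfidence" between
# samples that motivates the geostatistics-style interpolants discussed above.
```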
Multi-class Generalized Binary Search for Active Inverse Reinforcement Learning | This paper addresses the problem of learning a task from demonstration. We adopt the framework of inverse reinforcement learning, where tasks are represented in the form of a reward function. Our contribution is a novel active learning algorithm that enables the learning agent to query the expert for more informative demonstrations, thus leading to more sample-efficient learning. For this novel algorithm (Generalized Binary Search for Inverse Reinforcement Learning, or GBS-IRL), we provide a theoretical bound on sample complexity and illustrate its applicability on several different tasks. To our knowledge, GBS-IRL is the first active IRL algorithm with provable sample complexity bounds. We also discuss our method in light of other existing methods in the literature and its general applicability in multi-class classification problems. Finally, motivated by recent work on learning from demonstration in robots, we also discuss how different forms of human feedback can be integrated in a transparent manner in our learning framework. |
Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science | As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts. In this paper, we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning--pipeline design. We implement an open source Tree-based Pipeline Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a series of simulated and real-world benchmark data sets. In particular, we show that TPOT can design machine learning pipelines that provide a significant improvement over a basic machine learning analysis while requiring little to no input or prior knowledge from the user. We also address the tendency of TPOT to design overly complex pipelines by integrating Pareto optimization, which produces compact pipelines without sacrificing classification accuracy. As such, this work represents an important step toward fully automating machine learning pipeline design. |
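A minimal usage sketch of the open-source TPOT package follows; the constructor arguments are the commonly documented ones and may vary across versions, and the small search budget is purely illustrative.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X_train, X_test, y_train, y_test = train_test_split(*load_digits(return_X_y=True),
                                                    random_state=42)

# Small search budget for illustration; larger generations/population_size values
# let the evolutionary search explore more pipeline structures (and take longer).
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('best_pipeline.py')   # writes the winning pipeline as plain scikit-learn code
```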
Object Constraint Language (OCL): A Definitive Guide | The Object Constraint Language (OCL) started as a complement to the UML notation, with the goal of overcoming the limitations of UML (and, in general, of any graphical notation) in terms of precisely specifying detailed aspects of a system design. Since then, OCL has become a key component of any model-driven engineering (MDE) technique as the default language for expressing all kinds of (meta)model query, manipulation and specification requirements. Among many other applications, OCL is frequently used to express model transformations (as part of the source and target patterns of transformation rules), well-formedness rules (as part of the definition of new domain-specific languages), or code-generation templates (as a way to express the generation patterns and rules). This chapter aims to provide a comprehensive view of this language, its many applications and available tool support, as well as the latest research developments and open challenges around it. |
Attention Based CLDNNs for Short-Duration Acoustic Scene Classification | Recently, neural networks with deep architectures have been widely applied to acoustic scene classification. Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) have shown improvements over fully connected Deep Neural Networks (DNNs). Motivated by the fact that CNNs, LSTMs and DNNs are complementary in their modeling capability, we apply the CLDNN (Convolutional, Long Short-Term Memory, Deep Neural Network) framework to short-duration acoustic scene classification in a unified architecture. The CLDNNs take advantage of frequency modeling with CNNs, temporal modeling with LSTMs, and discriminative training with DNNs. Based on the CLDNN architecture, several novel attention-based mechanisms are proposed and applied on the LSTM layer to predict the importance of each time step. We evaluate the proposed method on the truncated version of the 2016 TUT acoustic scenes dataset, which consists of recordings from 15 different scenes. By using CLDNNs with a bidirectional LSTM, we achieve higher performance than conventional neural network architectures. Moreover, by combining the attention-weighted output with the LSTM final time step output, a significant further improvement can be achieved. |
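A compact PyTorch sketch of such a unified architecture is given below: a CNN front-end, a bidirectional LSTM, a learned attention weight per time step, and a DNN classifier on the attention-weighted summary. The layer sizes, the 15-class output and the input shape are illustrative assumptions, and the simple linear scoring layer is only one of several possible attention variants.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCLDNN(nn.Module):
    """CNN front-end -> bidirectional LSTM -> attention over time steps -> DNN classifier."""
    def __init__(self, n_mels=40, n_classes=15):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)))                 # pool frequency, keep time
        self.lstm = nn.LSTM(32 * (n_mels // 2), 128, batch_first=True, bidirectional=True)
        self.att = nn.Linear(256, 1)                           # scalar score per time step
        self.dnn = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, x):                  # x: (batch, 1, n_mels, time)
        h = self.cnn(x)                    # (batch, 32, n_mels/2, time)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)
        h, _ = self.lstm(h)                # (batch, time, 256)
        w = F.softmax(self.att(h), dim=1)  # attention weights over time steps
        context = (w * h).sum(dim=1)       # attention-weighted summary of the sequence
        return self.dnn(context)

model = AttentionCLDNN()
logits = model(torch.randn(8, 1, 40, 100))   # 8 clips, 40 mel bands, 100 frames
print(logits.shape)                          # torch.Size([8, 15])
```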
CrowdInside: automatic construction of indoor floorplans | The existence of a worldwide indoor floorplan database can lead to significant growth in location-based applications, especially for indoor environments. In this paper, we present CrowdInside: a crowdsourcing-based system for the automatic construction of building floorplans. CrowdInside leverages the smartphone sensors that are ubiquitously carried by the people who use a building to automatically and transparently construct accurate motion traces. These accurate traces are generated based on a novel technique for reducing the errors in the inertial motion traces by using points of interest in the indoor environment, such as elevators and stairs, for error resetting. The collected traces are then processed to detect the overall floorplan shape as well as higher-level semantics, such as room and corridor shapes, along with a variety of points of interest in the environment.
Implementation of the system in two testbeds, using different Android phones, shows that CrowdInside can detect the points of interest accurately with 0.2% false positive rate and 1.3% false negative rate. In addition, the proposed error resetting technique leads to more than 12 times enhancement in the median distance error compared to the state-of-the-art. Moreover, the detailed floorplan can be accurately estimated with a relatively small number of traces. This number is amortized over the number of users of the building. We also discuss possible extensions to CrowdInside for inferring even higher level semantics about the discovered floorplans. |
Novel Radiation-Hardened-by-Design (RHBD) 12T Memory Cell for Aerospace Applications in Nanoscale CMOS Technology | In this paper, a novel radiation-hardened-by-design (RHBD) 12T memory cell is proposed to tolerate single-node and multiple-node upsets, based on the physical mechanisms behind soft errors together with a reasonable layout topology. The verification results confirm that the proposed 12T cell provides good radiation robustness. Compared with the 13T cell, the area, power, and read/write access time overheads of the proposed 12T cell are −18.9%, −23.8%, and 171.6%/−50.0%, respectively. Moreover, its hold static noise margin is 986.2 mV, which is higher than that of the 13T cell. This means that the proposed 12T cell also has higher stability while providing fault tolerance capability. |
Comparison of efficacy of selamectin, ivermectin and mebendazole for the control of gastrointestinal nematodes in rhesus macaques, China. | An experiment was conducted to evaluate the efficacy of ivermectin and mebendazole compared with selamectin against gastrointestinal nematodes in rhesus macaques. A total of 60 rhesus macaques (Macaca mulatta), all infected with gastrointestinal nematodes, were randomly assigned to three treatment groups (selamectin, ivermectin and mebendazole) and one control group. Fecal samples for determining nematode egg counts were collected pre- and post-treatment. All treatments resulted in a decrease in the number of eggs per gram (EPG) in the post-treatment sample compared with the pre-treatment sample. Reductions of mean egg counts from day -3 levels on trial day 11 were 99.4% for selamectin, 99.2% for ivermectin and 99.4% for mebendazole. However, no significant difference was found among treatment groups. Given these data demonstrating similar efficacy for selamectin-, ivermectin- and mebendazole-treated rhesus macaques, it is effective and convenient to apply selamectin, ivermectin or mebendazole in rotation, depending on local conditions. |
Study of the bronchodilating effect of three doses of nebulized oxitropium bromide in asthmatic preschool children using the forced oscillation technique | The aim of this study was to evaluate the bronchodilating capacity of nebulized oxitropium bromide (OB) in preschool asthmatic children and to determine an appropriate dose for usage in this age group. The trial enrolled 20 patients with moderate to severe stable asthma aged between 3.2 and 6.2 years (mean 4.7). Applying a placebo-controlled, double-blind design, the effect of placebo was compared with three different doses of OB (375, 750 and 1500 μg) and with 400 μg fenoterol. The three different doses of OB resulted in a highly significant bronchodilation within 15 min after administration. The observed bronchodilation was comparable between the three doses during the first 2 h. However, after 4 h the lowest dose was significantly less powerful than the highest dose. Compared to the additional bronchodilation induced by fenoterol, no difference was found in the degree of bronchodilation of OB during the first 2 h. Furthermore, after 4 h only the lowest dose of OB was significantly less powerful than fenoterol assessed 10 min following a single 400 μg dose. Conclusion: Oxitropium bromide is a potent and long-acting bronchodilator in preschool children at doses of 750 μg and 1500 μg. No side-effects were observed. The exact duration of action remains uncertain, but even 4 h after inhaling 750 or 1500 μg of OB no additive bronchodilation induced by fenoterol could be observed. |
MauveDB: supporting model-based user views in database systems | Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications. The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data. Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store. In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views. Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system. |
Fast ray-tracing of human eye optics on Graphics Processing Units | We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth of field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show an application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. |
S^3FD: Single Shot Scale-Invariant Face Detector | This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S3FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on an Nvidia Titan X (Pascal) for VGA-resolution images. |
Pharmacokinetics, bioavailability and opioid effects of liquid versus tablet buprenorphine. | AIMS
Two tablet formulations of buprenorphine (a buprenorphine mono-product, Subutex, and a buprenorphine/naloxone combination product, Suboxone) are available for use in the treatment of opioid addiction; however, the bulk of the clinical studies supporting its approval by the US Food and Drug Administration (FDA) were conducted with a sublingual liquid preparation. To assist the clinician in interpreting the relevant literature in establishing dosing parameters for prescription of tablet buprenorphine, this study was designed to compare the steady state: (1) pharmacokinetics and bioavailability, and (2) physiological, subjective and objective opiate effects of two 8 mg buprenorphine tablets (16 mg) to those of 1 ml (8 mg/ml) buprenorphine solution based upon early reports suggesting that the bioavailability of the tablet was approximately 50% of that of the liquid.
DESIGN
Randomized, open-label, two-way crossover study.
SETTING
Inpatient hospitalization for 21 days.
PARTICIPANTS
Twenty-four males and females in good general health and meeting DSM-IV criteria for opiate dependence.
INTERVENTION
Subjects received one of the two buprenorphine formulations in the first 10-day period, and the other for the second 10-day period with no washout.
MEASUREMENTS
Pharmacokinetic analyses, opiate effects and adverse events.
FINDINGS
Drug steady state was reached by Day 7 of each 10-day period, and the area under the curve for the 16 mg (two 8 mg) tablets was higher than that of the solution. The only non-kinetic statistically significant difference observed between the formulations was in changes in total opioid agonist score.
CONCLUSIONS
The serum concentration achieved by 16 mg of tablet buprenorphine is higher than that of the 8 mg solution, although differences between physiologic, subjective and objective opioid effects were not noted. The relative bioavailability of tablet versus solution is estimated to be 0.71; thus, with respect to dosing parameters for the tablet, clinicians should consider using less than 16 mg to achieve bioequivalence to the 8 mg solution. |
Optimisation of the SHA-2 family of hash functions on FPGAs | Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date |
Body Centred Interaction in Immersive Virtual Environments | The technology to immerse people in computer generated worlds was proposed by Sutherland in 1965, and realised in 1968 with a head-mounted display that could present a user with a stereoscopic 3-dimensional view slaved to a sensing device tracking the user's head movements (Sutherland 1965; 1968). The views presented at that time were simple wire frame models. The advance of computer graphics knowledge and technology, itself tied to the enormous increase in processing power and decrease in cost, together with the development of relatively efficient and unobtrusive sensing devices, has led to the emergence of participatory immersive virtual environments, commonly referred to as "virtual reality" (VR) (Fisher 1982; Fisher et. al. 1986; Teitel 1990; see also SIGGRAPH Panel Proceedings 1989,1990). Ellis defines virtualisation as "the process by which a human viewer interprets a patterned sensory impression to be an extended object in an environment other than that in which it physically exists" (Ellis, 1991). In this definition the idea is taken from geometric optics, where the concept of a "virtual image" is precisely defined, and is well understood. In the context of virtual reality the "patterned sensory impressions" are generated to the human senses through visual, auditory, tactile and kinesthetic displays, though systems that effectively present information in all such sensory modalities do not exist at present. Ellis further distinguishes between a virtual space, image and environment. An example of the first is a flat surface on which an image is rendered. Perspective depth cues, texture gradients, occlusion, and other similar aspects of the image lead to an observer perceiving |
A pre-crash discrimination system for an airbag deployment algorithm | The airbag system has become standard safety equipment for vehicles and is effective at enhancing the safety of the driver and passengers in a crash. However, inadvertent injuries have been caused by airbag deployment at rough and uncertain timing. In this paper, a pre-crash discrimination system is proposed to prevent airbag deployment malfunction. The system consists of the radar sensor of an ACC system and the vehicle state sensors of a VDC system. The pre-crash information includes crash probability, time-to-crash and crash type. Using this information, the host vehicle recognizes the crash situation and the airbags are deployed accurately at the predefined moment for each crash situation. |
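A toy sketch of the kind of pre-crash quantity involved follows: time-to-crash computed from radar range and closing speed, combined with a simple in-path check. The thresholds and decision rule are illustrative assumptions; the actual system fuses radar and vehicle-dynamics signals and also classifies the crash type.

```python
import math

def time_to_crash(rel_distance_m, closing_speed_mps):
    """Time-to-crash from radar range and closing speed; inf if the gap is opening."""
    return rel_distance_m / closing_speed_mps if closing_speed_mps > 0 else math.inf

def pre_crash_decision(rel_distance_m, closing_speed_mps, lateral_offset_m,
                       ttc_threshold_s=0.2, lane_half_width_m=1.5):
    """Rough illustration: flag an imminent crash only if the target overlaps the
    host's path and the time-to-crash falls below an assumed deployment threshold."""
    ttc = time_to_crash(rel_distance_m, closing_speed_mps)
    in_path = abs(lateral_offset_m) < lane_half_width_m
    return in_path and ttc < ttc_threshold_s, ttc

imminent, ttc = pre_crash_decision(rel_distance_m=3.0, closing_speed_mps=20.0,
                                   lateral_offset_m=0.2)
print(imminent, round(ttc, 3))   # True, 0.15 s -> arm the deployment logic
```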
The Delayed D* Algorithm for Efficient Path Replanning | Mobile robots are often required to navigate environments for which prior maps are incomplete or inaccurate. In such cases, initial paths generated for the robots may need to be amended as new information is received that is in conflict with the original maps. The most widely used algorithm for performing this path replanning is Focussed Dynamic A* (D*), which is a generalization of A* for dynamic environments. D* has been shown to be up to two orders of magnitude faster than planning from scratch. In this paper, we present a new replanning algorithm that generates equivalent paths to D* while requiring about half its computation time. Like D*, our algorithm incrementally repairs previous paths and focusses these repairs towards the current robot position. However, it performs these repairs in a novel way that leads to improved efficiency. |
Trade-off between water transport efficiency and leaf life-span in a tropical dry forest | Drought-deciduous and evergreen species coexist in tropical dry forests. Drought-deciduous species must cope with greater seasonal leaf water-potential fluctuations than evergreen species, and this may increase their susceptibility to drought-induced xylem embolism. The relationship between water transport efficiency and leaf life-span was determined for both groups. They differed in seasonal changes of both wood water content (Wc) and wood specific gravity (G). During the dry season, the Wc in drought-deciduous species declined and the minimum value was recorded when leaf fall was complete. At this time, the volumetric fraction of gas (Vg) increased, indicating air entry into xylem vessels. In contrast, Wc, G and Vg changed only slightly throughout the year for evergreen species. Maximum hydraulic conductivity of drought-deciduous species was 2–6 times that of the evergreen species, but was severely reduced at leaf fall. In the evergreen species, similar water conductivities were measured during wet and dry seasons. The trade-off between xylem water transport capacity and leaf life-span found in species coexisting in this forest reveals the existence of contrasting but successful adaptations to this environment. Drought-deciduous species maximize production in the short term with higher water transport efficiency, which leads to the seasonal occurrence of embolisms. Conversely, the behaviour of evergreen species, with reduced maximum efficiency, is conservative but safe with respect to xylem embolism. |
Hidden Markov model-based speech emotion recognition | In this contribution we introduce speech emotion recognition by use of continuous hidden Markov models. Two methods are proposed and compared throughout the paper. In the first method, a global statistics framework of an utterance is classified by Gaussian mixture models using features derived from the raw pitch and energy contour of the speech signal. The second method introduces increased temporal complexity, applying continuous hidden Markov models with several states and using low-level instantaneous features instead of global statistics. The paper addresses the design of working recognition engines and the results achieved with the two alternatives. A speech corpus consisting of acted and spontaneous emotion samples in German and English is described in detail. Both engines were trained and tested on this corpus. Recognition of seven discrete emotions exceeded an 86% recognition rate. As a basis for comparison, human judges classifying the same corpus achieved a 79.8% recognition rate. |
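As a rough illustration of the second approach described above (continuous HMMs over low-level frame features), the sketch below trains one Gaussian HMM per emotion with the hmmlearn library and classifies an utterance by maximum log-likelihood. The emotion label set, the number of HMM states, and the random placeholder features are assumptions; this is not the paper's recognition engine.

```python
# Illustrative sketch (not the paper's engine): one continuous HMM per
# emotion over frame-level features, classification by max log-likelihood.
# Real pitch/energy extraction is replaced by random placeholder features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

EMOTIONS = ["anger", "joy", "neutral"]   # assumed label subset
rng = np.random.default_rng(0)


def fake_utterances(n, n_frames=80, n_feats=2):
    """Placeholder for real low-level pitch/energy feature sequences."""
    return [rng.normal(size=(n_frames, n_feats)) for _ in range(n)]


# Train one HMM per emotion on that emotion's training utterances.
models = {}
for emo in EMOTIONS:
    utts = fake_utterances(5)
    X = np.vstack(utts)                  # stacked frames
    lengths = [len(u) for u in utts]     # per-utterance frame counts
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(X, lengths)
    models[emo] = m


def classify(utterance):
    """Return the emotion whose HMM scores the utterance highest."""
    return max(EMOTIONS, key=lambda e: models[e].score(utterance))


print(classify(fake_utterances(1)[0]))
```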
Automatically extracting information needs from complex clinical questions | OBJECTIVE
Clinicians pose complex clinical questions when seeing patients, and identifying the answers to those questions in a timely manner helps improve the quality of patient care. We report here on two natural language processing models, namely, automatic topic assignment and keyword identification, that together automatically and effectively extract information needs from ad hoc clinical questions. Our study is motivated in the context of developing the larger clinical question answering system AskHERMES (Help clinicians to Extract and aRticulate Multimedia information for answering clinical quEstionS).
DESIGN AND MEASUREMENTS
We developed supervised machine-learning systems to automatically assign predefined general categories (e.g. etiology, procedure, and diagnosis) to a question. We also explored both supervised and unsupervised systems to automatically identify keywords that capture the main content of the question.
RESULTS
We evaluated our systems on 4654 annotated clinical questions that were collected in practice. We achieved an F1 score of 76.0% for the task of general topic classification and 58.0% for keyword extraction. Our systems have been integrated into the larger question answering system AskHERMES. Our error analyses suggested that inconsistent annotation in our training data has hurt both question analysis tasks.
CONCLUSION
Our systems, available at http://www.askhermes.org, can automatically extract information needs from both short questions (<20 word tokens) and long questions (>20 word tokens), and from both well-structured and ill-formed questions. We speculate that the performance of general topic classification and keyword extraction can be further improved if consistently annotated data are made available. |
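A minimal sketch of the two question-analysis steps described in the AskHERMES entry above: supervised assignment of a general topic and unsupervised keyword identification. The tiny example questions, the topic labels, and the TF-IDF plus logistic-regression pipeline are illustrative assumptions rather than the paper's actual models or data.

```python
# Illustrative sketch of question analysis: (i) supervised topic
# assignment, (ii) unsupervised keyword identification. The tiny data,
# labels, and model choices are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "what causes iron deficiency anemia in adults",
    "how do I perform a lumbar puncture safely",
    "which test confirms a diagnosis of lyme disease",
    "what is the likely cause of chronic cough in a smoker",
]
topics = ["etiology", "procedure", "diagnosis", "etiology"]

# (i) General topic assignment: TF-IDF features + linear classifier.
topic_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
topic_clf.fit(questions, topics)
print(topic_clf.predict(["what causes night sweats in elderly patients"]))

# (ii) Unsupervised keyword identification: keep the question's terms
# with the highest TF-IDF weight.
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(questions)
terms = vec.get_feature_names_out()
weights = tfidf[1].toarray().ravel()          # second question
top = weights.argsort()[::-1][:3]
print([terms[i] for i in top if weights[i] > 0])
```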
Thiabendazole for the treatment of strongyloidiasis in patients with hematologic malignancies. | A total of 21 patients with hematologic malignancies were given thiabendazole for treatment of strongyloidiasis. Fifteen patients were cured. Since there were no relapses, it is unlikely that maintenance therapy has a role in the management of strongyloidiasis in this population of patients. |
Low-Voltage Super class AB CMOS OTA cells with very high slew rate and power efficiency | A simple technique to achieve low-voltage power-efficient class AB operational transconductance amplifiers (OTAs) is presented. It is based on the combination of class AB differential input stages and local common-mode feedback (LCMFB), which provides additional dynamic current boosting, increased gain-bandwidth product (GBW), and near-optimal current efficiency. LCMFB is applied to various class AB differential input stages, leading to different class AB OTA topologies. Three OTA realizations based on this technique have been fabricated in a 0.5-µm CMOS technology. For an 80-pF load they show enhancement factors of slew rate and GBW of up to 280 and 3.6, respectively, compared to a conventional class A OTA with the same 10-µA quiescent currents and ±1-V supply voltages. In addition, the overhead in terms of common-mode input range, output swing, silicon area, noise, and static power consumption is minimal. |
Neural Discourse Modeling of Conversations | Deep neural networks have shown recent promise in many language-related tasks such as the modeling of conversations. We extend RNN-based sequence-to-sequence models to capture the long-range discourse across many turns of conversation. We perform a sensitivity analysis on how much additional context affects performance, and provide quantitative and qualitative evidence that these models are able to capture discourse relationships across multiple utterances. Our results quantify how adding an additional RNN layer for modeling discourse improves the quality of output utterances and how providing more of the previous conversation as input also improves performance. By searching the generated outputs for specific discourse markers, we show how neural discourse models can exhibit increased coherence and cohesion in conversations. |
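The sensitivity analysis mentioned above varies how much of the previous conversation is provided as input. The toy helper below sketches that idea by concatenating the last k turns into a single source sequence for a seq2seq model; the separator token and turn format are assumptions, not details from the paper.

```python
# Illustrative helper: build seq2seq input from the last k turns of a
# conversation. The separator token is an assumed convention.
EOT = " <eot> "  # assumed end-of-turn separator


def build_input(turns, k):
    """Concatenate the last k turns into one source string; k is the
    knob varied in a context sensitivity analysis."""
    return EOT.join(turns[-k:])


conversation = [
    "How was the movie?",
    "Pretty slow at the start.",
    "Did it get better?",
]
for k in (1, 2, 3):
    print(k, "->", build_input(conversation, k))
```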
Beyond Calvin: The Intellectual, Political and Cultural World of Europe's Reformed Churches, c. 1540-1620 | Contents: List of Abbreviations; Acknowledgements; Introduction; Reformed Ideas; International Connections; Politics and Rebellion; Moral Discipline; Religious Life and Culture; Select Bibliography; Notes; Index |
Integrated Commonsense Reasoning and Probabilistic Planning | Commonsense reasoning and probabilistic planning are two of the most important research areas in artificial intelligence. This paper focuses on Integrated commonsense Reasoning and probabilistic Planning (IRP) problems. On one hand, commonsense reasoning algorithms aim at drawing conclusions using structured knowledge that is typically provided in a declarative way. On the other hand, probabilistic planning algorithms aim at generating an action policy that can be used for action selection under uncertainty. Intuitively, reasoning and planning techniques are good at “understanding the world” and “accomplishing the task” respectively. This paper discusses the complementary features of the two computing paradigms, presents the (potential) advantages of their integration, and summarizes existing research on this topic. |
Sentiment Flow - A General Model of Web Review Argumentation | Web reviews have been intensively studied in argumentation-related tasks such as sentiment analysis. However, due to their focus on content-based features, many sentiment analysis approaches are effective only for reviews from those domains they have been specifically modeled for. This paper puts its focus on domain independence and asks whether a general model can be found for how people argue in web reviews. Our hypothesis is that people express their global sentiment on a topic with similar sequences of local sentiment independent of the domain. We model such sentiment flow robustly under uncertainty through abstraction. To test our hypothesis, we predict global sentiment based on sentiment flow. In systematic experiments, we improve over the domain independence of strong baselines. Our findings suggest that sentiment flow qualifies as a general model of web review argumentation. |
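To make the idea of a sentiment flow concrete, the sketch below abstracts a review's variable-length sequence of local (sentence-level) sentiment scores into a fixed-length vector by interpolation and predicts the global sentiment from it. The interpolation-based abstraction, the toy data, and the logistic-regression classifier are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch: abstract a review's local sentiment sequence into
# a fixed-length "flow" and predict global sentiment from it. The
# interpolation step, toy data, and classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression


def flow_features(local_scores, length=10):
    """Resample a variable-length sequence of local sentiment scores
    (-1 = negative ... +1 = positive) to a fixed-length flow vector."""
    xs = np.linspace(0.0, 1.0, num=len(local_scores))
    grid = np.linspace(0.0, 1.0, num=length)
    return np.interp(grid, xs, local_scores)


# Toy training reviews: local sentiment sequences with a global label.
reviews = [
    ([0.2, 0.5, 0.8, 0.9], "positive"),
    ([-0.1, 0.6, 0.7], "positive"),
    ([0.3, -0.4, -0.8, -0.9], "negative"),
    ([-0.5, -0.2, -0.7, -0.9, -1.0], "negative"),
]
X = np.vstack([flow_features(seq) for seq, _ in reviews])
y = [label for _, label in reviews]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([flow_features([0.1, 0.4, -0.2, 0.8, 0.9])]))
```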
Drug use patterns in young German women and association with mental disorders. | BACKGROUND
There is a lack of data about drug use patterns in young women. Mental disorders may influence those drug use patterns.
OBJECTIVE
To evaluate drug use patterns (prescribed drugs, self-medication) in general and in relation to the prevalence rates of mental disorders in young German women.
METHODS
A total of 2064 women (18-24 years old), drawn from a random cluster sample, were asked about their current and former medication use. Moreover, a structured psychological interview (Diagnostic Interview for Mental Disorders) was conducted with each woman to evaluate the prevalence of mental disorders (according to the Diagnostic and Statistical Manual of Mental Disorders, 4th edition).
RESULTS
Oral contraceptives (55.9%), thyroid preparations (7.1%), respiratory system drugs (9.4%), and nervous system drugs (8%) were the most commonly used medications. Only 10% of the women with one or more mental disorders used psychotropic medication. As expected, women with mental disorders were significantly more likely to use antidepressants and psycholeptic agents (ie, sedatives/hypnotics, antipsychotics) than were women without any mental disorder. However, there were no significant differences in use of pain medication.
CONCLUSIONS
The results of this study indicate an apparently inadequate supply of drugs acting on the nervous system for women with mental disorders in Germany. Further studies on different age and gender groups are needed. It is important to evaluate the prevalence of diseases and drug use at the same time so as to identify deficits in drug therapy and optimize prescription and self-medication use. |
MODEC: Multimodal Decomposable Models for Human Pose Estimation | We propose a multimodal, decomposable model for articulated human pose estimation in monocular images. A typical approach to this problem is to use a linear structured model, which struggles to capture the wide range of appearance present in realistic, unconstrained images. In this paper, we instead propose a model of human pose that explicitly captures a variety of pose modes. Unlike other multimodal models, our approach includes both global and local pose cues and uses a convex objective and joint training for mode selection and pose estimation. We also employ a cascaded mode selection step which controls the trade-off between speed and accuracy, yielding a 5x speedup in inference and learning. Our model outperforms state-of-the-art approaches across the accuracy-speed trade-off curve for several pose datasets. This includes our newly-collected dataset of people in movies, FLIC, which contains an order of magnitude more labeled data for training and testing than existing datasets. |
An All-Silk-Derived Dual-Mode E-skin for Simultaneous Temperature-Pressure Detection. | Flexible skin-mimicking electronics are highly desired for the development of smart human-machine interfaces and wearable human-health monitors. Human skin can simultaneously detect different kinds of information, such as touch, friction, temperature, and humidity. However, due to the mutual interference of sensors with different functions, it is still a big challenge to fabricate multifunctional electronic skins (E-skins). Herein, a combined temperature-pressure E-skin is reported, assembled from a temperature sensor and a strain sensor, in both of which flexible and transparent silk-nanofiber-derived carbon fiber membranes (SilkCFM) are used as the active material. The temperature sensor exhibits a high temperature sensitivity of 0.81% per degree Celsius. The strain sensor shows an extremely high sensitivity, with a gauge factor of ∼8350 at 50% strain, enabling the detection of subtle pressure stimuli that induce local strain. Importantly, the structure of the SilkCFM in each sensor is designed to be insensitive to the other stimulus, enabling the integrated E-skin to detect temperature and pressure precisely and simultaneously. It is demonstrated that the E-skin can detect and distinguish exhaling, finger pressing, and the spatial distribution of temperature and pressure, which cannot be realized using single-mode sensors. The remarkable performance of the silk-based temperature-pressure sensor, together with its green and readily scalable fabrication process, makes it promising for applications in human-machine interfaces and soft electronics. |
Nanoscale memory cell based on a nanoelectromechanical switched capacitor. | The demand for increased information storage densities has pushed silicon technology to its limits and led to a focus on research on novel materials and device structures, such as magnetoresistive random access memory and carbon nanotube field-effect transistors, for ultra-large-scale integrated memory. Electromechanical devices are suitable for memory applications because of their excellent 'ON-OFF' ratios and fast switching characteristics, but they involve larger cells and more complex fabrication processes than silicon-based arrangements. Nanoelectromechanical devices based on carbon nanotubes have been reported previously, but it is still not possible to control the number and spatial location of nanotubes over large areas with the precision needed for the production of integrated circuits. Here we report a novel nanoelectromechanical switched capacitor structure based on vertically aligned multiwalled carbon nanotubes in which the mechanical movement of a nanotube relative to a carbon nanotube based capacitor defines 'ON' and 'OFF' states. The carbon nanotubes are grown with controlled dimensions at pre-defined locations on a silicon substrate in a process that could be made compatible with existing silicon technology, and the vertical orientation allows for a significant decrease in cell area over conventional devices. We have written data to the structure and it should be possible to read data with standard dynamic random access memory sensing circuitry. Simulations suggest that the use of high-k dielectrics in the capacitors will increase the capacitance to the levels needed for dynamic random access memory applications. |