title (stringlengths 8–300) | abstract (stringlengths 0–10k) |
---|---|
Low-Power Pulse-Triggered Flip-Flop Design Based on a Signal Feed-Through | In this brief, a low-power flip-flop (FF) design featuring an explicit-type pulse-triggered structure and a modified true single-phase clock latch based on a signal feed-through scheme is presented. The proposed design successfully solves the long discharging path problem in conventional explicit-type pulse-triggered FF (P-FF) designs and achieves better speed and power performance. Based on post-layout simulation results using TSMC 90-nm CMOS technology, the proposed design outperforms the conventional P-FF design data-close-to-output (ep-DCO) by 8.2% in data-to-Q delay. Meanwhile, the performance edges on the power and power-delay-product metrics are 22.7% and 29.7%, respectively. |
Deriving physical connectivity from neuronal morphology | A model is presented that allows prediction of the probability for the formation of appositions between the axons and dendrites of any two neurons based only on their morphological statistics and relative separation. Statistics of axonal and dendritic morphologies of single neurons are obtained from 3D reconstructions of biocytin-filled cells, and a statistical representation of the same cell type is obtained by averaging across neurons according to the model. A simple mathematical formulation is applied to the axonal and dendritic statistical representations to yield the probability for close appositions. The model is validated by a mathematical proof and by comparison of predicted appositions made by layer 5 pyramidal neurons in the rat somatosensory cortex with real anatomical data. The model could be useful for studying microcircuit connectivity and for designing artificial neural networks. |
Beta-binomial/Poisson regression models for repeated bivariate counts. | We analyze data obtained from a study designed to evaluate training effects on the performance of certain motor activities of Parkinson's disease patients. Maximum likelihood methods were used to fit beta-binomial/Poisson regression models tailored to evaluate the effects of training on the numbers of attempted and successful specified manual movements in 1 min periods, controlling for disease stage and use of the preferred hand. We extend models previously considered by other authors in univariate settings to account for the repeated measures nature of the data. The results suggest that the expected number of attempts and successes increase with training, except for patients with advanced stages of the disease using the non-preferred hand. |
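A generic way to write such a beta-binomial/Poisson regression for paired counts (an illustrative sketch only; the paper's exact parameterization and repeated-measures extension may differ) is:

```latex
% Illustrative sketch: attempts X_{it} and successes Y_{it} for subject i at occasion t.
% The paper's exact parameterization and repeated-measures structure may differ.
\begin{align}
X_{it} &\sim \mathrm{Poisson}(\lambda_{it}),
  & \log \lambda_{it} &= \mathbf{z}_{it}^{\top}\boldsymbol{\gamma},\\
Y_{it} \mid X_{it} &\sim \mathrm{BetaBinomial}(X_{it},\, \mu_{it},\, \phi),
  & \operatorname{logit} \mu_{it} &= \mathbf{x}_{it}^{\top}\boldsymbol{\beta},
\end{align}
```

where the covariate vectors encode training, disease stage, and hand preference, μ is the mean success probability per attempt, and φ is an overdispersion parameter.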
Reconfigurable RFICs in Si-based technologies for a compact intelligent RF front-end | This paper presents reconfigurable RF integrated circuits (ICs) for a compact implementation of an intelligent RF front-end for multiband and multistandard applications. Reconfigurability has been addressed at each level starting from the basic elements to the RF blocks and the overall front-end architecture. An active resistor tunable from 400 to 1600 Ω up to 10 GHz has been designed and an equivalent model has been extracted. A fully tunable active inductor using a tunable feedback resistor has been proposed that provides inductances between 0.1-15 nH with Q>50 in the C-band. To demonstrate reconfigurability at the block level, voltage-controlled oscillators with very wide tuning ranges have been implemented in the C-band using the proposed active inductor, as well as using a switched-spiral resonator with capacitive tuning. The ICs have been implemented using 0.18-μm Si-CMOS and 0.18-μm SiGe-BiCMOS technologies. |
The impact of social and conventional media on firm equity value: A sentiment analysis approach | |
Skin disinfection and its efficacy before administering injections. | The need to disinfect a patient's skin before subcutaneous or intramuscular injection is a much debated practice. Guidance on this issue varies between NHS organisations that provide primary and secondary care. However, with patients being increasingly concerned with healthcare-associated infections, a general consensus needs to be reached whereby this practice is either rejected or made mandatory. |
The anatomy of the aging face: volume loss and changes in 3-dimensional topography. | Facial aging reflects the dynamic, cumulative effects of time on the skin, soft tissues, and deep structural components of the face, and is a complex synergy of skin textural changes and loss of facial volume. Many of the facial manifestations of aging reflect the combined effects of gravity, progressive bone resorption, decreased tissue elasticity, and redistribution of subcutaneous fullness. A convenient method for assessing the morphological effects of aging is to divide the face into the upper third (forehead and brows), middle third (midface and nose), and lower third (chin, jawline, and neck). The midface is an important factor in facial aesthetics because perceptions of facial attractiveness are largely founded on the synergy of the eyes, nose, lips, and cheek bones (central facial triangle). For aesthetic purposes, this area should be considered from a 3-dimensional rather than a 2-dimensional perspective, and restoration of a youthful 3-dimensional facial topography should be regarded as the primary goal in facial rejuvenation. Recent years have seen a significant increase in the number of nonsurgical procedures performed for facial rejuvenation. Patients seeking alternatives to surgical procedures include those who require restoration of lost facial volume, those who wish to enhance normal facial features, and those who want to correct facial asymmetry. Important factors in selecting a nonsurgical treatment option include the advantages of an immediate cosmetic result and a short recovery time. |
Automatic Online Evaluation of Intelligent Assistants | Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants according to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions. |
Learning Distributed Representations of Symbolic Structure Using Binding and Unbinding Operations | Widely used recurrent units, including Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), perform well on natural language tasks, but their ability to learn structured representations is still questionable. Exploiting Tensor Product Representations (TPRs) — distributed representations of symbolic structure in which vector-embedded symbols are bound to vector-embedded structural positions — we propose the TPRU, a recurrent unit that, at each time step, explicitly executes structural-role binding and unbinding operations to incorporate structural information into learning. Experiments are conducted on the Logical Entailment, Multigenre Natural Language Inference (MNLI), and language modelling tasks, and our TPR-derived recurrent unit provides strong performance compared to LSTM and GRU baselines. Furthermore, our TPRU demonstrates expected performance gain when certain symbol-related hyperparameters are varied. |
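As a minimal numpy sketch of the binding and unbinding operations referred to above (assuming orthonormal role vectors; this is not the paper's TPRU implementation), fillers are bound to roles by outer products, superposed by summation, and recovered by multiplying the TPR with the corresponding role vector:

```python
import numpy as np

d_f, d_r, n = 8, 4, 3                  # filler dim, role dim, number of bound pairs
rng = np.random.default_rng(0)
fillers = rng.normal(size=(n, d_f))    # vector-embedded symbols
# Orthonormal role vectors (rows of an orthogonal matrix) so unbinding is exact.
roles = np.linalg.qr(rng.normal(size=(d_r, d_r)))[0][:n]

# Binding: outer product of each filler with its role; the TPR is their superposition.
tpr = sum(np.outer(f, r) for f, r in zip(fillers, roles))   # shape (d_f, d_r)

# Unbinding: with orthonormal roles, multiplying the TPR by a role recovers its filler.
recovered = tpr @ roles[1]
assert np.allclose(recovered, fillers[1])
```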
Environmental Conditions in the Estuarine Coast of Montevideo (Uruguay): Historical Aspects and Present Status: An Updated | Over the past 120 years, urban growth has been reaching extremely high rates worldwide. Human interaction with the ecosystem includes consumptive and nonconsumptive activities. The external supply of organic matter causes one of the most severe impacts on coastal marine environments. Montevideo City is currently suffering from the same environmental problems as other capital cities in the world, and its adjacent coastal zone has been influenced by anthropogenic alterations during the past 150 years. By integrating abiotic and biotic data, and using different statistical and ecological approaches, the coastal area of Uruguay close to Montevideo City has been classified into four zones with different environmental quality and degree of anthropogenic perturbation. In this area, several environmental problems and their impacts on economic productivity and on human and ecosystem health are the outcome of political and socioeconomic factors. Therefore, governmental authorities need to follow the concept of sustainable use and development, to preserve and optimize the ecological quality of natural resources, and to ensure the availability of this coastal ecosystem to future generations. Solutions to the conflicting uses of the environment should rest on a solid scientific base to establish priorities and make the best decisions for its management and conservation. |
Towards unobtrusive emotion recognition for affective social communication | Awareness of the emotions of those with whom we communicate is a fundamental challenge in building affective intelligent systems. Emotion is a complex state of the mind influenced by external events, physiological changes, or relationships with others. Because emotions can represent a user's internal context or intention, researchers have suggested various methods to measure the user's emotions from analysis of physiological signals, facial expressions, or voice. However, existing methods have practical limitations when used with consumer devices, such as smartphones; they may cause inconvenience to users and require special equipment such as a skin conductance sensor. Our approach is to recognize emotions of the user by inconspicuously collecting and analyzing user-generated data from different types of sensors on the smartphone. To achieve this, we adopted a machine learning approach to gather, analyze and classify device usage patterns, and developed a social network service client for Android smartphones which unobtrusively finds various behavioral patterns and the current context of users. Also, we conducted a pilot study to gather real-world data reflecting various behaviors and situations of a participant in her/his everyday life. From these data, we extracted 10 features and applied them to build a Bayesian Network classifier for emotion recognition. Experimental results show that our system can classify user emotions into seven classes (happiness, surprise, anger, disgust, sadness, fear, and neutral) with surprisingly high accuracy. The proposed system applied to a smartphone demonstrated the feasibility of an unobtrusive emotion recognition approach and a user scenario for emotion-oriented social communication between users. |
Control of heave-induced pressure fluctuations in managed pressure drilling | Managed pressure drilling is an advanced pressure control method which is intended to meet increasingly high demands in drilling operations in the oil and gas industry. In this method, the circulating drilling fluid, which takes cuttings out of the well, is released at the surface through a controlled choke. This choke is used for active control of the fluid pressure in the well. The corresponding automatic control system keeps the pressure at the bottom of the well at a specified set-point despite various disturbances. One such disturbance, vertical motion of the drill string, causes severe pressure fluctuations which need to be actively attenuated. In this paper we present two different disturbance rejection strategies based on discretized partial differential equations for the well hydraulic system. The performance of the controllers is shown through simulations, both under idealized conditions and on a high-fidelity drilling simulator. |
Testing adaptive toolbox models: a Bayesian hierarchical approach. | Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior. |
First-in-Human Phase I Study of GSK2126458, an Oral Pan-Class I Phosphatidylinositol-3-Kinase Inhibitor, in Patients with Advanced Solid Tumor Malignancies. | PURPOSE
GSK2126458 (GSK458) is a potent inhibitor of PI3K (α, β, γ, and δ), with preclinical studies demonstrating broad antitumor activity. We performed a first-in-human phase I study in patients with advanced solid tumors.
MATERIALS AND METHODS
Patients received oral GSK458 once or twice daily in a dose-escalation design to define the maximum tolerated dose (MTD). Expansion cohorts evaluated pharmacodynamics, pharmacokinetics, and clinical activity in histologically and molecularly defined cohorts.
RESULTS
One hundred and seventy patients received doses ranging from 0.1 to 3 mg once or twice daily. Dose-limiting toxicities (grade 3 diarrhea, n = 4; fatigue and rash, n = 1) occurred in 5 patients (n = 3 at 3 mg/day). The MTD was 2.5 mg/day (MTD with twice-daily dosing undefined). The most common grade ≥3 treatment-related adverse events included diarrhea (8%) and skin rash (5%). Pharmacokinetic analyses demonstrated increased duration of drug exposure above target level with twice-daily dosing. Fasting insulin and glucose levels increased with dose and exposure of GSK458. Durable objective responses (ORs) were observed across multiple tumor types (sarcoma, kidney, breast, endometrial, oropharyngeal, and bladder cancer). Responses were not associated with PIK3CA mutations (OR rate: 5% wild-type vs. 6% mutant).
CONCLUSIONS
Although the MTD of GSK458 was 2.5 mg once daily, twice-daily dosing may increase duration of target inhibition. Fasting insulin and glucose levels served as pharmacodynamic markers of drug exposure. Select patients achieved durable responses; however, PIK3CA mutations were neither necessary nor predictive of response. Combination treatment strategies and novel biomarkers may be needed to optimally target PI3K. |
Real-Time Biologically Inspired Action Recognition from Key Poses Using a Neuromorphic Architecture | Intelligent agents, such as robots, have to serve a multitude of autonomous functions. Examples include collision avoidance, navigation and route planning, active sensing of the environment, and interaction and non-verbal communication with people in the extended reach space. Here, we focus on the recognition of the action of a human agent based on a biologically inspired visual architecture for analyzing articulated movements. The proposed processing architecture builds upon coarsely segregated streams of sensory processing along different pathways which separately process form and motion information (Layher et al., 2014). Action recognition is performed in an event-based scheme by identifying representations of characteristic pose configurations (key poses) in an image sequence. In line with perceptual studies, key poses are selected unsupervised utilizing a feature-driven criterion which combines extrema in the motion energy with the horizontal and the vertical extendedness of a body shape. Per-class representations of key pose frames are learned using a deep convolutional neural network consisting of 15 convolutional layers. The network is trained using the energy-efficient deep neuromorphic networks (Eedn) framework (Esser et al., 2016), which realizes the mapping of the trained synaptic weights onto the IBM Neurosynaptic System platform (Merolla et al., 2014). After the mapping, the trained network achieves real-time capabilities for processing input streams and classifying input images at about 1,000 frames per second while the computational stages only consume about 70 mW of energy (without spike transduction). Particularly regarding mobile robotic systems, a low energy profile might be crucial in a variety of application scenarios. Cross-validation results are reported for two different datasets and compared to state-of-the-art action recognition approaches. The results demonstrate that (I) the presented approach is on par with other key pose based methods described in the literature, which select key pose frames by optimizing classification accuracy, (II) compared to the training on the full set of frames, representations trained on key pose frames result in a higher confidence in class assignments, and (III) key pose representations show promising generalization capabilities in a cross-dataset evaluation. |
Wafer Level Multi-chip Gang Bonding Using TCNCF | High throughput interconnection technology has been achieved using a multi-chip gang bonding process with an advanced chip on wafer (CoW) test vehicle (TV). The TV had 30 μm fine-pitch copper pillars (CuP), and the bonding test was performed using non-conductive film (NCF). Therefore, all the steps, including NCF lamination, wafer back-grinding and sawing, and thermo-compression bonding (TCB), were accomplished at the wafer level. Firstly, a layer of non-conductive film (NCF) was laminated on top of the wafer prior to the back-grind and saw processes. Secondly, eight chips of 6 × 8 mm² and 4 × 8 mm² with 200 μm thickness were attached and aligned serially on the interposer wafer. For micro-bump interconnection, parallel bonding of the eight chips was applied for a few seconds after the serial chip attach process. Visual inspection and measurement confirmed the lateral fillet coverage was less than 110 μm without resin bleed-out and creeping. Destructive and non-destructive analyses were performed to examine solder joint formation and void inspection. Finally, a reliability test was performed to confirm the stability of the bonding process in a moisture resistance test (MRT) of L2Aa (60 °C/60% RH, 120 h) and L3 (30 °C/60% RH, 192 h) and 1000 cycles of temperature cycle (T/C) condition B. Electrical resistance was measured by probing micro-bumps that are daisy-chained together. The resistance showed negligible change in upper and lower limit, and no electrical failures or delamination occurred after reliability conditioning. |
On the compensation of dead time and zero-current crossing for a PWM-inverter-controlled AC servo drive | Conventional dead-time compensation methods, for pulsewidth-modulated (PWM) inverters, improve the current output waveforms; however, the zero-current crossing effect is still apparent. This letter proposes a new method, based on angle domain repetitive control, to reduce the distortions in the PWM inverter's output waveforms caused by the dead time and the zero-crossing problem. |
DASH fast start using HTTP/2 | Dynamic Adaptive Streaming over HTTP (DASH) is broadly deployed on the Internet for live and on-demand video streaming services. Recently, a new version of HTTP was proposed, named HTTP/2. One of the objectives of HTTP/2 is to improve the end-user perceived latency compared to HTTP/1.1. HTTP/2 introduces the possibility for the server to push resources to the client. This paper focuses on using the HTTP/2 protocol and the server push feature to reduce the start-up delay in a DASH streaming session. In addition, the paper proposes a new approach for video adaptation, which consists of estimating the bandwidth using WebSocket (WS) over HTTP/2 and making partial adaptation on the server side. The obtained results show that using the server push feature and WebSocket layered over HTTP/2 allows faster loading times and faster convergence to the nominal state. The proposed solution is studied in the context of a direct client-server HTTP/2 connection. Intermediate caches are not considered in this study. |
On the O(1/k) convergence of asynchronous distributed Alternating Direction Method of Multipliers | We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature focused on special cases of this formulation and studied their distributed solution through either subgradient based methods with O(1/√k) rate of convergence (where k is the iteration number) or Alternating Direction Method of Multipliers (ADMM) based methods, which require a synchronous implementation and a globally known order on the agents. In this paper, we present a novel asynchronous ADMM based distributed method for the general formulation and show that it converges at the rate O(1/k). |
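For background, the standard synchronous ADMM iteration in scaled dual form for minimizing f(x) + g(z) subject to Ax + Bz = c, which the asynchronous variant analyzed in the paper builds on, reads:

```latex
% Standard synchronous ADMM (scaled dual form); the paper analyzes an
% asynchronous distributed variant of these updates.
\begin{align}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2,\\
z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{align}
```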
Geodesic K-means clustering | We introduce a class of geodesic distances and extend the K-means clustering algorithm to employ this distance metric. Empirically, we demonstrate that our geodesic K-means algorithm exhibits several desirable characteristics missing in the classical K-means. These include adjusting to varying densities of clusters, high levels of resistance to outliers, and handling clusters that are not linearly separable. Furthermore, our comparative experiments show that geodesic K-means comes very close to competing with state-of-the-art algorithms such as spectral and hierarchical clustering. |
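A minimal sketch of the general idea (not the authors' exact algorithm): approximate geodesic distances by shortest paths on a k-nearest-neighbor graph and run a K-means-style loop with medoid centers, since means are not defined under geodesic distances.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def geodesic_kmeans(X, k, n_neighbors=10, n_iter=50, seed=0):
    """K-means-style clustering under graph geodesic distances (illustrative sketch).
    Assumes the k-NN graph is connected and that no cluster becomes empty."""
    rng = np.random.default_rng(seed)
    graph = kneighbors_graph(X, n_neighbors, mode="distance")
    D = shortest_path(graph, directed=False)        # all-pairs geodesic distances
    centers = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[centers], axis=0)      # assign each point to its nearest center
        new_centers = np.array([                    # medoid: member minimizing total geodesic distance
            np.flatnonzero(labels == c)[np.argmin(D[np.ix_(labels == c, labels == c)].sum(axis=1))]
            for c in range(k)
        ])
        if np.array_equal(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```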
Reduction in hospital admissions for pneumonia in non-institutionalised elderly people as a result of influenza vaccination: a case-control study in Spain. | OBJECTIVE
To estimate the effectiveness of influenza vaccine in preventing hospital admission for pneumonia in non-institutionalised elderly people.
DESIGN
This was a case-control study.
SETTING
All three public hospitals in the Castellón area of Spain.
PARTICIPANTS
Cases were people aged 65 or more not living in an institution who were admitted to hospital for pneumonia between November 15, 1994 and March 31, 1995. Each case was matched with two sex-matched control subjects aged 65 years or older admitted to hospital in the same week for acute abdominal surgical conditions or trauma. The sampling of incident cases was consecutive. Eighty three cases and 166 controls were identified and included in the study.
MEASUREMENTS
Trained interviewers completed a questionnaire for each subject on the vaccination status, smoking habits, previous diseases, health care use, social contacts, family background, the vaccination status of the family carer, home characteristics, and socioeconomic status.
RESULTS
The adjusted odds ratio of the influenza vaccination preventing admission to hospital for pneumonia was 0.21 (95% confidence interval 0.09, 0.55). The variables which best explained the risk of being a case were age, intensity of social contacts, health care use, previous diseases, and the existence of a vaccinated family carer.
CONCLUSIONS
Influenza vaccination significantly reduced hospital admissions for pneumonia in non-institutionalised elderly people. |
Fixation Precision in High-Speed Noncontact Eye-Gaze Tracking | The precision of point-of-gaze (POG) estimation during a fixation is an important factor in determining the usability of a noncontact eye-gaze tracking system for real-time applications. The objective of this paper is to define and measure POG fixation precision, propose methods for increasing the fixation precision, and examine the improvements when the methods are applied to two POG estimation approaches. To achieve these objectives, techniques for high-speed image processing that allow POG sampling rates of over 400 Hz are presented. With these high-speed POG sampling rates, the fixation precision can be improved by filtering while maintaining an acceptable real-time latency. The high-speed sampling and digital filtering techniques developed were applied to two POG estimation techniques, i.e., the high-speed pupil-corneal reflection (HS P-CR) vector method and a 3-D model-based method allowing free head motion. Evaluation on subjects has shown that when operating at 407 frames per second (fps) with filtering, the fixation precision for the HS P-CR POG estimation method was improved by a factor of 5.8 to 0.035° (1.6 screen pixels) compared to the unfiltered operation at 30 fps. For the 3-D POG estimation method, the fixation precision was improved by a factor of 11 to 0.050° (2.3 screen pixels) compared to the unfiltered operation at 30 fps. |
Survey propagation: An algorithm for satisfiability | We study the satisfiability of randomly generated formulas formed by M clauses of exactly K literals over N Boolean variables. For a given value of N the problem is known to be most difficult when α = M/N is close to the experimental threshold αc separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when α is close to (but smaller than) αc. We introduce a new type of message passing algorithm which efficiently finds a satisfying assignment of the variables in this difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: It passes messages that may be thought of as surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional heuristic. |
Towards HMD-based Immersive Analytics | Advances in 3D hardware and software have led to increasingly cheaper and simple-to-use immersive virtual reality systems that can provide real-time interactive 3D data representation. The immersive analytics field is developing as the newest avatar of 3D visual analytics, which may relaunch the long-running 2D vs 3D visualization debate. However, the terms of the debate have changed: leveraging 3D human perception within virtual environments is now easier, and the immersive quality of today’s rendering is sufficient for researchers to concentrate on testing and designing immersive data representation and interaction rather than on technological problems. In this position paper we propose a short historical perspective on the use of immersive technologies for visual analytics and on the 2D vs 3D debate. We stress five principles that we think should be followed to explore the HMD-based visual analytics design space, before introducing ongoing work within the IDEA project. CCS CONCEPTS: • Human-Centered Computing → Virtual Reality; Visual Analytics; Information Visualisation |
Timed Commitments | We introduce and construct timed commitment schemes, an extension to the standard notion of commitments in which a potential forced opening phase permits the receiver to recover (with effort) the committed value without the help of the committer. An important application of our timed-commitment scheme is contract signing: two mutually suspicious parties wish to exchange signatures on a contract. We show a two-party protocol that allows them to exchange RSA or Rabin signatures. The protocol is strongly fair: if one party quits the protocol early, then the two parties must invest comparable amounts of time to retrieve the signatures. This statement holds even if one party has many more machines than the other. Other applications, including honesty preserving auctions and collective coin-flipping, are discussed. |
A study of maximum power point tracking algorithms for stand-alone Photovoltaic Systems | Photovoltaic (PV) energy is one of the renewable energy sources that has attracted the attention of researchers in recent decades. Since the conversion efficiency of PV arrays is very low, it requires maximum power point tracking (MPPT) control techniques to extract the maximum available power from PV arrays. In this paper, two categories of MPPT algorithms, namely indirect and direct methods, are discussed. In addition, the advantages and disadvantages of each MPPT algorithm are reviewed. Simulations of PV modules were also performed using the Perturb and Observe algorithm and a Fuzzy Logic controller. The simulation results produced by the two algorithms are compared with the expected results generated by Solarex MSX60 PV modules. Finally, the P-V characteristics of PV arrays under partially shaded conditions are discussed in the last section. |
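A minimal sketch of one iteration of the Perturb and Observe rule discussed above (illustrative only, not tied to any particular converter or to the paper's simulation setup): if the previous perturbation of the voltage reference increased the measured power, keep perturbing in the same direction; if it decreased the power, reverse the direction.

```python
def perturb_and_observe(v_meas, p_meas, state, step=0.5):
    """One Perturb & Observe step. `state` holds the voltage reference and the
    previous voltage/power measurements (illustrative sketch only)."""
    dv = v_meas - state["v_prev"]
    dp = p_meas - state["p_prev"]
    if dp * dv > 0:
        state["v_ref"] += step      # power rose with the last perturbation: keep going
    elif dp * dv < 0:
        state["v_ref"] -= step      # power fell: we overshot the maximum power point, reverse
    # dp == 0 or dv == 0: hold the reference for this step
    state["v_prev"], state["p_prev"] = v_meas, p_meas
    return state["v_ref"]

# Example initial state (hypothetical values):
# state = {"v_ref": 17.0, "v_prev": 0.0, "p_prev": 0.0}
```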
A Multi-Sensorial Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied Environments | One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot, by integrating different state-of-the-art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. This improves the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by solving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can be easily incorporated into the SLAM system, obtaining a local 2.5D map and a footprint estimation of the robot position that improves the 6D pose estimation through the EKF. We present some experimental results with two different commercial platforms, and validate the system by applying it to their position control. |
Large-Scale Weakly Supervised Audio Classification Using Gated Convolutional Neural Network | In this paper, we present a gated convolutional neural network and a temporal attention-based localization method for audio classification, which won the 1st place in the large-scale weakly supervised sound event detection task of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2017 challenge. The audio clips in this task, which are extracted from YouTube videos, are manually labelled with one or more audio tags, but without time stamps of the audio events, hence referred to as weakly labelled data. Two subtasks are defined in this challenge: audio tagging and sound event detection using this weakly labelled data. We propose a convolutional recurrent neural network (CRNN) with a learnable gated linear unit (GLU) non-linearity applied on the log Mel spectrogram. In addition, we propose a temporal attention method along the frames to predict the locations of each audio event in a chunk from the weakly labelled data. Our systems ranked 1st and 2nd as a team in these two subtasks of the DCASE 2017 challenge, with an F-score of 55.6% and an equal error rate of 0.73, respectively. |
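The gated linear unit non-linearity mentioned above follows the standard form, in which one convolution provides the values and a second convolution, passed through a sigmoid, acts as a learnable gate (written here only as a generic reminder, not as the paper's exact layer configuration):

```latex
% Gated linear unit applied to a feature map X: * denotes convolution,
% \sigma the sigmoid gate, and \odot element-wise multiplication.
\mathrm{GLU}(\mathbf{X}) = (\mathbf{X} \ast \mathbf{W} + \mathbf{b}) \odot \sigma(\mathbf{X} \ast \mathbf{V} + \mathbf{c})
```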
Transparency and Corporate Governance | An objective of many proposed corporate governance reforms is increased transparency. This goal has been relatively uncontroversial, as most observers believe increased transparency to be unambiguously good. We argue that, from a corporate governance perspective, there are likely to be both costs and benefits to increased transparency, leading to an optimum level beyond which increasing transparency lowers profits. This result holds even when there is no direct cost of increasing transparency and no issue of revealing information to regulators or product-market rivals. We show that reforms that seek to increase transparency can reduce firm profits, raise executive compensation, and inefficiently increase the rate of CEO turnover. We further consider the possibility that executives will take actions to distort information. We show that executives could have incentives, due to career concerns, to increase transparency and that increases in penalties for distorting information can be profit reducing. |
Income and Wealth Heterogeneity in the Macroeconomy | How do movements in the distribution of income and wealth affect the macroeconomy? We analyze this question using a calibrated version of the stochastic growth model with partially uninsurable idiosyncratic risk and movements in aggregate productivity. Our main finding is that, in the stationary stochastic equilibrium, the behavior of the macroeconomic aggregates can be almost perfectly described using only the mean of the wealth distribution. This result is robust to substantial changes in both parameter values and model specification. Our benchmark model, whose only difference from the representative-agent framework is the existence of uninsurable idiosyncratic risk, displays far less cross-sectional dispersion |
Molecular Neurobiology of Drug Addiction | The purpose of this review is to illustrate the ways in which molecular neurobiological investigations will contribute to an improved understanding of drug addiction and, ultimately, to the development of more effective treatments. Such molecular studies of drug addiction are needed to establish two general types of information: (1) mechanisms of pathophysiology, identification of the changes that drugs of abuse produce in the brain that lead to addiction; and (2) mechanisms of individual risk, identification of specific genetic and environmental factors that increase or decrease an individual's vulnerability for addiction. This information will one day lead to fundamentally new approaches to the treatment and prevention of addictive disorders. |
Shallow and Deep Convolutional Networks for Saliency Prediction | The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance between the predicted saliency map and the provided ground truth. The recent publication of large datasets for saliency prediction has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and a deeper solution whose first three layers are adapted from another network trained for classification. To the authors' knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction. |
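Spelled out, the pixel-wise Euclidean loss described above (a straightforward restatement, not copied from the paper) is:

```latex
% Euclidean loss between predicted saliency maps \hat{S}_i and ground truth S_i
% over a batch of N training images.
\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \bigl\| \hat{S}_i - S_i \bigr\|_2^{2}
```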
Rotor Integrity Design for a High-Speed Modular Air-Cored Axial-Flux Permanent-Magnet Generator | The rotor integrity design for a high-speed modular air-cored axial-flux permanent-magnet (AFPM) generator is presented. The main focus is on the mechanical parametric optimization of the rotor, which becomes a more dominating design issue over electromagnetic optimization at high operational speeds. Approximate analytical formulas are employed for preliminary sizing of the mechanical parameters of the rotor, which consists of the permanent magnets, retainment ring, and back iron. Two-dimensional (2-D) finite-element analysis (FEA) models are used to optimize the values of the parameters. Then, 3-D FEA models are developed to verify the final design. Finally, based on the final design, an AFPM prototype is built for experimental validation, and mechanical integrity tests for the rotor are undertaken. The results confirm the validity of the analytical and FEA models, as well as the overall design approach. |
Light-Weight Instruction Set Extensions for Bit-Sliced Cryptography | Bit-slicing is a non-conventional implementation technique for cryptographic software where an n-bit processor is considered as a collection of n 1-bit execution units operating in SIMD mode. Particularly when implementing symmetric ciphers, the bit-slicing approach has several advantages over more conventional alternatives: it often allows one to reduce memory footprint by eliminating large look-up tables, and it permits more predictable performance characteristics that can foil time-based side-channel attacks. Both features are attractive for mobile and embedded processors, but the performance overhead that results from bit-sliced implementation often represents a significant disadvantage. In this paper we describe a set of light-weight Instruction Set Extensions (ISEs) that can improve said performance while retaining all advantages of bit-sliced implementation. Contrary to other crypto-ISEs, our design is generic and allows for a high degree of algorithm agility: we demonstrate applicability to several well-known cryptographic primitives including four block ciphers (DES, Serpent, AES, and PRESENT), a hash function (SHA-1), and multiplication of ternary polynomials. |
Depressive symptoms in PD correlate with higher 5-HTT binding in raphe and limbic structures. | BACKGROUND
Depression associated with Parkinson disease (PD) has a different symptom profile to endogenous depression. The etiology of depression in PD remains uncertain though abnormal serotonergic neurotransmission could play a role.
OBJECTIVE
To assess with PET serotonergic function via in vivo serotonin transporter (5-HTT) availability in antidepressant-naive patients with PD.
METHODS
Thirty-four patients with PD and 10 healthy matched control subjects had a clinical battery of tests including the patient-report Beck Depression Inventory-II (BDI-II), the clinician-report Hamilton Rating Scale for Depression (HRSD), and the structured clinical interview for DSM-IV Axis I Disorders (SCID-I). They underwent ¹¹C-DASB PET, a selective in vivo marker of 5-HTT binding in humans.
RESULTS
BDI-II scores correlated with HRSD scores. Ten of 34 patients with PD (29.4%) had BDI-II and HRSD scores above the discriminative cutoff for PD depression though only half of these patients could be classed on SCID-I criteria as having an anxiety/mood disorder. Patients with PD with the highest scores for depression symptoms showed significantly raised ¹¹C-DASB binding in amygdala, hypothalamus, caudal raphe nuclei, and posterior cingulate cortex compared to low score cases, while ¹¹C-DASB binding values in other regions were similarly decreased in depressed and nondepressed patients with PD compared to healthy controls.
CONCLUSION
Depressive symptoms in antidepressant-naive patients with PD correlate with relatively higher 5-HTT binding in raphe nuclei and limbic structures possibly reflecting lower extracellular serotonin levels. Our data are compatible with a key role of abnormal serotonergic neurotransmission contributing to the pathophysiology of PD depression and justify the use of agents acting on 5-HTT. |
Mining anchor text for query refinement | When searching large hypertext document collections, it is often possible that there are too many results available for ambiguous queries. Query refinement is an interactive process of query modification that can be used to narrow down the scope of search results. We propose a new method for automatically generating refinements or related terms to queries by mining anchor text for a large hypertext document collection. We show that the usage of anchor text as a basis for query refinement produces high quality refinement suggestions that are significantly better in terms of perceived usefulness compared to refinements that are derived using the document content. Furthermore, our study suggests that anchor text refinements can also be used to augment traditional query refinement algorithms based on query logs, since they typically differ in coverage and produce different refinements. Our results are based on experiments on an anchor text collection of a large corporate intranet. |
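As an illustrative sketch only (the paper's refinement scoring is more elaborate and is evaluated against query-log baselines), one simple way to surface anchor-text refinements is to collect anchor phrases that contain the query and rank them by frequency:

```python
from collections import Counter

def anchor_refinements(query, anchor_texts, top_k=10):
    """Suggest query refinements from anchor text by frequency (illustrative sketch)."""
    q = query.lower().strip()
    counts = Counter(
        a.lower().strip() for a in anchor_texts
        if q in a.lower() and a.lower().strip() != q
    )
    return [phrase for phrase, _ in counts.most_common(top_k)]

# anchor_refinements("jaguar", ["jaguar cars", "jaguar cars", "jaguar habitat"])
# -> ["jaguar cars", "jaguar habitat"]
```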
The Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI): a review of their usefulness and validity in rheumatoid arthritis. | Composite indices or pooled indices are useful tools for the evaluation of disease activity in patients with rheumatoid arthritis (RA). They allow the integration of various aspects of the disease into a single numerical value, and may therefore facilitate consistent patient care and improve patient compliance, which both can lead to improved outcomes. The Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI) are two new tools for the evaluation of disease activity in RA. They have been developed to provide physicians and patients with simple and more comprehensible instruments. Moreover, the CDAI is the only composite index that does not incorporate an acute phase response and can therefore be used to conduct a disease activity evaluation essentially anytime and anywhere. These two new tools have not been developed to replace currently available instruments such as the DAS28, but rather to provide options for different environments. The comparative construct, content, and discriminant validity of all three indices--the DAS28, the SDAI, and the CDAI--allow physicians to base their choice of instrument on their infrastructure and their needs, and all of them can also be used in clinical trials. |
Isolation Enhancement Between Two Closely Packed Antennas | This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20 × 40 × 1.6 mm³. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted-F antennas (PIFAs) in the design share a solid ground plane with inter-antenna spacing (center to center) of less than 0.095 λ₀ or an edge-to-edge separation of just 3.6 mm (0.0294 λ₀). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation. |
Designing Cooperative Gamification: Conceptualization and Prototypical Implementation | Organizations deploy gamification in CSCW systems to enhance motivation and behavioral outcomes of users. However, gamification approaches often cause competition between users, which might be inappropriate for working environments that seek cooperation. Drawing on the social interdependence theory, this paper provides a classification for gamification features and insights about the design of cooperative gamification. Using the example of an innovation community of a German engineering company, we present the design of a cooperative gamification approach and results from a first experimental evaluation. The findings indicate that the developed gamification approach has positive effects on perceived enjoyment and the intention towards knowledge sharing in the considered innovation community. Besides our conceptual contribution, our findings suggest that cooperative gamification may be beneficial for cooperative working environments and represents a promising field for future research. |
A survey and assessment of the capabilities of Cubesats for Earth observation | In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high |
A flexible approach for extracting metadata from bibliographic citations | In this article we present FLUX-CiM, a novel method for extracting components (e.g., author names, article titles, venues, page numbers) from bibliographic citations. Our method does not rely on patterns encoding specific delimiters used in a particular citation style. This feature yields a high degree of automation and flexibility, and allows FLUX-CiM to extract from citations in any given format. Unlike previous methods that are based on models learned from user-driven training, our method relies on a knowledge base automatically constructed from an existing set of sample metadata records from a given field (e.g., computer science, health sciences, social sciences, etc.). These records are usually available on the Web or in other public data repositories. To demonstrate the effectiveness and applicability of our proposed method, we present a series of experiments in which we apply it to extract bibliographic data from citations in articles of different fields. Results of these experiments exhibit precision and recall levels above 94% for all fields, and perfect extraction for the large majority of citations tested. In addition, in a comparison against a state-of-the-art information-extraction method, ours produced superior results without the training phase required by that method. Finally, we present a strategy for using bibliographic data resulting from the extraction process with FLUX-CiM to automatically update and expand the knowledge base of a given domain. We show that this strategy can be used to achieve good extraction results even if only a very small initial sample of bibliographic records is available for building the knowledge base. |
DVL1 frameshift mutations clustering in the penultimate exon cause autosomal-dominant Robinow syndrome. | Robinow syndrome is a genetically heterogeneous disorder characterized by mesomelic limb shortening, genital hypoplasia, and distinctive facial features and for which both autosomal-recessive and autosomal-dominant inheritance patterns have been described. Causative variants in the non-canonical signaling gene WNT5A underlie a subset of autosomal-dominant Robinow syndrome (DRS) cases, but most individuals with DRS remain without a molecular diagnosis. We performed whole-exome sequencing in four unrelated DRS-affected individuals without coding mutations in WNT5A and found heterozygous DVL1 exon 14 mutations in three of them. Targeted Sanger sequencing in additional subjects with DRS uncovered DVL1 exon 14 mutations in five individuals, including a pair of monozygotic twins. In total, six distinct frameshift mutations were found in eight subjects, and all were heterozygous truncating variants within the penultimate exon of DVL1. In five families in which samples from unaffected parents were available, the variants were demonstrated to represent de novo mutations. All variant alleles are predicted to result in a premature termination codon within the last exon, escape nonsense-mediated decay (NMD), and most likely generate a C-terminally truncated protein with a distinct -1 reading-frame terminus. Study of the transcripts extracted from affected subjects' leukocytes confirmed expression of both wild-type and variant alleles, supporting the hypothesis that mutant mRNA escapes NMD. Genomic variants identified in our study suggest that truncation of the C-terminal domain of DVL1, a protein hypothesized to have a downstream role in the Wnt-5a non-canonical pathway, is a common cause of DRS. |
Depression in late life: review and commentary. | Depression is perhaps the most frequent cause of emotional suffering in later life and significantly decreases quality of life in older adults. In recent years, the literature on late-life depression has exploded. Many gaps in our understanding of the outcome of late-life depression have been filled. Intriguing findings have emerged regarding the etiology of late-onset depression. The number of studies documenting the evidence base for therapy has increased dramatically. Here, I first address case definition, and then I review the current community- and clinic-based epidemiological studies. Next I address the outcome of late-life depression, including morbidity and mortality studies. Then I present the extant evidence regarding the etiology of depression in late life from a biopsychosocial perspective. Finally, I present evidence for the current therapies prescribed for depressed elders, ranging from medications to group therapy. |
ElasticSwitch: practical work-conserving bandwidth guarantees for cloud computing | While cloud computing providers offer guaranteed allocations for resources such as CPU and memory, they do not offer any guarantees for network resources. The lack of network guarantees prevents tenants from predicting lower bounds on the performance of their applications. The research community has recognized this limitation but, unfortunately, prior solutions have significant limitations: either they are inefficient, because they are not work-conserving, or they are impractical, because they require expensive switch support or congestion-free network cores.
In this paper, we propose ElasticSwitch, an efficient and practical approach for providing bandwidth guarantees. ElasticSwitch is efficient because it utilizes the spare bandwidth from unreserved capacity or underutilized reservations. ElasticSwitch is practical because it can be fully implemented in hypervisors, without requiring a specific topology or any support from switches. Because hypervisors operate mostly independently, there is no need for complex coordination between them or with a central controller. Our experiments, with a prototype implementation on a 100-server testbed, demonstrate that ElasticSwitch provides bandwidth guarantees and is work-conserving, even in challenging situations. |
A Planar Magic-T Using Substrate Integrated Circuits Concept | In this letter, a slotline to substrate integrated waveguide transition is proposed for the development of substrate integrated circuits. The insertion loss of the back-to-back transition is less than 1 dB from 8.7 to 9.0 GHz. With this transition, a planar magic-T is studied and designed. Measured results indicate very good performance of the fabricated magic-T within the experimental frequency range of 8.4-9.4 GHz. The amplitude and phase imbalances are less than 0.2 dB and 1.5°, respectively. |
Desiccation diagnosis in lumbar discs from clinical MRI with a probabilistic model | Lumbar intervertebral disc diseases are among the main causes of lower back pain (LBP). Desiccation is a common condition with various causes, and most people are ultimately affected by it at some age. We propose a probabilistic model that incorporates intervertebral disc appearance and contextual information for automating the diagnosis of lumbar disc desiccation. We utilize a Gibbs distribution for processing localized lumbar intervertebral discs' appearance and contextual information. We use 55 clinical T2-weighted MRI scans of the lumbar area and achieve over 96% accuracy in a cross-validation experiment. |
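A generic Gibbs form for combining per-disc appearance with contextual terms between neighboring discs (the paper's exact potentials and features may differ) is:

```latex
% Generic Gibbs distribution over disc labels y given image features x:
% unary appearance potentials \phi plus pairwise contextual potentials \psi.
P(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})}
  \exp\!\Bigl( -\sum_{i} \phi(y_i, \mathbf{x}_i) - \sum_{(i,j)} \psi(y_i, y_j) \Bigr)
```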
The lithographic lens: its history and evolution | The history of Nikon’s projection lens development for optical microlithography started with the first “Ultra MicroNikkor” in 1962, which was used for making photo-masks. Nikon’s first wafer stepper “NSR-1010G” was developed with a g-line projection lens in 1980. Since then, many kinds of projection lenses have been developed for each generation of stepper or scanner. In addition to increasing numerical aperture (NA) and field size, there have been many technical transitions for the projection lens, such as shortening the wavelength, controlling Zernike aberrations with phase measurement interferometry (PMI) for low k1 lithography, using aspherical lenses, applying kinematic optomechanical mounts, and utilizing free asphere re-polishing steps in the lens manufacturing process. The most recent advancement in projection lens technology is liquid immersion and polarization control for high NA imaging. NA now exceeds 1.0, which is the theoretical limit for dry (in air) imaging. At each transition, the amount of information that goes through the projection lens has been increased. In this paper, the history of the microlithographic lens is reviewed from several different points of view, such as specification, optical design, lens manufacturing, etc. In addition, future options of the projection lens are discussed briefly. |
Protocols for self-organization of a wireless sensor network | We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. |
Diet rapidly and reproducibly alters the human gut microbiome | Long-term dietary intake influences the structure and activity of the trillions of microorganisms residing in the human gut, but it remains unclear how rapidly and reproducibly the human gut microbiome responds to short-term macronutrient change. Here we show that the short-term consumption of diets composed entirely of animal or plant products alters microbial community structure and overwhelms inter-individual differences in microbial gene expression. The animal-based diet increased the abundance of bile-tolerant microorganisms (Alistipes, Bilophila and Bacteroides) and decreased the levels of Firmicutes that metabolize dietary plant polysaccharides (Roseburia, Eubacterium rectale and Ruminococcus bromii). Microbial activity mirrored differences between herbivorous and carnivorous mammals, reflecting trade-offs between carbohydrate and protein fermentation. Foodborne microbes from both diets transiently colonized the gut, including bacteria, fungi and even viruses. Finally, increases in the abundance and activity of Bilophila wadsworthia on the animal-based diet support a link between dietary fat, bile acids and the outgrowth of microorganisms capable of triggering inflammatory bowel disease. In concert, these results demonstrate that the gut microbiome can rapidly respond to altered diet, potentially facilitating the diversity of human dietary lifestyles. |
Teachers' perceptions of students' mathematics proficiency may exacerbate early gender gaps in achievement. | A recent wave of research suggests that teachers overrate the performance of girls relative to boys and hold more positive attitudes toward girls' mathematics abilities. However, these prior estimates of teachers' supposed female bias are potentially misleading because these estimates (and teachers themselves) confound achievement with teachers' perceptions of behavior and effort. Using data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS-K), Study 1 demonstrates that teachers actually rate boys' mathematics proficiency higher than that of girls when conditioning on both teachers' ratings of behavior and approaches to learning as well as past and current test scores. In other words, on average girls are only perceived to be as mathematically competent as similarly achieving boys when the girls are also seen as working harder, behaving better, and being more eager to learn. Study 2 uses mediation analysis with an instrumental-variables approach, as well as a matching strategy, to explore the extent to which this conditional underrating of girls may explain the widening gender gap in mathematics in early elementary school. We find robust evidence suggesting that underrating girls' mathematics proficiency accounts for a substantial portion of the development of the mathematics achievement gap between similarly performing and behaving boys and girls in the early grades. |
Information-theoretic model comparison unifies saliency metrics. | Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed "saliency" prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is because different metrics and models use different definitions of what a "saliency map" entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. |
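The information-gain evaluation described above can be made concrete with a small sketch (not the authors' released software): treat a saliency map as a probability distribution over pixels and score it by its average log-probability advantage over a baseline map at the fixated locations. The function name and the toy data below are illustrative only.

```python
import numpy as np

def information_gain(model_map, baseline_map, fixations):
    """Average information gain (bits/fixation) of a saliency model over a baseline.

    model_map, baseline_map : 2-D arrays of non-negative values for one image;
    fixations               : list of (row, col) fixated pixel coordinates.
    Both maps are normalized to probability distributions over pixels.
    """
    p_model = model_map / model_map.sum()
    p_base = baseline_map / baseline_map.sum()
    eps = 1e-12  # avoid log(0)
    gains = [np.log2(p_model[r, c] + eps) - np.log2(p_base[r, c] + eps)
             for r, c in fixations]
    return float(np.mean(gains))

# toy example: a model that concentrates mass near the true fixations gains bits
rng = np.random.default_rng(0)
model = rng.random((32, 32)); model[16, 16] += 10.0
baseline = np.ones((32, 32))               # uniform baseline
print(information_gain(model, baseline, [(16, 16), (15, 17)]))
```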
DeepVesselNet: Vessel Segmentation, Centerline Prediction, and Bifurcation Detection in 3-D Angiographic Volumes | We present DeepVesselNet, an architecture tailored to the challenges to be addressed when extracting vessel networks and corresponding features in 3-D angiographic volumes using deep learning. We discuss the problems of low execution speed and high memory requirements associated with full 3-D convolutional networks, high class imbalance arising from the low percentage (less than 3%) of vessel voxels, and unavailability of accurately annotated training data, and offer solutions that are the building blocks of DeepVesselNet. First, we formulate 2-D orthogonal cross-hair filters which make use of 3-D context information at a reduced computational burden. Second, we introduce a class balancing cross-entropy score with false positive rate correction to handle the high class imbalance and high false positive rate problems associated with existing loss functions. Finally, we generate a synthetic dataset using a computational angiogenesis model, capable of generating vascular networks under physiological constraints on local network structure and topology, and use these data for transfer learning. DeepVesselNet is optimized for segmenting and analyzing vessels, and we test the performance on a range of angiographic volumes including clinical Time-of-Flight MRA data of the human brain, as well as synchrotron radiation X-ray tomographic microscopy scans of the rat brain. Our experiments show that, by replacing 3-D filters with 2-D orthogonal cross-hair filters in our network, we achieve over 23% improvement in speed, a lower memory footprint, and lower network complexity, which prevents overfitting, while maintaining comparable and sometimes even higher accuracy. Our class balancing metric is crucial for training the network, and pre-training with synthetic data helps in early convergence of the training process. |
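A minimal sketch of the cross-hair filter idea mentioned above, assuming a PyTorch-style layer: three thin 3-D convolutions, each spanning one orthogonal plane, are summed so the layer still sees 3-D context while its cost grows with 3·k² rather than k³. This is illustrative only and not the authors' reference implementation; the class and parameter names are invented.

```python
import torch
import torch.nn as nn

class CrossHair3d(nn.Module):
    """Sum of three thin 3-D convolutions, each spanning one orthogonal plane.

    Compared with a dense k*k*k kernel, the parameter count and number of
    multiply-adds scale with 3*k*k instead of k**3, which is the motivation
    given in the abstract for replacing full 3-D filters.
    """
    def __init__(self, in_ch, out_ch, k=5):
        super().__init__()
        p = k // 2
        self.conv_xy = nn.Conv3d(in_ch, out_ch, (1, k, k), padding=(0, p, p))
        self.conv_xz = nn.Conv3d(in_ch, out_ch, (k, 1, k), padding=(p, 0, p))
        self.conv_yz = nn.Conv3d(in_ch, out_ch, (k, k, 1), padding=(p, p, 0))

    def forward(self, x):                     # x: (batch, ch, D, H, W)
        return self.conv_xy(x) + self.conv_xz(x) + self.conv_yz(x)

# toy volume: one channel, 32^3 voxels
y = CrossHair3d(1, 8)(torch.zeros(1, 1, 32, 32, 32))
print(y.shape)   # torch.Size([1, 8, 32, 32, 32])
```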
Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task | In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions. |
4G- A NEW ERA IN WIRELESS TELECOMMUNICATION | 4G – “connect anytime, anywhere, anyhow” promising ubiquitous network access at high speed to the end users, has been a topic of great interest especially for the wireless telecom industry. 4G seems to be the solution for the growing user requirements of wireless broadband access and the limitations of the existing wireless communication system. The purpose of this paper is to provide an overview of the different aspects of 4G which includes its features, its proposed architecture and key technological enablers. It also elaborates on the roadblocks in its implementations. A special consideration has been given to the security concerns of 4G by discussing a security threat analysis model proposed by International Telecommunication Union (ITU). By applying this model, a detailed analysis of threats to 4G and the corresponding measures to counter them can be performed. |
PRE-CRASH SENSOR FOR PRE-CRASH SAFETY | Improvement of vehicle safety performance is one of the targets of ITS development. A pre-crash safety system has been developed that utilizes ITS technologies. The Pre-crash Safety system reduces collision injury by estimating TTC (time-to-collision) to preemptively activate safety devices, which consist of the “Pre-crash Seatbelt” system and the “Pre-crash Brake Assist” system. The key technology of these systems is a “Pre-crash Sensor” to detect obstacles and estimate TTC. In this paper, the Pre-crash Sensor is presented. The Pre-crash Sensor uses millimeter-wave radar to detect preceding vehicles, oncoming vehicles, roadside objects, etc. on the road ahead. Furthermore, by using a phased array system as a vehicle radar for the first time, a compact electronically scanned millimeter-wave radar with high recognition performance has been achieved. With respect to the obstacle determination algorithm, a crash determination algorithm has been newly developed, taking into account estimation of the direction of advance of the vehicle, in addition to the distance, relative speed and direction of the object. |
Localizing by Describing: Attribute-Guided Attention Localization for Fine-Grained Recognition | A key challenge in fine-grained recognition is how to find and represent discriminative local regions. Recent attention models are capable of learning discriminative region localizers only from category labels with reinforcement learning. However, not utilizing any explicit part information, they are not able to accurately find multiple distinctive regions. In this work, we introduce an attribute-guided attention localization scheme where the local region localizers are learned under the guidance of part attribute descriptions. By designing a novel reward strategy, we are able to learn to locate regions that are spatially and semantically distinctive with a reinforcement learning algorithm. The attribute labeling requirement of the scheme is more amenable than the accurate part location annotation required by traditional part-based fine-grained recognition methods. Experimental results on the CUB-200-2011 dataset [1] demonstrate the superiority of the proposed scheme on both fine-grained recognition and attribute recognition. |
Compliant leg behaviour explains basic dynamics of walking and running. | The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed. |
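The "compliant leg" behaviour referred to above is commonly formalized as a spring-mass (SLIP-type) model: a point-mass body supported by a massless linear leg spring. The sketch below integrates only the single-support stance phase under assumed human-like parameters; the bipedal model in the paper additionally handles double support and the touchdown/takeoff events, which are omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, m, k, l0 = 9.81, 80.0, 14_000.0, 1.0        # illustrative human-like values
foot = np.array([0.0, 0.0])                     # stance foot position

def single_support(t, state):
    """Point mass on one massless spring leg (spring-mass stance dynamics)."""
    x, y, vx, vy = state
    leg = np.array([x, y]) - foot
    l = np.linalg.norm(leg)
    f = k * (l0 - l) * leg / l                  # linear leg-spring force
    return [vx, vy, f[0] / m, f[1] / m - g]

# start slightly compressed and moving forward; the CoM oscillates vertically
sol = solve_ivp(single_support, (0.0, 0.35), [0.0, 0.97, 1.2, 0.0], max_step=1e-3)
print(sol.y[1, ::100])                          # samples of the vertical CoM trajectory
```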
Deep Learning for Event-Driven Stock Prediction | A neural tensor network is used to learn event embeddings. An event is represented as a structured tuple E = (O1, P, O2, T), where P is the action, O1 is the actor, O2 is the object and T is the timestamp (T is mainly used for aligning stock data with news data). For example, the news item “Sep 3, 2013 Microsoft agrees to buy Nokia’s mobile phone business for $7.2 billion.” is modeled as (Actor = Microsoft, Action = buy, Object = Nokia’s mobile phone business, Time = Sep 3, 2013). |
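A hedged illustration of the event tuple described above, using an invented Python container; the example news item is the one quoted in the abstract.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    """Structured event (O1, P, O2, T): actor, action, object, timestamp."""
    actor: str      # O1
    action: str     # P
    obj: str        # O2
    time: date      # T, used to align news with stock data

e = Event(actor="Microsoft",
          action="buy",
          obj="Nokia's mobile phone business",
          time=date(2013, 9, 3))
print(e)
```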
Representational similarity encoding for fMRI: Pattern-based synthesis to predict brain activity using stimulus-model-similarities | Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns. |
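One plausible instantiation of similarity-based encoding as described above (not necessarily the authors' exact formulation): a held-out stimulus's voxel pattern is synthesized as a combination of training patterns weighted by the stimulus's similarity to each training stimulus in model-feature space, so no regression weights are fitted. The function name and the similarity/weighting choices below are assumptions.

```python
import numpy as np

def similarity_encode(train_feats, train_patterns, test_feats):
    """Synthesize voxel patterns for test stimuli without fitting weights.

    train_feats    : (n_train, n_feat)  model features of training stimuli
    train_patterns : (n_train, n_vox)   measured fMRI patterns
    test_feats     : (n_test, n_feat)   model features of held-out stimuli

    Each test pattern is a similarity-weighted combination of training
    patterns, where similarity is the correlation between the test stimulus
    and each training stimulus in model-feature space.
    """
    def zrows(a):
        a = a - a.mean(axis=1, keepdims=True)
        return a / np.linalg.norm(a, axis=1, keepdims=True)

    sim = zrows(test_feats) @ zrows(train_feats).T        # (n_test, n_train)
    weights = sim / np.abs(sim).sum(axis=1, keepdims=True)
    return weights @ train_patterns                        # (n_test, n_vox)

rng = np.random.default_rng(1)
pred = similarity_encode(rng.normal(size=(40, 20)),
                         rng.normal(size=(40, 500)),
                         rng.normal(size=(5, 20)))
print(pred.shape)   # (5, 500)
```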
A Framework for Robust Subspace Learning | Many computer vision, signal processing and statistical problems can be posed as problems of learning low dimensional linear or multi-linear models. These models have been widely used for the representation of shape, appearance, motion, etc., in computer vision applications. Methods for learning linear models can be seen as a special case of subspace fitting. One drawback of previous learning methods is that they are based on least squares estimation techniques and hence fail to account for “outliers” which are common in realistic training sets. We review previous approaches for making linear learning methods robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Subspace Learning (RSL) for linear models within a continuous optimization framework based on robust M-estimation. The framework applies to a variety of linear learning problems in computer vision including eigen-analysis and structure from motion. Several synthetic and natural examples are used to develop and illustrate the theory and applications of robust subspace learning in computer vision. |
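A small sketch of the robust-subspace idea, assuming an iteratively reweighted (IRLS/EM-style) scheme with a Geman-McClure style weight function standing in for the paper's outlier process; it is illustrative rather than the paper's exact optimization.

```python
import numpy as np

def robust_subspace(X, rank, sigma=1.0, iters=20):
    """Fit a rank-r linear subspace to the columns of X with IRLS.

    Large per-entry residuals are downweighted by a Geman-McClure influence
    function, so isolated outlier pixels do not dominate the fit the way
    they do in plain least-squares (PCA) subspace estimation.
    X : (n_pixels, n_samples) data matrix, columns are samples.
    """
    W = np.ones_like(X)                                   # per-entry weights
    B = np.zeros_like(X)
    for _ in range(iters):
        # EM-style weighted low-rank step: fill suspect entries with the
        # current reconstruction, then take a truncated SVD
        U, s, Vt = np.linalg.svd(W * X + (1.0 - W) * B, full_matrices=False)
        B = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # current reconstruction
        R = X - B                                          # residuals
        W = sigma**2 / (sigma**2 + R**2)                   # Geman-McClure weights
    return B, W

rng = np.random.default_rng(0)
clean = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 60))
data = clean.copy()
data[rng.random(data.shape) < 0.05] += 20.0                # 5% gross pixel outliers
recon, weights = robust_subspace(data, rank=3)
print(np.abs(recon - clean).mean())                        # small reconstruction error
```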
Axioms of the Analytic Hierarchy Process (AHP) and its Generalization to Dependence and Feedback: The Analytic Network Process (ANP) | The AHP/ANP are multicriteria decision-making theories that deal with both hierarchic structures when the criteria are independent of the alternatives and with networks when there is any dependence within and between elements of the decision. Both of them have been repeatedly used in practice by various researchers and practitioners. From the perspective of almost 40 years of practice in solving problems using both theories, some of their properties seem to be more important than others. The article indicates four of them as fundamental for understanding AHP/ANP. These are the axioms related to structure, computation, and expectation. The mathematical formulation of the axioms is preceded by an introduction explaining the motivation behind the introduced concepts. The article is expository and it is an improved and refined version of the work [1]. |
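As background to the computational axiom discussed above, the core AHP calculation can be sketched as follows: priorities are the principal eigenvector of a positive reciprocal pairwise comparison matrix, and consistency is checked against Saaty's random index. The random-index values and the example matrix are standard textbook numbers, included here as assumptions rather than content of the article.

```python
import numpy as np

def ahp_priorities(A):
    """Priority vector and consistency ratio for a pairwise comparison matrix.

    A is a positive reciprocal matrix (A[i, j] = 1 / A[j, i]); priorities are
    the principal right eigenvector normalized to sum to 1, following Saaty.
    """
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    n = A.shape[0]
    lam_max = vals[k].real
    ci = (lam_max - n) / (n - 1)                     # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}.get(n, 1.32)
    return w, ci / ri if ri else 0.0                 # (priorities, consistency ratio)

A = np.array([[1, 3, 5],
              [1/3, 1, 2],
              [1/5, 1/2, 1]], dtype=float)
w, cr = ahp_priorities(A)
print(np.round(w, 3), round(cr, 3))                  # priorities sum to 1, CR << 0.1
```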
Effects of actinobacteria on plant disease suppression and growth promotion | Biological control and plant growth promotion by plant beneficial microbes has been viewed as an alternative to the use of chemical pesticides and fertilizers. Bacteria and fungi that are naturally associated with plants and have a beneficial effect on plant growth by the alleviation of biotic and abiotic stresses were isolated and developed into biocontrol (BCA) and plant growth-promoting agents (PGPA). Actinobacteria are a group of important plant-associated spore-forming bacteria, which have been studied for their biocontrol, plant growth promotion, and interaction with plants. This review summarizes the effects of actinobacteria as BCA, PGPA, and its beneficial associations with plants. |
Environmental geology in the United States: Present practice and future training needs | Environmental geology as practiced in the United States confronts issues in three large areas: Threats to human society from geologic phenomena (geologic hazards); impacts of human activities on natural systems (environmental impact), and natural-resource management. This paper illustrates present U.S. practice in environmental geology by sampling the work of 7 of the 50 state geological surveys and of the United States Geological Survey as well. Study of the work of these agencies provides a basis for identifying avenues for the training of those who will deal with environmental issues in the future. This training must deal not only with the subdisciplines of geology but with education to cope with the ethical, interdisciplinary, and public-communication aspects of the work of the environmental geologist. |
A review on intelligent process for smart home applications based on IoT: coherent taxonomy, motivation, open challenges, and recommendations | Innovative technology on intelligent processes for smart home applications that utilise the Internet of Things (IoT) is mainly limited and dispersed. The available trends and gaps were investigated in this study to provide valued visions for technical environments and researchers. Thus, a survey was conducted to create a coherent taxonomy of the research landscape. An extensive search was conducted for articles on (a) smart homes, (b) IoT and (c) applications. Three databases, namely, IEEE Xplore, ScienceDirect and Web of Science, were used in the article search. These databases comprise comprehensive literature that concentrates on IoT-based smart home applications. Subsequently, a filtering process was applied on the basis of intelligent processes. The final classification scheme outcome of the dataset contained 40 articles that were classified into four classes. The first class includes the knowledge engineering process that examines data representation to identify the means of accomplishing a task for IoT applications and their utilisation in smart homes. The second class includes papers on the detection process that uses artificial intelligence (AI) techniques to capture the possible changes in IoT-based smart home applications. The third class comprises the analytical process that refers to the use of AI techniques to understand the underlying problems in smart homes by inferring new knowledge and suggesting appropriate solutions for the problem. The fourth class comprises the control process that describes the process of measuring and instructing the performance of IoT-based smart home applications against the specifications with the involvement of intelligent techniques. The basic features of this evolving approach were then identified in terms of the motivations for utilising intelligent processes in IoT-based smart home applications and the open issues restricting such utilisation. The recommendations for the approval and utilisation of intelligent processes for IoT-based smart home applications were also determined from the literature. |
Deep learning algorithm for arrhythmia detection | Most cardiovascular disorders or diseases can be prevented, but deaths continue to rise due to improper treatment resulting from misdiagnosis. One such cardiovascular disease is arrhythmia. It is sometimes difficult to observe electrocardiogram (ECG) recordings for arrhythmia detection. Therefore, a good learning method is needed that can be applied in the computer as a way to help the detection of arrhythmia. There is a powerful approach in Machine Learning, named Deep Learning, which is starting to be widely used for Speech Recognition, Bioinformatics, Computer Vision, and many other tasks. This research used Deep Learning to classify arrhythmia data. We compared the result to other popular machine learning algorithms, such as Naive Bayes, K-Nearest Neighbor, Artificial Neural Network, and Support Vector Machine. Our experiments showed that the Deep Learning algorithm achieved the best accuracy, which was 76.51%. |
Evaluating written patient education materials. | Evaluation, the last component in the process of patient education, is the most frequently omitted step in producing informative material. Yet, it is essential that program/material evaluation be conducted to determine the impact and success or failure of the program or material. It is vital to be aware of this need and possess knowledge about types of evaluation and those areas that must be included to provide a comprehensive evaluation. |
Stability analysis of method of fundamental solutions for mixed boundary value problems of Laplace’s equation | Since the stability of the method of fundamental solutions (MFS) is a severe issue, the estimation of bounds on the condition number Cond is important for real applications. In this paper, we propose new approaches for deriving the asymptotes of Cond, and apply them to the Dirichlet problem of Laplace’s equation, to provide a sharp bound on Cond for disk domains. Then a new bound on Cond is derived for bounded simply connected domains with mixed types of boundary conditions. Numerical results are reported for Motz’s problem by adding singular functions. The values of Cond grow exponentially with respect to the number of fundamental solutions used. Note that there seems to exist no stability analysis for the MFS on non-disk (or non-elliptic) domains. Moreover, the expansion coefficients obtained by the MFS are oscillatingly large, causing another kind of instability: subtraction cancellation errors in the final harmonic solutions. |
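For readers unfamiliar with the MFS setup analyzed above, a toy sketch for the Laplace Dirichlet problem on the unit disk is given below: fundamental solutions are centred on a source circle outside the domain and coefficients are found by boundary collocation. The radii, point counts and test function are illustrative; the printed condition number illustrates the exponential growth the abstract refers to.

```python
import numpy as np

# MFS for the Laplace Dirichlet problem on the unit disk:
# sources on a larger circle of radius R, collocation points on the boundary.
n, R = 40, 2.0
th = 2 * np.pi * np.arange(n) / n
sources = R * np.c_[np.cos(th), np.sin(th)]
colloc  =     np.c_[np.cos(th), np.sin(th)]

def phi(x, s):
    """Fundamental solution of the 2-D Laplacian, -log|x - s| / (2*pi)."""
    return -np.log(np.linalg.norm(x - s, axis=-1)) / (2 * np.pi)

A = phi(colloc[:, None, :], sources[None, :, :])           # collocation matrix
g = colloc[:, 0] * colloc[:, 1]                             # boundary data of u = x*y (harmonic)
coef = np.linalg.solve(A, g)                                # expansion coefficients

print(np.linalg.cond(A))                                    # Cond grows fast with n
x_test = np.array([0.3, 0.2])
print(phi(x_test, sources) @ coef, x_test[0] * x_test[1])   # MFS value vs exact value
```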
Modulation of phasic and tonic muscle synergies with reaching direction and speed. | How the CNS masters the many degrees of freedom of the musculoskeletal system to control goal-directed movements is a long-standing question. We have recently provided support to the hypothesis that the CNS relies on a modular control architecture by showing that the phasic muscle patterns for fast reaching movements in different directions are generated by combinations of a few time-varying muscle synergies: coordinated recruitment of groups of muscles with specific activation profiles. However, natural reaching movements occur at different speeds and require the control of both movement and posture. Thus we have investigated whether muscle synergies also underlie reaching at different speeds as well as the maintenance of stable arm postures. Hand kinematics and shoulder and elbow muscle surface EMGs were recorded in five subjects during reaches to eight targets in the frontal plane at different speeds. We found that the amplitude modulation of three time-invariant synergies captured the variations in the postural muscle patterns at the end of the movement. During movement, three phasic and three tonic time-varying synergies could reconstruct the time-normalized muscle pattern in all conditions. Phasic synergies were modulated in both amplitude and timing by direction and speed. Tonic synergies were modulated only in amplitude by direction. The directional tuning of both types of synergies was well described by a single or a double cosine function. These results suggest that muscle synergies are basic control modules that allow generating the appropriate muscle patterns through simple modulation and combination rules. |
Granulocytic nuclear differentiation of lamin B receptor-deficient mouse EPRO cells. | OBJECTIVE
Lamin B receptor (LBR) is an integral protein of the inner nuclear membrane. Recent studies have demonstrated that genetic deficiency of LBR during granulopoiesis results in hypolobulation of the mature neutrophil nucleus, as observed in human Pelger-Huët anomaly and mouse ichthyosis (ic). In this study, we utilized differentiated early promyelocytes (EPRO cells) that were derived from the bone marrow of homozygous and heterozygous ichthyosis mice to examine changes to the expression of nuclear envelope proteins and heterochromatin structure that result from deficient LBR expression.
MATERIALS AND METHODS
Wild-type (+/+), heterozygous (+/ic), and homozygous (ic/ic) granulocytic forms of EPRO cells were analyzed for the expression of multiple lamins and inner nuclear envelope proteins by immunostaining and immunoblotting techniques. The heterochromatin architecture was also examined by immunostaining for histone lysine methylation.
RESULTS
Wild-type (+/+) and heterozygous (+/ic) granulocytic forms revealed ring-shaped nuclei and contained LBR within the nuclear envelope; ic/ic granulocytes exhibited smaller ovoid nuclei devoid of LBR. The pericentric heterochromatin of undifferentiated and granulocytic ic/ic cells was condensed into larger spots and shifted away from the nuclear envelope, compared to +/+ and +/ic cell forms. Lamin A/C, which is normally not present in mature granulocytes, was significantly elevated in LBR-deficient EPRO cells.
CONCLUSIONS
Our observations suggest roles for LBR during granulopoiesis, which can involve augmenting nuclear membrane growth, facilitating compartmentalization of heterochromatin, and promoting downregulation of lamin A/C expression. |
A Survey of Maneuvering Target Tracking — Part II: Ballistic Target Models | This paper is the second part in a series that provides a comprehensive survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys motion models of ballistic targets used for target tracking. Models for all three phases (i.e., boost, coast, and reentry) of motion are covered. |
N-gram Counts and Language Models from the Common Crawl | We contribute 5-gram counts and language models trained on the Common Crawl corpus, a collection of over 9 billion web pages. This release improves upon the Google n-gram counts in two key ways: the inclusion of low-count entries and deduplication to reduce boilerplate. By preserving singletons, we were able to use Kneser-Ney smoothing to build large language models. This paper describes how the corpus was processed with emphasis on the problems that arise in working with data at this scale. Our unpruned Kneser-Ney English 5-gram language model, built on 975 billion deduplicated tokens, contains over 500 billion unique n-grams. We show gains of 0.5–1.4 BLEU by using large language models to translate into various languages. |
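Assuming a locally downloaded copy of such a model, one common way to query it is through the KenLM Python bindings; the file name below is a placeholder, not a distribution path from the paper.

```python
import kenlm  # Python bindings for the KenLM toolkit

# Path is a placeholder for a locally downloaded Common Crawl language model.
model = kenlm.Model("cc_english_5gram.binary")

sentence = "new york city is the largest city in the united states"
# Total log10 probability with begin/end-of-sentence markers,
# and per-word perplexity for a quick sanity check.
print(model.score(sentence, bos=True, eos=True))
print(model.perplexity(sentence))
```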
Task-Relevant Object Discovery and Categorization for Playing First-person Shooter Games | We consider the problem of learning to play first-person shooter (FPS) video games using raw screen images as observations and keyboard inputs as actions. The high-dimensionality of the observations in this type of applications leads to prohibitive needs of training data for model-free methods, such as the deep Q-network (DQN), and its recurrent variant DRQN. Thus, recent works focused on learning low-dimensional representations that may reduce the need for data. This paper presents a new and efficient method for learning such representations. Salient segments of consecutive frames are detected from their optical flow, and clustered based on their feature descriptors. The clusters typically correspond to different discovered categories of objects. Segments detected in new frames are then classified based on their nearest clusters. Because only a few categories are relevant to a given task, the importance of a category is defined as the correlation between its occurrence and the agent’s performance. The result is encoded as a vector indicating objects that are in the frame and their locations, and used as a side input to DRQN. Experiments on the game Doom provide a good evidence for the benefit of this approach. |
Proportional-resonant controllers and filters for grid-connected voltage-source converters | The recently introduced proportional-resonant (PR) controllers and filters, and their suitability for current/voltage control of grid-connected converters, are described. Using the PR controllers, the converter reference tracking performance can be enhanced and previously known shortcomings associated with conventional PI controllers can be alleviated. These shortcomings include steady-state errors in single-phase systems and the need for synchronous d–q transformation in three-phase systems. Based on similar control theory, PR filters can also be used for generating the harmonic command reference precisely in an active power filter, especially for single-phase systems, where d–q transformation theory is not directly applicable. Another advantage associated with the PR controllers and filters is the possibility of implementing selective harmonic compensation without requiring excessive computational resources. Given these advantages and the belief that PR control will find wide-ranging applications in grid-interfaced converters, PR control theory is revised in detail with a number of practical cases that have been implemented previously, described clearly to give a comprehensive reference on PR control and filtering. |
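A minimal numerical sketch of a non-ideal PR controller of the kind discussed above: the continuous-time transfer function is discretized with the bilinear (Tustin) transform and applied to a 50 Hz error signal. The specific gains, cutoff and sample rate are placeholders, and the transfer-function form is one common textbook variant rather than the paper's exact controller.

```python
import numpy as np
from scipy import signal

def pr_controller_coeffs(kp, kr, w0, wc, fs):
    """Digital coefficients of a non-ideal proportional-resonant controller.

    Continuous-time form (one common variant):
        G(s) = kp + 2*kr*wc*s / (s^2 + 2*wc*s + w0^2)
    discretized with the bilinear (Tustin) transform at sample rate fs.
    """
    num = [kp, 2 * (kp + kr) * wc, kp * w0**2]   # kp*(s^2+2*wc*s+w0^2) + 2*kr*wc*s
    den = [1.0, 2 * wc, w0**2]
    return signal.bilinear(num, den, fs)

# 50 Hz resonance, 10 kHz sampling; gains are illustrative only
b, a = pr_controller_coeffs(kp=1.0, kr=200.0, w0=2*np.pi*50, wc=2*np.pi*1, fs=10_000)
t = np.arange(0, 0.2, 1/10_000)
error = np.sin(2*np.pi*50*t)                      # sinusoidal tracking error at 50 Hz
u = signal.lfilter(b, a, error)                   # controller output builds up at resonance
print(u[-5:])
```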
Some biological properties of curcumin: A review | Curcumin (diferuloyl methane), a small-molecular weight compound isolated from the roots of Curcuma longa L. (family Zingiberaceae), has been used traditionally for centuries in Asia for medicinal, culinary and other purposes. A large number of in vitro and in vivo studies in both animals and man have indicated that curcumin has strong antioxidant, anti-carcinogenic, anti-inflammatory, anti-angiogenic, antispasmodic, antimicrobial, anti-parasitic and other activities. The mechanisms of some of these actions have recently been intensively investigated. Curcumin inhibits the promotion/ progression stage of carcinogenesis by induction of apoptosis and the arrest of cancer cells in the S, G2/M cell cycle phase. The compound inhibits the activity of growth factor receptors. The anti-inflammatory properties of curcumin are mediated through their effects on cytokines, lipid mediators, eicosanoids and proteolytic enzymes. Curcumin scavenges the superoxide radical, hydrogen peroxide and nitric oxide, and inhibits lipid peroxidation. These actions may be the basis for many of its pharmacological and therapeutic properties. |
Scorpion envenoming in the north of Mali (West Africa): epidemiological, clinical and therapeutic aspects. | Scorpion envenomation remains a poorly known problem in sub-Saharan Africa, particularly in Mali, where the incidence is high in the Northern area of the country (Sahara desert). We conducted a prospective study in two district health centers, Kidal and Tessalit (North-east of Mali), to describe the epidemiological, clinical and therapeutic features of scorpion stings. This study consisted of an exhaustive follow-up from admission to discharge of all patients stung by scorpions. Of a total of 282 cases recorded during one year, 207 (73.4%) occurred in Kidal, and the remaining 75 (26.6%) took place in Tessalit. The annual incidence was significantly higher in Tessalit (437 cases/100,000 population/year) than in Kidal (243 cases/100,000 population/year) (p < 10⁻⁶). Two hundred two (71.6%) stings occurred inside human dwellings, 142 (50.4%) during sleeping/resting, especially in August. One hundred ninety-one (67.7%) were on the lower extremities. Nocturnal stings, 168 (59.6%), occurred more often than diurnal stings, 114 (40.4%). Most patients, 163 (57.8%), were admitted less than 1 h after being stung. Local pain at the sting site was the common primary complaint. However, moderate and severe clinical signs were significantly higher in children than in adults (p < 0.05). The death rate (3.9%) was higher in children (3.5%) than in adults (0.3%) (p = 8 × 10⁻⁶; RR = 0.90 [IC: 0.84-0.06]). Of the 22 scorpion specimens identified, 13 (59.1%) were Leiurus quinquestriatus, 8 (36.4%) were Androctonus amoreuxi, and 1 (4.5%) specimen was Buthiscus bicalcaratus. Of these species, L. quinquestriatus and A. amoreuxi were responsible for the stings. The medical treatment was only symptomatic, and one hundred twenty-eight (45.3%) patients received traditional remedies before seeking medical attention. Our findings suggest that scorpion stings are common in the north of Mali and are a significant threat to human health. |
AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine | We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models. AliMe Chat uses an attentive Seq2Seq based rerank model to optimize the joint results. Extensive experiments show our engine outperforms both IR and generation based models. We launch AliMe Chat for a real-world industrial application and observe better results than another public chatbot. |
CHAPTER ONE. Production and Distribution of Income in a Market Economy | This book looks at the distribution of income and wealth and the effects that this has on the macroeconomy, and vice versa. Is a more equal distribution of income beneficial or harmful for macroeconomic growth, and how does the distribution of wealth evolve in a market economy? Taking stock of results and methods developed in the context of the 1990s revival of growth theory, the authors focus on capital accumulation and long-run growth. They show how rigorous, optimization-based technical tools can be applied, beyond the representative-agent framework of analysis, to account for realistic market imperfections and for political-economic interactions. The treatment is thorough, yet accessible to students and nonspecialist economists, and it offers specialist readers a wide-ranging and innovative treatment of an increasingly important research field. The book follows a single analytical thread through a series of different growth models, allowing readers to appreciate their structure and crucial assumptions. This is particularly useful at a time when the literature on income distribution and growth has developed quickly and in several different directions, becoming difficult to overview. |
A Table Detection Method for Multipage PDF Documents via Visual Separators and Tabular Structures | Table detection is always an important task of document analysis and recognition. In this paper, we propose a novel and effective table detection method via visual separators and geometric content layout information, targeting PDF documents. The visual separators refer not only to the graphic ruling lines but also to the white spaces, to handle tables with or without ruling lines. Furthermore, we detect page columns in order to assist table region delimitation in complex layout pages. Evaluations of our algorithm on an e-Book dataset and a scientific document dataset show competitive performance. It is noteworthy that the proposed method has been successfully incorporated into a commercial software package for large-scale Chinese e-Book production. |
Sunflower Management and Capital Budgeting | A sandbox provided for indoor use consists of a sheet of fabric-like material having a collapsible wall about its perimeter such that, when the sheet is smoothed into a substantially flat horizontal condition, the collapsible wall stands in substantially vertical relation along the perimeter of the sheet. Thus, the smoothed out sheet and the vertical wall define a contained play area to hold sand. Preferably, the wall is formed of sections of a cushion-like material such as foam rubber covered by material such as that used to form the horizontal sheet. Thus the wall would be collapsible along the vertical folds defined at the junctions of the covered foam rubber sections. Interior to the perimeter of the sandbox, the upper surface of the sheet is fitted with a gathering mechanism, typically a drawstring laid out in circular fashion on the top surface of the sheet with a circular stretch of material covering the drawstring and fixed to the sheet. |
A Low-Profile Dual-Band Dual-Polarized Antenna With an AMC Surface for WLAN Applications | In this letter, a novel low-profile dual-band dual-polarized antenna with an artificial magnetic conductor (AMC) surface for wireless local area network (WLAN) applications is presented. Two sets of bowtie patches are introduced to form two dipoles in ±45° polarizations. Two integrated baluns are used to excite the dipoles. By inserting an AMC surface, the antenna can achieve unidirectional radiation pattern and low-profile characteristics. The height of the proposed antenna is 0.088λ0 at 2.4 GHz. It can provide measured 15.6% (2.36–2.76 GHz) and 9.3% (5.12–5.62 GHz) relative bandwidth, respectively. Port-to-port isolation higher than 22 dB can be realized. Stable radiation patterns with peak gain of 7.2 dBi in lower band and 7.3 dBi in upper band are also obtained. The proposed antenna can be used for multiband base stations for WLAN applications. |
A real-time COFDM transmission system based on the GNU radio: USRP N210 platform | The term "Software Defined Radio" (SDR) has become familiar in research and development of wireless communication systems today. SDR allows users to adjust a system with its flexibility and re-configurability for any frequency band and different modulation of various physical parameters by using programmable hardware and software. In this paper, we implement an Orthogonal Frequency Division Multiplexing (OFDM) system combined with channel coding blocks based on Software-Defined Radio. The software design and implementation are proposed for a GNU Radio-based OFDM system in real-time wireless transmission. Different scenarios with and without channel coding have been implemented in order to evaluate the Packet Failure Rate (PFR) performance. Quadrature amplitude modulation QAM-16, convolutional encoder combined with block interleaver and Viterbi decoder are deployed for the implementation. It is shown that the introduction of channel coding blocks into GNU Radio-based OFDM system reduced the PFR of data signals transmitted over Universal Software Radio Peripheral (USRP) boards. |
Gale-Shapley Matching in an Evolutionary Trade Network Game | This study investigates the performance of Gale-Shapley matching in an evolutionary market context. Computational experimental findings are reported for an evolutionary match-and-play trade network game in which resource constrained traders repeatedly choose and refuse trade partners in accordance with Gale-Shapley matching, participate in risky trades modeled as two-person prisoner's dilemma games, and evolve their trade behavior over time. Particular attention is focused on correlations between ex ante market structure and the formation of trade networks, and between trade network formation and the types of trade behavior and social welfare outcomes that these trade networks support. Related work can be accessed here: http://www2.econ.iastate.edu/tesfatsi/tnghome.htm |
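For reference, the deferred-acceptance (Gale-Shapley) matching that the trade network game builds on can be sketched as below; the resource-constrained, repeated variant used in the study is more involved, and the trader names here are toy data.

```python
def gale_shapley(proposer_prefs, responder_prefs):
    """Deferred-acceptance stable matching (proposer-optimal).

    proposer_prefs / responder_prefs map each agent to its preference list,
    most preferred first.  Returns a dict {responder: proposer}.
    """
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    free = list(proposer_prefs)              # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                               # responder -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:     # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)
    return match

buyers = {"b1": ["s1", "s2"], "b2": ["s1", "s2"]}
sellers = {"s1": ["b2", "b1"], "s2": ["b1", "b2"]}
print(gale_shapley(buyers, sellers))         # {'s1': 'b2', 's2': 'b1'}
```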
Lactic acid: New roles in a new millennium. | The study of lactic acid (HLa) and muscular contraction has a long history, beginning perhaps as early as 1807 when Berzelius found HLa in muscular fluid and thought that “the amount of free lactic acid in a muscle [was] proportional to the extent to which the muscle had previously been exercised” (cited in ref. 1). Several subsequent studies in the 19th century established the view that HLa was a byproduct of metabolism under conditions of O2 limitation. For example, in 1891, Araki (cited in ref. 2) reported elevated HLa levels in the blood and urine of a variety of animals subjected to hypoxia. In the early part of the 20th century, Fletcher and Hopkins (3) found an accumulation of HLa in anoxia as well as after prolonged stimulation to fatigue in amphibian muscle in vitro. Subsequently, based on the work of Fletcher and Hopkins (3) as well as his own studies, Hill (and colleagues; ref. 4) postulated that HLa increased during muscular exercise because of a lack of O2 for the energy requirements of the contracting muscles. These studies laid the groundwork for the anaerobic threshold concept, which was introduced and detailed by Wasserman and colleagues in the 1960s and early 1970s (5–7). The basic anaerobic threshold paradigm is that elevated HLa production and concentration during muscular contractions or exercise are the result of cellular hypoxia. Table 1 summarizes the essential components of the anaerobic threshold concept. However, several studies during the past ~30 years have presented evidence questioning the idea that O2 limitation is a prerequisite for HLa production and accumulation in muscle and blood. Jöbsis and Stainsby (8) stimulated the canine gastrocnemius in situ at a rate known to elicit peak twitch oxygen uptake (V̇O2) and high net HLa output. They (8) reasoned that if the HLa output was caused by O2-limited oxidative phosphorylation, then there should be an accompanying reduction of members of the respiratory chain, including the NADH/NAD+ pair. Instead, muscle surface fluorometry indicated NADH/NAD+ oxidation in comparison to the resting condition. Later, Connett and colleagues (9–11), by using myoglobin cryomicrospectroscopy in small volumes of dog gracilis muscle, were unable to find loci with a PO2 less than the critical PO2 for maximal mitochondrial oxidative phosphorylation (0.1–0.5 mmHg) during muscle contractions resulting in HLa output and an increase in muscle HLa concentration. More recently, Richardson and colleagues (12) used proton magnetic resonance spectroscopy to determine myoglobin saturation (and thereby an estimate of intramuscular PO2) during progressive exercise in humans. They found that HLa efflux was unrelated to muscle cytoplasmic PO2 during normoxia. Although there are legitimate criticisms of these studies, they and many others of a related nature have led to alternative explanations for HLa production that do not involve O2 limitation. In the present issue of PNAS, two papers (13, 14) illustrate the dichotomous relationship between lactic acid and oxygen. First, Kemper and colleagues (13) add further evidence against O2 as the key regulator of HLa production. They (13) used a unique model, the rattlesnake tailshaker muscle complex, to study intracellular glycolysis during ischemia in comparison to HLa efflux during free flow conditions; in both protocols, the muscle complex was active and producing rattling. 
In their first experiment, rattling was induced for 29 s during ischemia resulting from blood pressure cuff inflation between the cloaca and tailshaker muscle complex. In a second experiment, measures were taken during 108 s of rattling with normal, spontaneous blood flow. In both experiments, 31P magnetic resonance spectroscopy permitted measurement of changes in muscle levels of PCr, ATP, Pi, and pH before, during, and after rattling. Based on previous methods established in their laboratory, Kemper and colleagues (13) estimated glycolytic flux during the ischemic and aerobic rattling protocols. The result was that total glycolytic flux was the same under both conditions! Kemper and colleagues (13) conclude that HLa generation does not necessarily reflect O2 limitation. To be fair, there are potential limitations to the excellent paper by Kemper and colleagues (13). First, and most importantly, they studied muscle metabolism in the transition from rest to rattling (29 s during ischemia and 108 s during free flow). Some investigators argue that oxidative phosphorylation is limited by O2 delivery to the exercising muscles during this nonsteady-state transition even with spontaneous blood flow (for review, see ref. 15). This remains a matter of debate, and the role of O2 in the transition from rest to contractions may depend on the intensity of contractions (16, 17). Of course, it is possible that the role of O2 in the transition to rattling may be tempered by the high volume density of mitochondria and the high blood supply to this unique muscle complex (13, 18). Second, there could be significant early lactate production within the first seconds of the transition (19). Third, it would have been helpful to have measurements of intramuscular lactate and glycogen concentra- |
An introduction to voice search | Voice search is the technology underlying many spoken dialog systems (SDSs) that provide users with the information they request with a spoken query. The information normally exists in a large database, and the query has to be compared with a field in the database to obtain the relevant information. The contents of the field, such as business or product names, are often unstructured text. This article categorized spoken dialog technology into form filling, call routing, and voice search, and reviewed the voice search technology. The categorization was made from the technological perspective. It is important to note that a single SDS may apply the technology from multiple categories. Robustness is the central issue in voice search. The technology in acoustic modeling aims at improved robustness to environment noise, different channel conditions, and speaker variance; the pronunciation research addresses the problem of unseen word pronunciation and pronunciation variance; the language model research focuses on linguistic variance; the studies in search give rise to improved robustness to linguistic variance and ASR errors; the dialog management research enables graceful recovery from confusions and understanding errors; and the learning in the feedback loop speeds up system tuning for more robust performance. While tremendous achievements have been accomplished in the past decade on voice search, large challenges remain. Many voice search dialog systems have automation rates around or below 50% in field trials. |
Parallelizing Skip Lists for In-Memory Multi-Core Database Systems | Due to the coarse granularity of data accesses and the heavy use of latches, indices in the B-tree family are not efficient for in-memory databases, especially in the context of today's multi-core architecture. In this paper, we study the parallelizability of skip lists for the parallel and concurrent environment, and present PSL, a Parallel in-memory Skip List that lends itself naturally to the multi-core environment, particularly with non-uniform memory access. For each query, PSL traverses the index in a Breadth-First-Search (BFS) to find the list node with the matching key, and exploits SIMD processing to speed up this process. Furthermore, PSL distributes incoming queries among multiple execution threads disjointly and uniformly to eliminate the use of latches and achieve a high parallelizability. The experimental results show that PSL is comparable to a read-only index, FAST, in terms of read performance, and outperforms ART and Masstree respectively by up to 30% and 5x for a variety of workloads. |
Systematic Reviews: The Good, the Bad, and the Ugly | Systematic reviews systematically evaluate and summarize current knowledge and have many advantages over narrative reviews. Meta-analyses provide a more reliable and enhanced precision of effect estimate than do individual studies. Systematic reviews are invaluable for defining the methods used in subsequent studies, but, as retrospective research projects, they are subject to bias. Rigorous research methods are essential, and the quality depends on the extent to which scientific review methods are used. Systematic reviews can be misleading, unhelpful, or even harmful when data are inappropriately handled; meta-analyses can be misused when the difference between a patient seen in the clinic and those included in the meta-analysis is not considered. Furthermore, systematic reviews cannot answer all clinically relevant questions, and their conclusions may be difficult to incorporate into practice. They should be reviewed on an ongoing basis. As clinicians, we need proper methodological training to perform good systematic reviews and must ask the appropriate questions before we can properly interpret such a review and apply its conclusions to our patients. This paper aims to assist in the reading of a systematic review. |
Interspecific pollinator movements reduce pollen deposition and seed production in Mimulus ringens (Phrymaceae). | Movement of pollinators between coflowering plant species may influence conspecific pollen deposition and seed set. Interspecific pollinator movements between native and showy invasive plants may be particularly detrimental to the pollination and reproductive success of native species. We explored the effects of invasive Lythrum salicaria on the reproductive success of Mimulus ringens, a wetland plant native to eastern North America. Pollinator flights between these species significantly reduced the amount of conspecific pollen deposited on Mimulus stigmas and the number of seeds in Mimulus fruits, suggesting that pollen loss is an important mechanism of competition for pollination. Although pollen loss is often attributed to pollen wastage on heterospecific floral structures, our novel findings suggest that grooming by bees as they forage on a competitor may also significantly reduce outcross pollen export and seed set in Mimulus ringens. |
Design and implementation of web based on Laravel framework | Traditional framework design methods for web development suffer from significant limitations, are time-consuming and present other issues. To address these problems, this paper presents a design and implementation method for web applications based on the Laravel framework. Laravel standardizes the development process and handles some non-business logic automatically. This paper designs and implements a simple Laravel model that automates part of the design work. Experiments and simulations show that web design based on the Laravel framework is scalable and robust, thereby improving development efficiency. |
Smart fabrics and interactive textile enabling wearable personal applications: R&D state of the art and future challenges | Smart fabrics and interactive textiles (SFIT) are fibrous structures that are capable of sensing, actuating, generating/storing power and/or communicating. Research and development towards wearable textile-based personal systems allowing e.g. health monitoring, protection & safety, and healthy lifestyle has gained strong interest during the last 10 years. Under the Information and Communication Programme of the European Commission, a cluster of R&D projects dealing with smart fabrics and interactive textile wearable systems regroups activities along two different and complementary approaches, i.e. “application pull” and “technology push”. This includes projects aiming at personal health management through integration, validation, and use of smart clothing and other networked mobile devices as well as projects targeting the full integration of sensors/actuators, energy sources, processing and communication within the clothes to enable personal applications such as protection/safety, emergency and healthcare. The integration of these technologies into a real SFIT product is at present on the threshold of prototyping and testing. Several issues, technical as well as user-centred, societal and business, remain to be solved. The paper presents ongoing major R&D activities, identifies gaps and discusses key challenges for the future. |
CLOUDS: A Decision Tree Classifier for Large Datasets | Classification for very large datasets has many practical applications in data mining. Techniques such as discretization and dataset sampling can be used to scale up decision tree classifiers to large datasets. Unfortunately, both of these techniques can cause a significant loss in accuracy. We present a novel decision tree classifier called CLOUDS, which samples the splitting points for numeric attributes followed by an estimation step to narrow the search space of the best split. CLOUDS reduces computation and I/O complexity substantially compared to state-of-the-art classifiers, while maintaining the quality of the generated trees in terms of accuracy and tree size. We provide experimental results with a number of real and synthetic datasets. |
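The splitting-point sampling idea can be illustrated with a short sketch (not the actual CLOUDS algorithm, which adds an estimation step to refine the sampled interval): candidate thresholds for a numeric attribute are restricted to a fixed number of quantiles and scored with the Gini index.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p * p)

def best_split_sampled(x, y, n_candidates=32):
    """Pick a split threshold for numeric attribute x by sampling quantiles.

    Instead of scanning every distinct value (the expensive exact method),
    only `n_candidates` quantile points are evaluated, which is the
    sampling-of-splitting-points idea the abstract describes.
    """
    thresholds = np.quantile(x, np.linspace(0.02, 0.98, n_candidates))
    best_t, best_score = None, np.inf
    for t in np.unique(thresholds):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = (x > 0.3).astype(int)
print(best_split_sampled(x, y))   # threshold near 0.3, low weighted Gini
```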
Adding Noun Phrase Structure to the Penn Treebank | The Penn Treebank does not annotate within base noun phrases (NPs), committing only to flat structures that ignore the complexity of English NPs. This means that tools trained on Treebank data cannot learn the correct internal structure of NPs. This paper details the process of adding gold-standard bracketing within each noun phrase in the Penn Treebank. We then examine the consistency and reliability of our annotations. Finally, we use this resource to determine NP structure using several statistical approaches, thus demonstrating the utility of the corpus. This adds detail to the Penn Treebank that is necessary for many NLP applications. |
Egg size and egg mass of Daphnia magna: response to food availability | The influence of different food availability on egg size and egg mass in Daphnia magna Straus was studied in long-term experiments using a flow-through system. Daphnia were either kept at constant high or low food levels or subjected to alternating periods of high food and starvation. Some animals were starved continuously after they had deposited their first clutch of eggs. Eggs were measured and weighed and their density (dry mass per volume) was determined. The results support the model of Glazier (1992), which defines a region of ‘reproductive constraint’ at very low food concentrations and a region of ‘adaptive response’ as food concentration increases. Egg sizes were largest under continuously low food concentrations (0.1 mg C l−1), which indicates that the maximum of Glazier's non-linear response curve is at very low food levels. Eggs produced during starvation were small, probably as a result of reproductive constraints. Egg density was about 0.37 mg dry weight mm−3 and did not differ between treatments. |
Demonstration of 3 kV 4H-SiC reverse blocking MOSFET | The authors developed 3 kV 4H-SiC reverse blocking (RB) metal-oxide-semiconductor field-effect transistors (MOSFETs) for the first time. To achieve reverse blocking capability, the n+-substrate layer was removed by polishing, and both a Schottky contact and edge-termination structure were introduced onto the wafer backside. Fabricated SiC RB MOSFETs exhibited good Schottky characteristics, and measured differential specific on-resistance was 20 mΩ·cm². Both forward and reverse blocking voltages of RB MOSFETs are higher than 3 kV. On-state power loss of a developed RB MOSFET is 35% lower than that of anti-serially connected standard 3 kV SiC MOSFETs, demonstrating the advantage of the developed RB MOSFET as a high-voltage bi-directional switch. |
Bus detection system for blind people using RFID | This paper presents a bus detection system using RFID technology that aims to ease the traveling and movement of blind people. The proposed system consists of two detection subsystems; one on the buses and the other on the bus stations, database system and a website. In the bus detection subsystem, the nearby stations will be easily detected and then announced through a voice message inside the bus. Moreover, any existing blind person in the surrounding area of the station will be detected by the bus subsystem to alert the bus driver about the number of blind persons. In the bus station subsystem, the coming buses will be detected and then announced in the station in order to alert the blind people. A complete system prototype has been constructed and tested to validate the proposed system. The results show that the system performance is promising in terms of system functionality, safety, and cost. |
GPS-denied Indoor and Outdoor Monocular Vision Aided Navigation and Control of Unmanned Aircraft | GPS-denied closed-loop autonomous control of unstable Unmanned Aerial Vehicles (UAVs) such as rotorcraft using information from a monocular camera has been an open problem. Most proposed Vision-aided Inertial Navigation Systems (V-INS) have been too computationally intensive or do not have sufficient integrity for closed-loop flight. We provide an affirmative answer to the question of whether V-INS can be used to sustain prolonged real-world GPS-denied flight by presenting a V-INS that is validated through autonomous flight tests over prolonged closed-loop dynamic operation in both indoor and outdoor GPS-denied environments with two rotorcraft UAS. The architecture efficiently combines visual feature information from a monocular camera with measurements from inertial sensors. Inertial measurements are used to predict the frame-to-frame transition of online-selected feature locations, and the difference between predicted and observed feature locations is used to bound the inertial measurement unit drift in real time, estimate its bias, and account for initial misalignment errors. A novel algorithm to manage a library of features online is presented that can add or remove features based on a measure of relative confidence in each feature location. The resulting V-INS is sufficiently efficient and reliable to enable real-time implementation on resource-constrained aerial vehicles. The presented algorithms are validated on multiple platforms in real-world conditions: through a 16-minute flight test, including an autonomous landing, of a 66 kg rotorcraft UAV operating in an uncontrolled outdoor environment without using GPS, and through a micro-UAV operating in a cluttered, unmapped, and gusty indoor environment. |
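A minimal sketch of the online feature-library management idea described in the abstract, assuming a scalar confidence per feature that rises when the feature is re-observed and decays otherwise; the thresholds and update rule are invented for illustration and are not the authors' algorithm.

class FeatureLibrary:
    """Keeps at most max_features tracked features, each with a confidence score in [0, 1]."""

    def __init__(self, max_features=50, add_conf=0.5, drop_below=0.1):
        self.max_features = max_features
        self.add_conf = add_conf
        self.drop_below = drop_below
        self.features = {}  # feature_id -> confidence

    def update(self, matched_ids, candidate_ids):
        # Reward features re-observed this frame, decay and prune the ones that were not.
        for fid in list(self.features):
            if fid in matched_ids:
                self.features[fid] = min(1.0, self.features[fid] + 0.2)
            else:
                self.features[fid] *= 0.7
                if self.features[fid] < self.drop_below:
                    del self.features[fid]
        # Fill any free slots with newly detected candidate features.
        for fid in candidate_ids:
            if len(self.features) >= self.max_features:
                break
            self.features.setdefault(fid, self.add_conf)

lib = FeatureLibrary(max_features=3)
lib.update(matched_ids=set(), candidate_ids=["f1", "f2", "f3"])
lib.update(matched_ids={"f1"}, candidate_ids=["f4"])
print(lib.features)  # f1's confidence rises; f2 and f3 decay and will eventually be dropped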
Expression invariant 3D face recognition with a Morphable Model | We describe an expression-invariant method for face recognition by fitting an identity/expression-separated 3D Morphable Model to shape data. The expression model greatly improves recognition and retrieval rates in the uncooperative setting, while achieving recognition rates on par with the best algorithms in the Face Recognition Vendor Test. The fitting is performed with a robust nonrigid ICP algorithm. It is able to perform face recognition in a fully automated scenario and on noisy data. The system was evaluated on two datasets, one with a high noise level and strong expressions, and the standard UND range scan database, showing that while expression invariance increases recognition and retrieval performance for the expression dataset, it does not decrease performance on the neutral dataset. The high recognition rates are achieved even with a purely shape-based method, without taking image data into account. |
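The paper's fitting step relies on a robust nonrigid ICP; as a hedged, much simplified illustration of the underlying idea, the rigid variant below alternates nearest-neighbour matching with a closed-form (Kabsch) pose solve. The point clouds, iteration count, and lack of robustness weighting are illustrative simplifications, not the authors' method.

import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, iterations=20):
    """Simplified rigid ICP: alternate nearest-neighbour matching and a best-fit R, t solve."""
    src = source.copy()
    for _ in range(iterations):
        _, idx = cKDTree(target).query(src)        # 1. match each source point to its nearest target point
        matched = target[idx]
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)    # 2. Kabsch: best-fit rotation from the cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t                        # 3. apply the estimated transform and repeat
    return src

rng = np.random.default_rng(0)
target = rng.normal(size=(200, 3))
angle = np.pi / 16
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.3, -0.1, 0.2])
aligned = icp_rigid(source, target)
print(np.abs(aligned - target).max())  # the residual should shrink to near zero for this small perturbation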
A novel secure hash algorithm for public key digital signature schemes | Hash functions are the most widespread of all cryptographic primitives and are currently used in multiple cryptographic schemes and security protocols. This paper presents a new secure hash algorithm called SHA-192. It builds on the well-known secure hash algorithm family published by the National Institute of Standards and Technology (NIST). The basic design goal of SHA-192 is an output length of 192 bits. SHA-192 has been designed to satisfy different levels of enhanced security and to resist advanced attacks on SHA. The security analysis compares SHA-192 with the older NIST algorithm and shows improved security and excellent results, as presented in our discussion. In this paper, the digital signature algorithm specified by NIST is modified to use the proposed SHA-192, and a new digital signature scheme based on it is also proposed. SHA-192 can be used in many applications such as public-key cryptosystems, digital signcryption, message authentication codes, random number generation, and the security architecture of upcoming wireless devices such as software-defined radio. |
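Since SHA-192 itself is not specified in the abstract, the sketch below only illustrates how a hash function slots into a hash-then-sign scheme, using SHA-256 from Python's standard library as a stand-in and the classic textbook-RSA toy parameters; none of this reproduces the paper's algorithm, and the tiny modulus offers no real security.

import hashlib

n, e, d = 3233, 17, 2753  # toy textbook-RSA keys (p=61, q=53), for illustration only

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n  # hash the message, reduce mod n
    return pow(digest, d, n)                                              # sign the digest with the private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest                                 # recover and compare the digest

sig = sign(b"hello")
print(verify(b"hello", sig))     # True
print(verify(b"tampered", sig))  # almost certainly False under this toy modulus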
Quality data extraction methodology based on the labeling of coffee leaves with nutritional deficiencies | Detecting nutritional deficiencies in coffee leaves is a task often undertaken manually by field experts known as agronomists. The process they follow is based on observing the different characteristics of the coffee leaves while relying on their own experience. Visual fatigue and human error in this empiric approach cause leaves to be incorrectly labeled, which degrades the quality of the data obtained. In this context, different crowdsourcing approaches can be applied to enhance the quality of the extracted data. These approaches separately propose the use of voting systems, association rule filters, and evolutive learning. In this paper, we extend the use of association rule filters and the evolutive approach by combining them into a methodology that enhances data quality while guiding users through the main stages of the data extraction tasks. Moreover, our methodology proposes a reward component to engage users and keep them motivated during the crowdsourcing tasks. Applying the proposed methodology in a case study on Peruvian coffee leaves yielded a dataset with 93.33% accuracy, built from 30 instances collected by 8 experts and evaluated by 2 agronomic engineers with a background in coffee leaves. This accuracy was higher than that obtained by independently applying the evolutive feedback strategy or an empiric approach, which resulted in 86.67% and 70% accuracy, respectively, under the same conditions. |
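A minimal sketch of the voting-and-accuracy idea referenced above; the leaf identifiers, deficiency labels, and votes are invented, and the paper's association-rule filter, evolutive feedback, and reward component are not reproduced here.

from collections import Counter

def majority_label(votes):
    """Aggregate crowd votes for one leaf into a single label by simple majority."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical votes from several annotators per leaf, plus an expert gold label for evaluation.
crowd_votes = {
    "leaf-01": ["nitrogen", "nitrogen", "potassium"],
    "leaf-02": ["boron", "boron", "boron"],
    "leaf-03": ["potassium", "nitrogen", "potassium"],
}
gold = {"leaf-01": "nitrogen", "leaf-02": "boron", "leaf-03": "nitrogen"}

predicted = {leaf: majority_label(v) for leaf, v in crowd_votes.items()}
accuracy = sum(predicted[leaf] == gold[leaf] for leaf in gold) / len(gold)
print(predicted, f"accuracy={accuracy:.2%}")  # 2 of 3 leaves labeled correctly -> 66.67%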
Multimodal Machine Learning: A Survey and Taxonomy | Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research. |
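As a hedged illustration of the early- versus late-fusion distinction mentioned in the abstract, the sketch below fuses two toy modalities either by concatenating features before training a single classifier or by averaging the probabilities of per-modality classifiers; the data, dimensions, and choice of logistic regression are arbitrary.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
audio = rng.normal(size=(200, 8))    # toy "audio" features
video = rng.normal(size=(200, 16))   # toy "visual" features
labels = (audio[:, 0] + video[:, 0] > 0).astype(int)

# Early fusion: concatenate modality features, then train a single model.
early = LogisticRegression().fit(np.hstack([audio, video]), labels)

# Late fusion: train one model per modality, then average their predicted probabilities.
clf_a = LogisticRegression().fit(audio, labels)
clf_v = LogisticRegression().fit(video, labels)
late_prob = (clf_a.predict_proba(audio)[:, 1] + clf_v.predict_proba(video)[:, 1]) / 2
late_pred = (late_prob > 0.5).astype(int)

print("early fusion accuracy:", early.score(np.hstack([audio, video]), labels))
print("late fusion accuracy:", (late_pred == labels).mean())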
Cisapride decreases gastroesophageal reflux in preterm infants. | OBJECTIVE
Gastrointestinal prokinetic agents, such as cisapride, are commonly used in pediatric practice to improve gastric emptying, to decrease emesis, to improve lower esophageal sphincter tone, and to improve irritability and feeding aversion associated with gastroesophageal reflux (GER). Although cisapride seems to be effective in infants from 2 months to 14 years old, data for younger and preterm infants are not available. Whether reflux is a significant cause of reflex apnea or feeding intolerance in the preterm infant is controversial. The objective of this 1-year prospective study, started in 1998, was to determine the efficacy of cisapride for treatment of reflux and reflux-associated apnea (RAAP) in preterm infants. Before this study, the diagnosis of reflux was often made clinically and the effect of therapy on reflux or the decision to increase the dose of cisapride was made empirically. The clinical bias was that persistent apnea, not responding to caffeine, was caused by GER. We reasoned that a systematic approach to the diagnosis and treatment of reflux would improve the care of preterm infants and reduce the risk of toxicity, especially if an increased dose of cisapride showed no improvement in reflux or apnea.
STUDY DESIGN
Twenty-four preterm infants (24-36 weeks' gestational age) had clinical apnea/pH studies when they were referred by the attending neonatologist for suspected GER. These infants were born at 28.8 ± 3.1 weeks with birth weight of 1169 ± 387 g (range: 631-2263 g). Each infant was studied before and 8 days after starting cisapride treatment. Cisapride dose was 0.09 to 0.25 mg/kg every 6 hours enterally. Treatment decisions regarding the dose of cisapride were the responsibility of the attending neonatologist. The pH was recorded continuously for 24 hours at 0.25 Hz and was analyzed using EsopHogram software. A single-sensor pH catheter was inserted to ~2 cm above the esophagogastric junction. GER was defined as a drop in esophageal pH below 4.0 for at least 5 seconds, and pathologic GER was defined as a reflux index (RI) >2 standard deviations (SD) from the mean based on published norms for term infants. The following parameters were calculated from the pH recording: number of reflux events per 24 hours, duration of the longest episode, number of episodes >5 minutes per 24 hours, and RI, i.e., percentage of time with pH <4.0. Each study had a combined time-lapse video recording and multichannel digital recording. Recorded parameters were: continuous pulse oximetry, electrocardiogram, respiratory effort (piezo sensor), and airflow (temperature sensor at nostrils and mouth). The recording was scored for central apneas of 10 to 14 seconds and ≥15 seconds (prolonged) and ≥10 seconds for obstructive and mixed apneas. RAAP was scored when an apnea (irrespective of type) occurred within 1 minute of a GER event. Baseline, post-cisapride, and follow-up electrocardiograms were performed because of concern about prolonged QTc and cardiac arrhythmias. The infants were 35.6 ± 4.5 weeks postconceptional age when first studied. Twelve infants (mean birth weight: 1821 ± 749 g; gestational age: 32 ± 2 weeks; postconceptional age: 35.6 ± 2.6 weeks) were identified retrospectively as controls because their baseline GER parameters were within the normal range using Vandenplas' criteria.
RESULTS
Overall, cisapride treatment significantly improved the RI from 16.6 ± 15.2 to 9.1 ± 8.4 SD. The number of reflux episodes ≥5 minutes was reduced from 7.1 ± 5.8 to 4.3 ± 4.4 SD. No significant effect was seen on the total number of refluxes (per 24 hours). Eight infants (33%) had no decrease in the RI after a week of treatment. Three of these infants improved after the cisapride dose was increased from 0.09 to 0.25 mg/kg/dose every 6 hours. Although 0.09 mg/kg/day is the minimum effective dose, 67% of our infants did respond to this low dose. Cisapride was discontinued in 3 infants because of prolonged QTc ≥0.450 seconds (0.473 in 1 and 0.470 in 2). More data about the effect of cisapride on the QTc interval are reported in Pediatrics in a separate article. Only 1 infant showed no improvement with the increased dose. Caffeine treatment had no effect on the baseline or follow-up GER values. Although apnea indexes for central and obstructive apnea were similar before and after cisapride, mixed apnea was less frequent during treatment. There was a significant decrease (0.32 ± 0.40 to 0.12 ± 0.17/hour) in RAAP when the one infant who had increased reflux on the increased dose of cisapride was excluded as an outlier. The statistical difference, before and after cisapride, for the group is significant with the outlier omitted. The clinical significance is unclear because ~50% of the infants had minimal changes in their apnea indexes. Furthermore, ~40% of infants did not have RAAP. (ABSTRACT TRUNCATED) |
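A minimal sketch of the reflux index computation defined in the study design above, i.e., the percentage of recording time with esophageal pH below 4.0; the pH values in the toy trace are invented for illustration.

def reflux_index(ph_samples, threshold=4.0):
    """Reflux index (RI): percentage of samples (i.e., of recording time) with pH below the threshold."""
    below = sum(1 for ph in ph_samples if ph < threshold)
    return 100.0 * below / len(ph_samples)

# Toy 1-minute trace sampled at 0.25 Hz (one sample every 4 seconds).
trace = [5.1, 5.0, 3.8, 3.6, 3.9, 4.2, 5.0, 5.2, 5.3, 5.1, 4.9, 5.0, 5.2, 5.1, 5.0]
print(f"RI = {reflux_index(trace):.1f}%")  # 3 of 15 samples below pH 4.0 -> 20.0%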