title | abstract |
---|---|
Assessment of lexical semantic judgment abilities in alcohol-dependent subjects: An fMRI study | Neuropsychological studies have shown that alcohol dependence is associated with neurocognitive deficits in tasks requiring memory, perceptual motor skills, abstraction and problem solving, whereas language skills are relatively spared in alcoholics despite structural abnormalities in language-related brain regions. To investigate the preserved mechanisms of language processing in alcohol-dependent subjects, functional brain imaging was undertaken in healthy controls (n=18) and alcohol-dependent subjects (n=16) while they completed a lexical semantic judgment task in a 3 T MR scanner. Behavioural data indicated that alcohol-dependent subjects took more time than controls to perform the task, but there was no significant difference in their response accuracy. fMRI data analysis revealed that, while performing the task, the alcoholics showed enhanced activations in the left supramarginal gyrus, the precuneus bilaterally, the left angular gyrus, and the left middle temporal gyrus as compared to control subjects. The extensive activations observed in alcoholics as compared to controls suggest that alcoholics recruit additional brain areas to meet the behavioural demands of equivalent task performance. The results are consistent with previous fMRI studies suggesting either compensatory mechanisms that sustain equivalent task performance or decreased neural efficiency of the relevant brain networks. However, on direct comparison of the two groups, the results did not survive correction for multiple comparisons; therefore, the present findings need further exploration. |
Local partial-likelihood estimation for lifetime data | This paper considers a proportional hazards model, which allows one to examine the extent to which covariates interact nonlinearly with an exposure variable, for analysis of lifetime data. A local partial-likelihood technique is proposed to estimate nonlinear interactions. Asymptotic normality of the proposed estimator is established. The baseline hazard function, the bias and the variance of the local likelihood estimator are consistently estimated. In addition, a one-step local partial-likelihood estimator is presented to facilitate the computation of the proposed procedure and is demonstrated to be as efficient as the fully iterated local partial-likelihood estimator. Furthermore, a penalized local likelihood estimator is proposed to select important risk variables in the model. Numerical examples are used to illustrate the effectiveness of the proposed procedures. |
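The partial likelihood at the heart of the abstract above can be sketched concretely. The following is a minimal pure-Python version of the standard Cox partial log-likelihood for a single covariate with no tied event times; the paper's local (kernel-weighted) and nonlinear-interaction machinery is omitted, and the data, values, and function name are hypothetical illustrations, not from the paper.

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for one covariate, assuming no ties.

    times:  observed times; events: 1 = failure, 0 = censored;
    x: covariate values. Risk set at t_i = subjects with time >= t_i.
    """
    ll = 0.0
    for i in range(len(times)):
        if events[i] == 1:
            # denominator: sum of exp(beta * x_j) over the risk set at t_i
            risk = sum(math.exp(beta * x[j])
                       for j in range(len(times)) if times[j] >= times[i])
            ll += beta * x[i] - math.log(risk)
    return ll

# Hypothetical toy data: four subjects, one censored at t = 5.
times  = [2.0, 3.0, 5.0, 7.0]
events = [1, 1, 0, 1]
x      = [1.0, 0.5, 0.0, -0.5]
print(cox_partial_loglik(0.8, times, events, x))
```

At beta = 0 each event contributes minus the log of its risk-set size, a handy sanity check when testing an implementation. The local partial-likelihood of the paper replaces the single global beta with a kernel-weighted fit around each exposure value.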
Character Recognition Using Convolutional Neural Networks | Pattern recognition is one of the traditional uses of neural networks. When trained with gradient-based learning methods, these networks can learn the classification of input data by example. An introduction to classifiers and gradient-based learning is given. It is shown how several perceptrons can be combined and trained with gradient-based methods. Furthermore, an overview of convolutional neural networks, as well as a real-world example, is discussed. |
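The core operation of the convolutional networks surveyed above can be shown in a few lines. This is a minimal pure-Python "valid" cross-correlation over a 2-D image (the op usually called convolution in CNNs); real networks add channels, strides, padding, and learned kernels, and the image, kernel, and function name here are illustrative assumptions only.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over every position
    where it fits entirely inside the image and take the weighted sum."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel responds where intensity changes left-to-right.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d_valid(img, edge))  # → [[0, 2, 0], [0, 2, 0]]
```

The strong response in the middle column marks the edge between the dark and bright halves; a CNN learns many such kernels from data rather than hand-crafting them.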
CoAPthon: Easy development of CoAP-based IoT applications with Python | The Internet of Things (IoT) vision foresees billions of devices seamlessly integrated into information systems. In this context, the Constrained Application Protocol (CoAP) has been defined as a technology enabler to allow applications to interact with physical objects. In this work we present CoAPthon, an open-source Python-based CoAP library, which aims at simplifying the development of CoAP-enabled IoT applications. The library offers software developers a simple and easy-to-use programming interface to exploit CoAP as a communication protocol for rapid prototyping and deployment of IoT systems. The CoAPthon library is fully compliant with the CoAP RFC and implements in addition popular extensions such as the block-wise transfer and resource observing. |
Distant Supervision with Transductive Learning for Adverse Drug Reaction Identification from Electronic Medical Records | Information extraction and knowledge discovery regarding adverse drug reactions (ADRs) from large-scale clinical texts are useful and much-needed processes. Two major difficulties of this task are the lack of domain experts for labeling examples and the intractable processing of unstructured clinical texts. Even though most previous works have addressed these issues by applying semisupervised learning for the former and a word-based approach for the latter, they face complexity in acquiring initial labeled data and ignore the structured sequences of natural language. In this study, we propose automatic data labeling by distant supervision, where knowledge bases are exploited to assign an entity-level relation label to each drug-event pair in texts, and we then use patterns to characterize the ADR relation. Multiple-instance learning with an expectation-maximization method is employed to estimate model parameters. The method applies transductive learning to iteratively reassign the probability of each unknown drug-event pair at training time. In experiments with 50,998 discharge summaries, we evaluate our method by varying a large number of parameters, that is, pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. Based on these evaluations, our proposed method outperforms the word-based features for NB-EM (iEM), MILR, and TSVM, with F1-score improvements of 11.3%, 9.3%, and 6.5%, respectively. |
Older Adults' Experiences Using a Commercially Available Monitor to Self-Track Their Physical Activity. | BACKGROUND
Physical activity contributes to older adults' autonomy, mobility, and quality of life as they age, yet fewer than 1 in 5 engage in activities as recommended. Many older adults track their exercise using pencil and paper, or their memory. Commercially available physical activity monitors (PAM) have the potential to facilitate these tracking practices and, in turn, physical activity. An assessment of older adults' long-term experiences with PAM is needed to understand this potential.
OBJECTIVE
To assess short- and long-term experiences of adults >70 years old using a PAM (Fitbit One) in terms of acceptance, ease-of-use, and usefulness, the domains of the technology acceptance model.
METHODS
This prospective study included 95 community-dwelling older adults, all of whom received a PAM as part of a randomized controlled trial piloting a fall-reducing physical activity promotion intervention. Ten-item surveys were administered 10 weeks and 8 months after the study started. Survey ratings are described and analyzed over time, and compared by sex, education, and age.
RESULTS
Participants were mostly women (71/95, 75%), 70 to 96 years old, and had some college education (68/95, 72%). Most participants (86/95, 91%) agreed or strongly agreed that the PAM was easy to use, useful, and acceptable both 10 weeks and 8 months after enrolling in the study. Ratings dropped between these time points in all survey domains: ease-of-use (median difference 0.66 points, P=.001); usefulness (median difference 0.16 points, P=.193); and acceptance (median difference 0.17 points, P=.032). Differences in ratings by sex or educational attainment were not statistically significant at either time point. Most participants 80+ years of age (28/37, 76%) agreed or strongly agreed with survey items at long-term follow-up, however their ratings were significantly lower than participants in younger age groups at both time points.
CONCLUSIONS
Study results indicate it is feasible for older adults (70-90+ years of age) to use PAMs when self-tracking their physical activity, and provide a basis for developing recommendations to integrate PAMs into promotional efforts.
TRIAL REGISTRATION
Clinicaltrials.gov NCT02433249; https://clinicaltrials.gov/ct2/show/NCT02433249 (Archived by WebCite at http://www.webcitation.org/6gED6eh0I). |
Validation of biomarkers that complement CA19.9 in detecting early pancreatic cancer. | PURPOSE
Pancreatic ductal adenocarcinoma (PDAC) is a significant cause of cancer mortality. Carbohydrate antigen 19.9 (CA19.9), the only tumor marker available to detect and monitor PDAC, is not sufficiently sensitive and specific to consistently differentiate early cancer from benign disease. In this study, we aimed to validate recently discovered serum protein biomarkers for the early detection of PDAC and ultimately develop a biomarker panel that could discriminate PDAC from other benign disease better than the existing marker CA19.9.
PATIENTS AND METHODS
We performed a retrospective blinded evaluation of 400 serum samples collected from individuals recruited on a consecutive basis. The sample population consisted of 250 individuals with PDAC at various stages, 130 individuals with benign conditions and 20 healthy individuals. The serum levels of each biomarker were determined by ELISAs or automated immunoassay.
RESULTS
By randomly splitting matched samples into a training (n = 186) and validation (n = 214) set, we were able to develop and validate a biomarker panel consisting of CA19.9, CA125, and LAMC2 that significantly improved the performance of CA19.9 alone. Improved discrimination was observed in the validation set between all PDAC and benign conditions (AUC(CA19.9) = 0.80 vs. AUC(CA19.9+CA125+LAMC2) = 0.87; P < 0.005) as well as between early-stage PDAC and benign conditions (AUC(CA19.9) = 0.69 vs. AUC(CA19.9+CA125+LAMC2) = 0.76; P < 0.05) and between early-stage PDAC and chronic pancreatitis (CP; AUC(CA19.9) = 0.59 vs. AUC(CA19.9+CA125+LAMC2) = 0.74; P < 0.05).
CONCLUSIONS
The data demonstrate that a serum protein biomarker panel consisting of CA125, CA19.9, and LAMC2 is able to significantly improve upon the performance of CA19.9 alone in detecting PDAC. |
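The AUC figures reported above have a simple empirical interpretation: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal pure-Python sketch of that computation (the Mann-Whitney formulation) is below; the marker values and function name are hypothetical illustrations, not data from the study.

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of case/control pairs where the case
    scores higher (ties count 1/2) — the normalized Mann-Whitney U."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical marker values for cases vs. benign controls.
cases    = [40, 55, 80, 120]
controls = [10, 35, 50]
print(auc(cases, controls))  # ≈ 0.917
```

A combined panel improves the AUC exactly when the composite score orders more case/control pairs correctly than any single marker does, which is what the CA19.9+CA125+LAMC2 results quantify.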
Physical layer security in wireless networks: a tutorial | Wireless networking plays an extremely important role in civil and military applications. However, security of information transfer via wireless networks remains a challenging issue. It is critical to ensure that confidential data are accessible only to the intended users rather than intruders. Jamming and eavesdropping are two primary attacks at the physical layer of a wireless network. This article offers a tutorial on several prevalent methods to enhance security at the physical layer in wireless networks. We classify these methods based on their characteristic features into five categories, each of which is discussed in terms of two metrics. First, we compare their secret channel capacities, and then we show their computational complexities in exhaustive key search. Finally, we illustrate their security requirements via some examples with respect to these two metrics. |
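The secret channel capacity the tutorial compares methods by has a closed form in the classical Gaussian wiretap model: the legitimate channel's capacity minus the eavesdropper's, floored at zero. A small sketch, with hypothetical SNR values and a function name of my choosing:

```python
import math

def secrecy_capacity(snr_main, snr_eve):
    """Secrecy capacity (bits per channel use) of a Gaussian wiretap
    channel: main-channel capacity minus eavesdropper capacity, >= 0."""
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

# A positive secrecy rate exists only when the legitimate link is stronger.
print(secrecy_capacity(15.0, 3.0))   # → 2.0
print(secrecy_capacity(3.0, 15.0))   # → 0.0
```

This captures why many physical-layer techniques (beamforming, artificial noise, jamming the eavesdropper) amount to widening the SNR gap between the intended receiver and the intruder.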
Can peer educators influence healthy eating in people with diabetes? Results of a randomized controlled trial. | AIMS
To assess whether the Expert Patient Programme (EPP), adapted for people with Type 2 diabetes, can be used to promote healthy eating to improve glycaemic control.
METHODS
Adults with Type 2 diabetes (n = 317) were randomized to receive either a diabetes-specific EPP (n = 162) or individual one-off appointments with a dietitian (control group) (n = 155). The diabetes-specific EPP followed the standard National Health Service programme although all participants in the group had diabetes only, rather than a mix of chronic conditions. Participants attended a group session for 2 h once per week for 6 weeks. In addition, a final seventh-week 2-h session was included that was specific to issues concerning diabetes. Outcomes were assessed at baseline, 6 and 12 months.
RESULTS
There were no statistically significant differences between the control and the intervention group in any of the clinical outcomes measured. There was no significant difference between the groups in any dietary outcome. There was a higher starch intake in the EPP group, although this did not reach statistical significance (effect size for starch adjusted for baseline values 8.8 g; 95% CI -1.3 to 18.9). There was some loss of participants between baseline measurement and randomization, although this did not appear to have had an important impact on baseline balance.
CONCLUSIONS
In this study of people with Type 2 diabetes, the EPP approach was not effective in changing measures of diabetes control or diet. |
In Vivo Assessment of Architecture and Micro-Finite Element Analysis Derived Indices of Mechanical Properties of Trabecular Bone in the Radius | Measurement of microstructural parameters of trabecular bone noninvasively in vivo is possible with high-resolution magnetic resonance (MR) imaging. These measurements may prove useful in the determination of bone strength and fracture risk, but must be related to other measures of bone properties. In this study in vivo MR imaging was used to derive trabecular bone structure measures and combined with micro-finite element analysis (μFE) to determine the effects of trabecular bone microarchitecture on bone mechanical properties in the distal radius. The subjects were studied in two groups: (I) postmenopausal women with normal bone mineral density (BMD) (n= 22, mean age 58 ± 7 years) and (II) postmenopausal women with spine or femur BMD −1 SD to −2.5 SD below young normal (n= 37, mean age 62 ± 11 years). MR images of the distal radius were obtained at 1.5 T, and measures such as apparent trabecular bone volume fraction (App BV/TV), spacing, number and thickness (App TbSp, TbN, TbTh) were derived in regions of interest extending from the joint line to the radial shaft. The high-resolution images were also used in a micro-finite element model to derive the directional Young’s moduli (E1, E2 and E3), shear moduli (G12, G23 and G13) and anisotropy ratios such as E1/E3. BMD at the distal radius, lumbar spine and hip were assessed using dual-energy X-ray absorptiometry (DXA). Bone formation was assessed by serum osteocalcin and bone resorption by serum type I collagen C-terminal telopeptide breakdown products (serum CTX) and urinary CTX biochemical markers. The trabecular architecture displayed considerable anisotropy. Measures of BMD such as the ultradistal radial BMD were lower in the osteopenic group (p<0.01). 
Biochemical markers were comparable between the two groups, with no significant difference. App BV/TV, TbTh and TbN were higher, and App TbSp lower, in the normal group than the osteopenic group. All three directional measures of elastic and shear moduli were lower in the osteopenic group compared with the normal group. Anisotropy of trabecular bone microarchitecture, as measured by the ratios of the mean intercept length (MIL) values (MIL1/MIL3, etc.), and the anisotropy in elastic modulus (E1/E3, etc.), were greater in the osteopenic group compared with the normal group. The correlations between the measures of architecture and moduli are higher than those between elastic moduli and BMD. Stepwise multiple regression analysis showed that while App BV/TV is highly correlated with the mechanical properties, additional structural measures do contribute to the improved prediction of the mechanical measures. This study demonstrates the feasibility and potential of using MR imaging with μFE modeling in vivo in the study of osteoporosis. |
Adversarial Symmetric Variational Autoencoder | A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihoods of observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence between the joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets. |
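The symmetric Kullback-Leibler divergence in the objective above is just KL(p||q) + KL(q||p). A minimal pure-Python version for discrete distributions illustrates the quantity (the paper applies it to continuous joint densities over data and codes via adversarial estimation; the distributions and function name here are illustrative assumptions):

```python
import math

def symmetric_kl(p, q):
    """Symmetric KL divergence KL(p||q) + KL(q||p) for two discrete
    distributions given as lists of probabilities over the same support."""
    kl_pq = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    kl_qp = sum(qi * math.log(qi / pi) for pi, qi in zip(p, q) if qi > 0)
    return kl_pq + kl_qp

p = [0.5, 0.5]
q = [0.9, 0.1]
print(symmetric_kl(p, p))  # → 0.0 (identical distributions)
print(symmetric_kl(p, q))  # positive, and the same under argument swap
```

Unlike the one-sided KL used in a standard VAE bound, this quantity penalizes mismatch in both directions, which is what makes the two joint forms (encoder path and decoder path) agree at the optimum.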
Conditional Models of Identity Uncertainty with Application to Noun Coreference | Coreference analysis, also known as record linkage or identity uncertainty, is a difficult and important problem in natural language processing, databases, citation matching and many other tasks. This paper introduces several discriminative, conditional-probability models for coreference analysis, all examples of undirected graphical models. Unlike many historical approaches to coreference, the models presented here are relational—they do not assume that pairwise coreference decisions should be made independently from each other. Unlike other relational models of coreference that are generative, the conditional model here can incorporate a great variety of features of the input without having to be concerned about their dependencies—paralleling the advantages of conditional random fields over hidden Markov models. We present positive results on noun phrase coreference in two standard text data sets. |
The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus | From our three-year experience of developing a large-scale corpus of annotated Arabic text, our paper will address the following: (a) review pertinent Arabic language issues as they relate to methodology choices; (b) explain our choice to use the Penn English Treebank style of guidelines (requiring the Arabic-speaking annotators to deal with a new grammatical system) rather than doing the annotation in a more traditional Arabic grammar style (requiring NLP researchers to deal with a new system); (c) show several ways in which human annotation is important and automatic analysis difficult, including the handling of orthographic ambiguity by both the morphological analyzer and human annotators; (d) give an illustrative example of the Arabic Treebank methodology, focusing on a particular construction in both morphological analysis and tagging and syntactic analysis and following it in detail through the entire annotation process; and finally, (e) conclude with what has been achieved so far and what remains to be done. |
The quantum field theory of laser acceleration | We determine the scattering rate and the energy loss of electrons due to a laser photon beam. From the energy loss formula we determine the force accelerating an electron by the laser photon beam and the corresponding relativistic dynamical equation describing its motion. Numerically, we calculate the velocity of the electron after an acceleration time Δt = 0.1 s. |
Dynamic Charging of Electric Vehicles by Wireless Power Transfer | In recent times, wireless power charging of electric vehicles (EVs) has gained significant attention. Static wireless charging for EVs has seamlessly been achieved using inductive power transfer (IPT) technology. More recently, dynamic (also termed in-motion) wireless charging with IPT technology has been successfully demonstrated for electric mass transportation means such as electric trains, trams, buses, and utility vehicles. This special section looks forward to high-quality manuscripts highlighting the state of the art in dynamic wireless charging of EVs. Papers are welcomed on the analysis, design, prototype development, and testing of wireless systems for dynamic EV charging. Advanced research on system-relevant issues such as coupling coils, coil misalignment compensation, power electronics converters, and LC-compensation circuitry for dynamic wireless charging systems is also welcome. Papers presenting quality work on both IPT and non-IPT wireless technology for dynamic wireless EV charging will be considered. Topics of interest of this Special Section include, but are not limited to: |
From Cellular Cultures to Cellular Spheroids: Is Impedance Spectroscopy a Viable Tool for Monitoring Multicellular Spheroid (MCS) Drug Models? | The use of 3-D multicellular spheroid (MCS) models is increasingly being accepted as a viable means to study cell-cell, cell-matrix and cell-drug interactions. Behavioral differences between traditional monolayer (2-D) cell cultures and more recent 3-D MCS confirm that 3-D MCS more closely model the in vivo environment. However, analyzing the effect of pharmaceutical agents on both monolayer cultures and MCS is very time intensive. This paper reviews the use of electrical impedance spectroscopy (EIS), a label-free whole cell assay technique, as a tool for automated screening of cell drug interactions in MCS models for biologically/physiologically relevant events over long periods of time. EIS calculates the impedance of a sample by applying an AC current through a range of frequencies and measuring the resulting voltage. This review will introduce techniques used in impedance-based analysis of 2-D systems; highlight recently developed impedance-based techniques for analyzing 3-D cell cultures; and discuss applications of 3-D culture impedance monitoring systems. |
Ginger and its health claims: molecular aspects. | Recent research has rejuvenated centuries-old traditional herbs to cure various ailments by using modern tools like diet-based therapy and other regimens. Ginger is one of the classic examples of an herb used not only in culinary preparations but also for its unique therapeutic significance, owing to its antioxidant, antimicrobial, and anti-inflammatory potential. The health-enhancing perspectives of ginger are mainly attributed to its pungent fractions, namely gingerols, shogaols, and paradols, and to volatile constituents like sesquiterpenes and monoterpenes. This review elucidates the health claims of ginger and their molecular aspects and targets, with special reference to anticancer perspectives, immunonutrition, antioxidant potential, and cardiovascular cure. The molecular targets involved in chemoprevention include inhibition of NF-κB activation via impaired nuclear translocation, suppression of cIAP1 expression, increased caspase-3/7 activation, cell-cycle arrest in the G2 + M phases, up-regulation of cytochrome c and Apaf-1, activation of PI3K/Akt/IκB kinases (IKK), suppression of cell proliferation, and induction of apoptosis and chromatin condensation. Similarly, facts are presented regarding the anti-inflammatory response of ginger components and its molecular targets, including inhibition of prostaglandin and leukotriene biosynthesis and suppression of 5-lipoxygenase. Furthermore, inhibition of the phosphorylation of three mitogen-activated protein kinases (MAPKs), extracellular signal-regulated kinases 1 and 2 (ERK1/2), and c-Jun N-terminal kinase (JNK) is also discussed. The role of ginger in reducing the extent of cardiovascular disorders, diabetes mellitus, and digestive problems is also described in detail. Although current review articles have summarized the literature pertaining to ginger and its components, the authors are of the view that further rigorous research should be carried out. |
Neural synchronization deficits linked to cortical hyper-excitability and auditory hypersensitivity in fragile X syndrome | BACKGROUND
Studies in the fmr1 KO mouse demonstrate hyper-excitability and increased high-frequency neuronal activity in sensory cortex. These abnormalities may contribute to prominent and distressing sensory hypersensitivities in patients with fragile X syndrome (FXS). The current study investigated functional properties of auditory cortex using a sensory entrainment task in FXS.
METHODS
EEG recordings were obtained from 17 adolescents and adults with FXS and 17 age- and sex-matched healthy controls. Participants heard an auditory chirp stimulus generated using a 1000-Hz tone that was amplitude modulated by a sinusoid linearly increasing in frequency from 0-100 Hz over 2 s.
RESULTS
Single trial time-frequency analyses revealed decreased gamma band phase-locking to the chirp stimulus in FXS, which was strongly coupled with broadband increases in gamma power. Abnormalities in gamma phase-locking and power were also associated with theta-gamma amplitude-amplitude coupling during the pre-stimulus period and with parent reports of heightened sensory sensitivities and social communication deficits.
CONCLUSIONS
This represents the first demonstration of neural entrainment alterations in FXS patients and suggests that fast-spiking interneurons regulating synchronous high-frequency neural activity have reduced functionality. This reduced ability to synchronize high-frequency neural activity was related to the total power of background gamma band activity. These observations extend findings from fmr1 KO models of FXS, characterize a core pathophysiological aspect of FXS, and may provide a translational biomarker strategy for evaluating promising therapeutics. |
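The phase-locking measure underlying the chirp analysis above is the inter-trial phase coherence: the magnitude of the average unit phasor of the oscillatory phase across trials at a given frequency and time. A minimal sketch (the phase values and function name are hypothetical; real pipelines extract phases via time-frequency decomposition of the EEG first):

```python
import cmath

def phase_locking(phases):
    """Inter-trial phase-locking value: magnitude of the mean unit phasor
    across trials. 1 = perfectly reproducible phase; near 0 = random."""
    n = len(phases)
    return abs(sum(cmath.exp(1j * ph) for ph in phases) / n)

# Tightly clustered phases lock strongly; scattered phases cancel out.
locked    = [0.1, 0.12, 0.09, 0.11]   # radians, one value per trial
scattered = [0.0, 1.6, 3.1, 4.7]
print(phase_locking(locked))     # close to 1
print(phase_locking(scattered))  # close to 0
```

Reduced gamma phase-locking in FXS, in this framing, means the stimulus-driven phase fails to reproduce across trials even while total (background) gamma power is elevated.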
Confined lateral selective epitaxial growth of silicon for device fabrication | An epitaxy technique, confined lateral selective epitaxial growth (CLSEG), which produces wide, thin slabs of single-crystal silicon over insulator using only conventional processing, is discussed. As-grown CLSEG films 0.9 μm thick, 8.0 μm wide, and 500 μm long were produced at 1000 °C at reduced pressure. Junction diodes fabricated in CLSEG material show ideality factors of 1.05 with reverse leakage currents comparable to those of diodes built in SEG homoepitaxial material. Metal-gate p-channel MOSFETs in CLSEG with channel dopings of 2×10^16 cm^-3 exhibit average mobilities of 283 cm^2/V·s and subthreshold slopes of 223 mV/decade. |
Vision-based state estimation for autonomous rotorcraft MAVs in complex environments | In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcrafts frequently operate in hover or near-hover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with those of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments. |
NFC-based Mobile Payment Protocol with User Anonymity | Following the developments of wireless and mobile communication technologies, mobile commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider user anonymity during transactions. This means that it is possible to trace the identity of a payer from an M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features: (1) Anonymity. It prevents the disclosure of the user's identity by using virtual identities instead of the real identity during transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All transactions are either encrypted or signed by the sender, so confidentiality and authenticity are preserved. |
Face recognition with liveness detection using eye and mouth movement | The recent literature on face recognition technology discusses the issue of face spoofing, which can bypass the authentication system by placing a photo/video/mask of the enrolled person in front of the camera. This problem can be minimized by detecting the liveness of the person. Therefore, in this paper, we propose a robust liveness detection scheme based on a challenge-and-response method. The liveness module is added as an extra layer of security before the face recognition module. The liveness module utilizes face macro features, especially eye and mouth movements, in order to generate random challenges and observe the user's response to them. The reliability of the liveness module is tested by mounting different types of spoofing attacks by various means, such as photographs and videos. In all, five types of attacks have been taken care of and prevented by our system. Experimental results show that the system is able to detect liveness when subjected to all these attacks except the eye and mouth imposter attack. This attack is able to bypass the liveness test, but it creates massive changes in the face structure, and the face is therefore left unrecognized or misclassified by the face recognition module. An experimental test conducted on 65 persons on the University of Essex face database confirms that removal of the eye and nose components results in 75% misclassification. |
Collaborative conceptual design: A large software project case study | During software development, the activities of requirements analysis, functional specification, and architectural design all require a team of developers to converge on a common vision of what they are developing. There have been remarkably few studies of conceptual design during real projects. In this paper, we describe a detailed field study of a large industrial software project. We observed the development team's conceptual design activities for three months with follow-up observations and discussions over the following eight months. In this paper, we emphasize the organization of the project and how patterns of collaboration affected the team's convergence on a common vision. Three observations stand out: First, convergence on a common vision was not only painfully slow but was punctuated by several reorientations of direction; second, the design process seemed to be inherently forgetful, involving repeated resurfacing of previously discussed issues; finally, a conflict of values persisted between team members responsible for system development and those responsible for overseeing the development process. These findings have clear implications for collaborative support tools and process interventions. |
Oscillations, neutrino masses and scales of new physics | We show that all the available experimental information involving neutrinos can be accounted for within the framework of already existing models where neutrinos have zero mass at tree level, but obtain a small Dirac mass by radiative corrections. |
NFC and its application to mobile payment: Overview and comparison | NFC (Near Field Communication) is a recently emerging technology for short range communications aimed to enhance existing near field technologies such as RFID (Radio Frequency Identification). In this paper, NFC is introduced in terms of operation principles and compared with the existing short range communication technologies. The NFC-enabled mobile systems are technically discussed with respect to architecture and operating modes. Then, NFC as a mobile payment solution is analyzed in terms of security and compared with other existing mobile payment solutions by observing various metrics. |
High resolution velocity estimation for all digital, AC servo drives | Because the position transducers commonly used (optical encoders and electromagnetic resolvers) do not inherently produce a true, instantaneous velocity measurement, some signal processing techniques are generally used to estimate the velocity at each sample instant. This estimated signal is then used as the velocity feedback signal for the velocity loop control. An analysis is presented of the limitations of such approaches, and a technique which optimally estimates the velocity at each sample instant is presented. The method is shown to offer a significant improvement in command-driven systems and to reduce the effect of quantized angular resolution which limits the ultimate performance of all digital servo drives. The noise reduction is especially relevant for AC servo drives due to the high current loop bandwidths required for their correct operation. The method demonstrates improved measurement performance over a classical DC tachometer. |
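As a toy illustration of the problem this abstract describes, the sketch below contrasts naive backward differencing of a quantized encoder signal with a simple alpha-beta tracking observer. The gains, sample time, and encoder resolution are made-up values, and the alpha-beta filter is a generic stand-in for, not a reconstruction of, the paper's optimal estimator.

```python
def backward_difference(positions, dt):
    """Naive velocity estimate: first difference of quantized positions."""
    return [(b - a) / dt for a, b in zip(positions, positions[1:])]

def alpha_beta_observer(positions, dt, alpha=0.5, beta=0.1):
    """Simple tracking observer; smooths quantization noise in velocity."""
    x, v = positions[0], 0.0
    estimates = []
    for z in positions[1:]:
        x += v * dt                 # predict position forward one sample
        r = z - x                   # innovation (measurement residual)
        x += alpha * r              # correct position estimate
        v += (beta / dt) * r        # correct velocity estimate
        estimates.append(v)
    return estimates

dt, true_v, counts_per_unit = 0.001, 12.34, 1000.0
# quantized encoder readings of a constant-velocity motion
pos = [int(true_v * k * dt * counts_per_unit) / counts_per_unit
       for k in range(500)]
bd = backward_difference(pos, dt)
ab = alpha_beta_observer(pos, dt)
# mean squared velocity error after the observer transient
err_bd = sum((v - true_v) ** 2 for v in bd[100:]) / len(bd[100:])
err_ab = sum((v - true_v) ** 2 for v in ab[100:]) / len(ab[100:])
print(err_bd, err_ab)
```

With these illustrative numbers the differenced signal toggles between adjacent count rates, while the observer's estimate stays close to the true velocity, mirroring the quantization-noise argument above.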
Fixed point theorems for generalized F-contractions in b-metric-like spaces | In this paper, we introduce some new F-contractions in b-metric-like spaces and investigate some fixed point theorems for such F-contractions. The presented theorems generalize related results in the literature. An example is also given to support our main result. |
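For orientation, a hedged sketch of the classical Wardowski F-contraction condition that results of this kind generalize, stated here in an ordinary metric space (the paper's b-metric-like setting modifies the distance axioms):

```latex
% Wardowski-type F-contraction (metric-space version, for orientation only)
Let $F\colon(0,\infty)\to\mathbb{R}$ be strictly increasing. A self-map
$T$ of a metric space $(X,d)$ is an \emph{$F$-contraction} if there
exists $\tau>0$ such that, for all $x,y\in X$,
\[
  d(Tx,Ty)>0 \;\Longrightarrow\;
  \tau + F\bigl(d(Tx,Ty)\bigr) \le F\bigl(d(x,y)\bigr).
\]
Under Wardowski's additional conditions on $F$, such a $T$ has a unique
fixed point, obtained as the limit of the Picard iterates $T^{n}x_{0}$.
```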
ACADEMIC PRESSURE AND IMPACT ON STUDENTS' DEVELOPMENT IN CHINA | This paper examines the enormous pressure Chinese students must bear at home and in school in order to obtain high academic achievement. The authors look at students' lives from their own perspective and study the impact of home and school pressures on students' intellectual, psychological, and physical development. Cultural, political, and economic factors are analyzed to provide an explanation of the situation. The paper raises questions as to what the purpose of education is and argues for the importance of balancing educational goals with other aspects of students' lives. |
Caching based socially-aware D2D communications in wireless content delivery networks: a hypergraph framework | The emergence of the D2D communications paradigm has transformed the way in which cellular networks are operated. Previous work in D2D communications, however, mainly focused on interference management and capacity maximization of both D2D links and cellular links. Recent literature has observed that the quality of experience (QoE) can be greatly enhanced by caching contents in mobile devices, with a carefully designed caching strategy. In this article, we propose a novel hypergraph framework that designs the caching based D2D communication scheme by taking social ties among users and common interests into consideration. We first present the considered features of the D2D transmission scheme, social characteristics consideration, and interest similarity impact. The key concepts of hypergraph and related techniques, such as hypergraph coloring and multidimensional matching, are explained. Then some design issues with simulation results in the proposed framework are discussed in detail, which show the potential of the proposed approach. By jointly considering social ties, common interests, and the D2D transmission scheme, we believe the proposed framework explores new opportunities and future directions in caching based socially-aware D2D communications. |
Coordinated Charging of Plug-In Hybrid Electric Vehicles to Minimize Distribution System Losses | As the number of plug-in hybrid vehicles (PHEVs) increases, so might the impacts on the power system performance, such as overloading, reduced efficiency, power quality, and voltage regulation particularly at the distribution level. Coordinated charging of PHEVs is a possible solution to these problems. In this work, the relationship between feeder losses, load factor, and load variance is explored in the context of coordinated PHEV charging. From these relationships, three optimal charging algorithms are developed which minimize the impacts of PHEV charging on the connected distribution system. The application of the algorithms to two test systems verifies these relationships approximately hold independent of system topology. They also show the additional benefits of reduced computation time and problem convexity when using load factor or load variance as the objective function rather than system losses. This is important for real-time dispatching of PHEVs. |
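The link between load variance and coordinated charging can be illustrated with a minimal greedy "valley-filling" scheduler. The base load profile and EV demand below are invented numbers, and the greedy rule is a generic stand-in rather than one of the paper's three algorithms.

```python
def variance(load):
    m = sum(load) / len(load)
    return sum((x - m) ** 2 for x in load) / len(load)

def valley_fill(base_load, ev_energy_slots):
    """Place each unit-energy charging increment into the currently
    least-loaded slot, greedily driving down the load variance."""
    load = list(base_load)
    for _ in range(ev_energy_slots):
        i = load.index(min(load))
        load[i] += 1.0
    return load

base = [60, 55, 40, 30, 25, 30, 50, 70, 80, 85, 75, 65]  # MW per 2 h slot
demand = 120  # total EV charging increments (MW-slots) to schedule

# uncoordinated: all charging piles onto the evening-peak slots 7-9
uncoordinated = [b + (demand / 3 if i in (7, 8, 9) else 0)
                 for i, b in enumerate(base)]
coordinated = valley_fill(base, demand)
print(variance(uncoordinated), variance(coordinated))
```

Both schedules deliver the same total energy, but the valley-filled profile has a far lower variance, which is the proxy objective the abstract relates to feeder losses.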
Delayed Rejection in Reversible Jump Metropolis-Hastings | In a Metropolis-Hastings algorithm, rejection of proposed moves is an intrinsic part of ensuring that the chain converges to the intended target distribution. However, persistent rejection, perhaps in particular parts of the state space, may indicate that locally the proposal distribution is badly calibrated to the target. As an alternative to careful off-line tuning of state-dependent proposals, the basic algorithm can be modified so that on rejection, a second attempt to move is made. A different proposal can be generated from a new distribution, that is allowed to depend on the previously rejected proposal. We generalise this idea of delaying the rejection and adapting the proposal distribution, due to Tierney and Mira (1999), to generate a more flexible class of methods, that in particular applies to a variable dimension setting. The approach is illustrated by two pedagogical examples, and a more realistic application, to a change-point analysis for point processes. |
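A minimal one-dimensional sketch of the two-stage delayed-rejection idea (fixed-dimension case only, not the variable-dimension generalization): a bold first proposal, and on rejection a timid second one, accepted with the delayed-rejection ratio. The symmetric second-stage kernel is chosen so that its terms cancel in the ratio; the target, scales, and iteration counts are illustrative assumptions.

```python
import math, random

random.seed(0)

def log_pi(x):
    return -0.5 * x * x                  # unnormalized N(0, 1) log-density

def q1(a, b, s):
    return math.exp(-0.5 * ((a - b) / s) ** 2)   # symmetric Gaussian kernel

def dr_mh(n, x=0.0, s1=8.0, s2=0.5):
    out = []
    for _ in range(n):
        y1 = x + random.gauss(0, s1)     # bold first-stage proposal
        a1 = min(1.0, math.exp(log_pi(y1) - log_pi(x)))
        if random.random() < a1:
            x = y1
        else:
            y2 = x + random.gauss(0, s2)  # timid second-stage proposal
            # reverse-path first-stage acceptance, needed by the DR ratio
            a1_rev = min(1.0, math.exp(log_pi(y1) - log_pi(y2)))
            num = math.exp(log_pi(y2)) * q1(y2, y1, s1) * (1 - a1_rev)
            den = math.exp(log_pi(x)) * q1(x, y1, s1) * (1 - a1)
            if den > 0 and random.random() < min(1.0, num / den):
                x = y2
        out.append(x)
    return out

xs = dr_mh(20000)
mean = sum(xs) / len(xs)
std = (sum((v - mean) ** 2 for v in xs) / len(xs)) ** 0.5
print(round(mean, 2), round(std, 2))
```

The second stage rescues many rejections of the over-dispersed first proposal while still preserving the N(0, 1) target, which is the calibration trade-off the abstract motivates.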
DNA → RNA: What Do Students Think the Arrow Means? | The central dogma of molecular biology, a model that has remained intact for decades, describes the transfer of genetic information from DNA to protein through an RNA intermediate. While recent work has illustrated many exceptions to the central dogma, it is still a common model used to describe and study the relationship between genes and protein products. We investigated understanding of central dogma concepts and found that students are not primed to think about information when presented with the canonical figure of the central dogma. We also uncovered conceptual errors in student interpretation of the meaning of the transcription arrow in the central dogma representation; 36% of students (n = 128; all undergraduate levels) described transcription as a chemical conversion of DNA into RNA or suggested that RNA existed before the process of transcription began. Interviews confirm that students with weak conceptual understanding of information flow find inappropriate meaning in the canonical representation of central dogma. Therefore, we suggest that use of this representation during instruction can be counterproductive unless educators are explicit about the underlying meaning. |
Design of a Blockchain-Based Lottery System for Smart Cities Applications | Among the smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptographic blockchain model, Hawk [8], to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured. |
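One common fairness primitive underlying blockchain lotteries is commit-reveal randomness, in which no single party can bias the outcome. The sketch below illustrates only that generic idea; it is not the Hawk model or the paper's FairLotto design, and the player names are invented.

```python
import hashlib, secrets

def commit(value: bytes, nonce: bytes) -> str:
    """Binding, hiding commitment to a random share."""
    return hashlib.sha256(nonce + value).hexdigest()

# Phase 1: each player publishes only the hash of (nonce, random share).
players = ["alice", "bob", "carol"]
shares = {p: secrets.token_bytes(16) for p in players}
nonces = {p: secrets.token_bytes(16) for p in players}
commitments = {p: commit(shares[p], nonces[p]) for p in players}

# Phase 2: players reveal; everyone verifies reveals match commitments.
for p in players:
    assert commit(shares[p], nonces[p]) == commitments[p], "cheating reveal"

# The winner index is derived from all revealed shares together, so no
# single player controls it once commitments are fixed.
seed = hashlib.sha256(b"".join(shares[p] for p in players)).digest()
winner = players[int.from_bytes(seed, "big") % len(players)]
print(winner)
```

On a real chain the commitments and reveals would be smart-contract transactions, with deposits forfeited by players who refuse to reveal.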
The interaction of cigarette smoking and antioxidants. Part I: diet and carotenoids. | It is logical that the requirement for antioxidant nutrients depends on a person's exposure to endogenous and exogenous reactive oxygen species. Since cigarette smoking results in an increased cumulative exposure to reactive oxygen species from both sources, it would seem cigarette smokers would have an increased requirement for antioxidant nutrients. Logic dictates that a diet high in antioxidant-rich foods such as fruits, vegetables, and spices would be both protective and a prudent preventive strategy for smokers. This review examines available evidence of fruit and vegetable intake, and supplementation of antioxidant compounds by smokers in an attempt to make more appropriate nutritional recommendations to this population. |
Politics and health in eight European countries: a comparative study of mortality decline under social democracies and right-wing governments. | Recent publications have argued that the welfare state is an important determinant of population health, and that social democracy in office and higher levels of health expenditure promote health progress. In the period 1950-2000, Greece, Portugal, and Spain were the poorest market economies in Europe, with a fragmented system of welfare provision, and many years of military or authoritarian right-wing regimes. In contrast, the five Nordic countries were the richest market economies in Europe, governed mostly by center or center-left coalitions often including the social democratic parties, and having a generous and universal welfare state. In spite of the socioeconomic and political differences, and a large gap between the five Nordic and the three southern nations in levels of health in 1950, population health indicators converged among these eight countries. Mean decadal gains in longevity of Portugal and Spain between 1950 and 2000 were almost three times greater than gains in Denmark, and about twice as great as those in Iceland, Norway and Sweden during the same period. All this raises serious doubts regarding the hypothesis that the political regime, the political party in office, the level of health care spending, and the type of welfare state exert major influences on population health. Either these factors are not major determinants of mortality decline, or their impact on population health in Nordic countries was more than offset by other health-promoting factors present in Southern Europe. |
COMPARISON OF LOSSLESS DATA COMPRESSION ALGORITHMS FOR TEXT DATA | Data compression is a common requirement for most computerized applications. There are a number of data compression algorithms, which are dedicated to compressing different data formats. Even for a single data type there are a number of different compression algorithms, which use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms is examined and implemented to evaluate the performance in compressing text data. An experimental comparison of a number of different lossless data compression algorithms is presented in this paper. The article concludes by stating which algorithm performs well for text data. |
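For a flavor of such a comparison, the snippet below measures three standard-library lossless compressors on a small repetitive text sample. The sample and compression levels are arbitrary choices; ratios on real corpora, and the algorithms the paper actually benchmarks, will differ.

```python
import bz2, lzma, zlib

# A deliberately repetitive English sample so every compressor has
# redundancy to exploit.
text = ("Data compression is a common requirement for most computerized "
        "applications. " * 200).encode()

sizes = {
    "zlib (DEFLATE)": len(zlib.compress(text, 9)),
    "bz2 (BWT)": len(bz2.compress(text, 9)),
    "lzma (LZMA)": len(lzma.compress(text)),
}
for name, size in sizes.items():
    print(f"{name}: {len(text)} -> {size} bytes "
          f"(ratio {len(text) / size:.1f}:1)")
```

Losslessness is easy to check in the same script: decompressing must return the original bytes exactly.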
A study of the glutathione metaboloma peptides by energy-resolved mass spectrometry as a tool to investigate into the interference of toxic heavy metals with their metabolic processes. | To better understand the fragmentation processes of the metal-biothiol conjugates and their possible significance in biological terms, an energy-resolved mass spectrometric study of the glutathione conjugates of heavy metals, of several thiols and disulfides of the glutathione metaboloma has been carried out. The main fragmentation process of gamma-glutamyl compounds, whether in the thiol, disulfide, thioether or metal-bis-thiolate form, is the loss of the gamma-glutamyl residue, a process which ERMS data showed to be hardly influenced by the sulfur substitution. However, loss of the gamma-glutamyl residue from the mono-S-glutathionyl-mercury (II) cation is a much more energetic process, possibly pointing at a strong coordination of the carboxylic group to the metal. Moreover, loss of neutral mercury from ions containing the gamma-glutamyl residue to yield a sulfenium cation was a much more energetic process than those not containing them, suggesting that the redox potential of the thiol/disulfide system plays a role in the formal reduction of the mercury dication in the gas phase. Occurrence of complementary sulfenium and protonated thiol fragments in the spectra of protonated disulfides of the glutathione metaboloma mirrors the thiol/disulfide redox process of biological importance. The intensity ratio of the fragments is proportional to the reduction potential in solution of the corresponding redox pairs. This finding has allowed the calculation of the previously unreported reduction potentials for the disulfide/thiol pair of cysteinylglycine, thereby confirming the decomposition scheme of bis- and mono-S-glutathionyl-mercury (II) ions. 
Finally, on the sole basis of the mass spectrometric fragmentation of the glutathione-mercury conjugates, and supported by independent literature evidence, an unprecedented mechanism for mercury ion-induced cellular oxidative stress could be proposed, based on the depletion of the glutathione pool by a catalytic mechanism acting on the metal (II)-thiol conjugates and involving as a necessary step the enzymatic removal of the glutamic acid residue to yield a mercury (II)-cysteinyl-glycine conjugate capable of regenerating neutral mercury through the oxidation of glutathione thiols to the corresponding disulfides. |
Tri-Branch Vein Structure Assisted Finger Vein Recognition | In the template matching of finger vein recognition, the probe will be accepted if the number of its overlapped vein points with the enrolled user is larger than the predefined threshold. However, the acceptance may be false owing to ignoring the structure of the vein pattern. We find that local vein branches near the bifurcation point of vein pattern vary largely between the imposter images. So, this paper tries to explore this kind of local vein structure to improve the recognition performance of the template matching. The bifurcation point and its local vein branches, named tri-branch vein structure, are extracted from the vein pattern, and fused with the whole vein pattern by a user-specific threshold-based filter framework. The experimental results on two public databases prove the effectiveness of the proposed framework for improving the performance of vein pattern-based finger vein recognition. |
An NFC transceiver with RF-powered RFID transponder mode | A single-chip NFC transceiver supporting not only the NFC active and passive modes but also 13.56 MHz RFID reader and tag modes was designed and fabricated. The proposed NFC transceiver can operate as an RFID tag even without an external power supply, thanks to a dual-antenna structure for initiator and target. The area increase due to the additional target antenna is negligible because the target antenna is constructed using a shielding layer of the initiator antenna. |
Comparison of Quantitative and Qualitative Research Traditions : epistemological , theoretical , and methodological differences | Educational researchers in every discipline need to be cognisant of alternative research traditions to make decisions about which method to use when embarking on a research study. There are two major approaches to research that can be used in the study of the social and the individual world. These are quantitative and qualitative research. Although there are books on research methods that discuss the differences between alternative approaches, it is rare to find an article that examines the design issues at the intersection of the quantitative and qualitative divide based on eminent research literature. The purpose of this article is to explain the major differences between the two research paradigms by comparing them in terms of their epistemological, theoretical, and methodological underpinnings. Since quantitative research has well-established strategies and methods but qualitative research is still growing and becoming more differentiated in methodological approaches, greater consideration will be given to the latter. |
Assessment Process for Software Process Improvement in Small Organizations | 1 Grupo IDIS Facultad de Ingeniería Electrónica y Telecomunicaciones Universidad del Cauca Calle 5 No. 4 – 70. Popayán, Cauca, Colombia. [email protected]. Web: http://www.unicauca.edu.co/idis/ 2 Grupo Alarcos Escuela Superior de Informática Universidad Castilla-La Mancha Paseo de la Universidad 4, Ciudad Real, España. {Felix.Garcia, Mario.Piattini}@uclm.es. Web: http://alarcos.inf-cr.uclm.es/ |
Robot hands and the mechanics of manipulation |
Quasi-cyclic LDPC codes for fast encoding | In this correspondence we present a special class of quasi-cyclic low-density parity-check (QC-LDPC) codes, called block-type LDPC (B-LDPC) codes, which have an efficient encoding algorithm due to the simple structure of their parity-check matrices. Since the parity-check matrix of a QC-LDPC code consists of circulant permutation matrices or the zero matrix, the required memory for storing it can be significantly reduced, as compared with randomly constructed LDPC codes. We show that the girth of a QC-LDPC code is upper-bounded by a certain number which is determined by the positions of circulant permutation matrices. The B-LDPC codes are constructed as irregular QC-LDPC codes with parity-check matrices of an almost lower triangular form so that they have an efficient encoding algorithm, good noise threshold, and low error floor. Their encoding complexity is linearly scaled regardless of the size of circulant permutation matrices. |
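The block structure described above can be sketched directly: H is a block matrix of z x z circulant permutation matrices I(s) (the identity cyclically shifted by s), with s = -1 denoting a zero block. The lifting size z and the exponent table below are toy values for illustration, not a code construction from the correspondence.

```python
def circulant(z, s):
    """z x z circulant permutation matrix I(s); s == -1 gives the zero block."""
    if s < 0:
        return [[0] * z for _ in range(z)]
    return [[1 if (r + s) % z == c else 0 for c in range(z)] for r in range(z)]

def expand(exponents, z):
    """Expand an exponent table into the full binary parity-check matrix H.

    Only the exponent table needs to be stored, which is the memory saving
    mentioned in the abstract: one integer per z x z block.
    """
    H = []
    for row in exponents:
        blocks = [circulant(z, s) for s in row]
        for r in range(z):
            H.append([b[r][c] for b in blocks for c in range(z)])
    return H

z = 4
exponents = [[0, 1, -1, 2],
             [2, -1, 3, 0]]
H = expand(exponents, z)
print(len(H), len(H[0]))          # 8 x 16
row_weights = [sum(r) for r in H]
```

Each non-zero block contributes exactly one 1 per row, so row and column weights of H are read off directly from the number of permutation blocks per block row and block column.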
Lower limb rehabilitation robot | This paper describes a new prototype of lower limb rehabilitation robot (for short: LLRR), including its detailed structure, operative principles, manipulative manual and control mode which give considerate protection to patients. It implements the programmable process during the course of the limbs rehabilitation, furthermore, renders variable and predetermined step posture and force sensing. LLRR could assist patient in-- simulating normal people's footsteps, exercising leg muscles, gradually recovering the neural control toward walking function and finally walking in normal way utterly. Such robot is comprised with steps posture controlling system and weight alleviation controlling mechanism. |
Information Theoretic Learning | Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. 
This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies? |
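The sensitivity of the second-order (MSE) criterion to non-Gaussian errors can be demonstrated in a few lines. The correntropy-style reweighting below is one information-theoretic-flavored alternative to least squares, fitted by half-quadratic iteration; the dataset, outlier pattern, and kernel width are invented for illustration.

```python
import math, random

random.seed(2)
xs = [i / 100 for i in range(-100, 100)]
ys = [2 * x + random.gauss(0, 0.05) for x in xs]   # true slope is 2
for i, x in enumerate(xs):          # corrupt one region with outliers
    if x > 0.7:
        ys[i] = -4.0

def mse_slope(xs, ys):
    """Closed-form least-squares slope (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def correntropy_slope(xs, ys, sigma=1.0, iters=8):
    """Half-quadratic optimization: reweighted least squares with Gaussian
    weights w = exp(-e^2 / (2 sigma^2)) on the residuals e, so large
    non-Gaussian errors are progressively ignored."""
    s = mse_slope(xs, ys)
    for _ in range(iters):
        w = [math.exp(-((y - s * x) ** 2) / (2 * sigma ** 2))
             for x, y in zip(xs, ys)]
        s = (sum(wi * x * y for wi, x, y in zip(w, xs, ys)) /
             sum(wi * x * x for wi, x in zip(w, xs)))
    return s

print(round(mse_slope(xs, ys), 2), round(correntropy_slope(xs, ys), 2))
```

The MSE fit is dragged far from the true slope by the contaminated region, while the robust criterion recovers it, which is the "information left in the error signal" point made above.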
Large Scale Distributed Deep Networks | Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, achieving state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm. |
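The Downpour idea, stripped to toy scale, can be sketched with threads standing in for machines: each "model replica" fetches current parameters from a shared parameter server, computes a gradient on its own data shard, and pushes a (possibly stale) update asynchronously. Everything here (the one-parameter model, learning rate, shard count) is an illustrative assumption, far from the actual DistBelief system.

```python
import random, threading

random.seed(1)
TRUE_W = 3.0
data = [(x, TRUE_W * x + random.gauss(0, 0.01))
        for x in [random.uniform(-1, 1) for _ in range(400)]]
shards = [data[i::4] for i in range(4)]   # one shard per replica

params = {"w": 0.0}          # the "parameter server"
lock = threading.Lock()      # server applies pushed gradients atomically

def replica(shard, steps=200, lr=0.05):
    for _ in range(steps):
        with lock:                       # fetch current parameters
            w = params["w"]
        x, y = random.choice(shard)      # local minibatch of size 1
        grad = 2 * (w * x - y) * x       # squared-loss gradient
        with lock:                       # push the (possibly stale) update
            params["w"] -= lr * grad

threads = [threading.Thread(target=replica, args=(s,)) for s in shards]
for t in threads: t.start()
for t in threads: t.join()
print(round(params["w"], 2))
```

Despite the staleness of fetched parameters, the replicas collectively drive the shared weight toward the true value, which is the tolerance to asynchrony that makes Downpour-style training practical.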
On the faithfulness of graph visualizations | Readability criteria have been commonly used to measure the quality of graph visualizations. In this paper we argue that readability criteria, while necessary, are not sufficient. We propose a new kind of criterion, generically termed faithfulness, for evaluating graph layout methods. We propose a general model for quantifying faithfulness, and contrast it with the well established readability criteria. We use examples of multidimensional scaling, edge bundling and several other visualization metaphors (including matrix-based and map-based visualizations) to illustrate faithfulness. |
An empirical comparison of influence measurements for social network analysis | The study of social influence can be used to understand and solve many complicated problems in social network analysis, such as predicting influential users. This paper focuses on the problem of predicting influential users on social networks. We introduce a three-level hierarchy that classifies the influence measurements. The hierarchy categorizes the influence measurements along three dimensions: models, types, and algorithms. Using this hierarchy, we classify the existing influence measurements. We further compare them based on an empirical analysis in terms of performance, accuracy and correlation using datasets from two different social networks to investigate the feasibility of influence measurements. Our results show that predicting influential users does not only depend on the influence measurements but also on the nature of social networks. Our goal is to introduce a standardized baseline for the problem of predicting influential users on social networks. |
Effects of interleukin-12 and interleukin-15 on measles-specific T-cell responses in vaccinated infants. | Understanding the infant host response to measles vaccination is important because of their increased mortality from measles and the need to provide effective protection during the first year of life. Measles-specific T and B-cell responses are lower in infants after measles vaccination than in adults. To define potential mechanisms, we investigated age-related differences in measles-specific T-cell proliferation, CD40-L expression, and IFN-gamma production after measles immunization, and the effects of rhIL-12 and rhIL-15 on these responses. Measles-specific T-cell proliferation and mean IFN-gamma release from infant PBMCs were significantly lower when compared with responses of vaccinated children and adults. Infant responses increased to ranges observed in children and adults when both rhIL-12 and rhIL-15 were added to PBMC cultures. Furthermore, a significant rise in T-cell proliferation and IFN-gamma release was observed when infant PBMCs were stimulated with measles antigen in the presence of rhIL-12 and rhIL-15 compared to measles antigen alone. CD40-L expression by infant and adult T cells stimulated with measles antigen was comparable, but fewer infant CD40-L(+) T cells expressed IFN-gamma. These observations suggest that lower measles-specific T-cell immune responses elicited by measles vaccine in infants may be due to diminished levels of key cytokines. |
Placenta Accreta Spectrum. | From the Department of Obstetrics and Gynecology, University of Utah Health Sciences Center (R.M.S., D.W.B.), and the Women and Newborns Clinical Program of Intermountain Healthcare (D.W.B.) — both in Salt Lake City. Address reprint requests to Dr. Silver at the Department of Obstetrics and Gynecology, University of Utah Health Sciences Center, 30 N. 1900 East, Rm. 2B200 SOM, Salt Lake City, UT 84132, or at bob . silver@ hsc . utah . edu. |
An Ant Colony Algorithm for the Capacitated Vehicle Routing Problem | The Vehicle Routing Problem (VRP) requires the determination of an optimal set of routes for a set of vehicles to serve a set of customers. We deal here with the Capacitated Vehicle Routing Problem (CVRP), where there is a maximum weight or volume that each vehicle can load. We developed an Ant Colony Optimization (ACO) algorithm for the CVRP based on the metaheuristic technique introduced by Colorni, Dorigo and Maniezzo. We present preliminary results that show that ant algorithms are competitive with other metaheuristics for solving CVRP. |
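A bare-bones version of such an ant construction: ants build routes customer by customer, choosing the next customer probabilistically from pheromone and visibility (1/distance), opening a new route when capacity would be exceeded, with evaporation and best-ant reinforcement. The instance and parameters are arbitrary toy values, not the paper's experimental setup.

```python
import math, random

random.seed(3)
depot = (0.0, 0.0)
customers = [(random.uniform(-10, 10), random.uniform(-10, 10))
             for _ in range(12)]
demand = [random.randint(1, 4) for _ in customers]
CAP = 10

pts = [depot] + customers
n = len(pts)
dist = [[math.dist(a, b) or 1e-9 for b in pts] for a in pts]
tau = [[1.0] * n for _ in range(n)]      # pheromone on each arc

def construct(alpha=1.0, beta=2.0):
    """One ant builds a feasible CVRP solution, route by route."""
    unvisited, routes, cost = set(range(1, n)), [], 0.0
    while unvisited:
        route, load, cur = [], 0, 0       # start a new route at the depot
        while True:
            feasible = [j for j in unvisited if load + demand[j - 1] <= CAP]
            if not feasible:
                break
            w = [tau[cur][j] ** alpha / dist[cur][j] ** beta for j in feasible]
            j = random.choices(feasible, weights=w)[0]
            cost += dist[cur][j]
            route.append(j)
            load += demand[j - 1]
            unvisited.discard(j)
            cur = j
        cost += dist[cur][0]              # close the route at the depot
        routes.append(route)
    return routes, cost

best_routes, best_cost = None, float("inf")
for _ in range(40):                       # colony iterations
    it_routes, it_cost = min((construct() for _ in range(10)),
                             key=lambda s: s[1])
    if it_cost < best_cost:
        best_routes, best_cost = it_routes, it_cost
    for i in range(n):                    # pheromone evaporation
        for j in range(n):
            tau[i][j] *= 0.9
    for r in it_routes:                   # deposit along the best ant's arcs
        prev = 0
        for j in r:
            tau[prev][j] += 1.0 / it_cost
            prev = j
print(round(best_cost, 1), len(best_routes))
```

By the triangle inequality any multi-customer route costs no more than serving its customers by separate round trips, so even this minimal colony improves on the naive one-customer-per-trip baseline.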
Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks | This paper presents a deep learning approach for automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning approaches are learn-from-data methods involving computational modeling of the learning process. This approach is similar to how the human brain works, using different interpretation levels or layers of the most representative and useful features, resulting in a hierarchical learned representation. These methods have been shown to outpace traditional approaches on some of the most challenging problems in several areas, such as speech recognition and object detection. Invasive breast cancer detection is a time-consuming and challenging task, primarily because it involves a pathologist scanning large swathes of benign regions to ultimately identify the areas of malignancy. Precise delineation of IDC in WSI is crucial to the subsequent estimation of tumor aggressiveness grading and prediction of patient outcome. DL approaches are particularly adept at handling these types of problems, especially if a large number of samples are available for training, which would also ensure the generalizability of the learned features and classifier. The DL framework in this paper extends a number of convolutional neural networks (CNN) for visual semantic analysis of tumor regions for diagnosis support. The CNN is trained over a large number of image patches (tissue regions) from WSI to learn a hierarchical part-based representation. The method was evaluated over a WSI dataset from 162 patients diagnosed with IDC. 113 slides were selected for training and 49 slides were held out for independent testing. Ground truth for quantitative evaluation was provided via delineation of the cancerous regions by an expert pathologist on the digitized slides. 
The experimental evaluation was designed to measure classifier accuracy in detecting IDC tissue regions in WSI. Our method yielded the best quantitative results for automatic detection of IDC regions in WSI in terms of F-measure and balanced accuracy (71.80%, 84.23%), in comparison with an approach using handcrafted image features (color, texture and edges, nuclear texture and architecture) and a machine learning classifier for invasive tumor classification using a Random Forest. The best performing handcrafted features were fuzzy color histogram (67.53%, 78.74%) and RGB histogram (66.64%, 77.24%). Our results also suggest that at least some of the tissue classification mistakes (false positives and false negatives) were due less to any fundamental problems with the approach than to the inherent limitations in obtaining a very highly granular annotation of the diseased area of interest by an expert pathologist. |
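The two reported metrics can be reproduced from a patch-level confusion matrix as follows; the counts below are made-up for illustration, not the paper's data.

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall on the positive (IDC) class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def balanced_accuracy(tp, fp, fn, tn):
    """Mean of the per-class recalls, robust to class imbalance."""
    sensitivity = tp / (tp + fn)   # true positive rate on IDC patches
    specificity = tn / (tn + fp)   # true negative rate on benign patches
    return (sensitivity + specificity) / 2

# hypothetical patch counts: IDC is the rare class, benign dominates
tp, fp, fn, tn = 650, 250, 260, 2840
print(round(f_measure(tp, fp, fn), 4))
print(round(balanced_accuracy(tp, fp, fn, tn), 4))
```

Balanced accuracy matters here because benign patches vastly outnumber IDC patches, so plain accuracy would reward a classifier that ignores the malignant class.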
The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances | In the last 5 years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 data sets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train/test split and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only nine of these algorithms are significantly more accurate than both benchmarks and that one classifier, the collective of transformation ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more robust testing of new algorithms in the future. |
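The evaluation protocol, reduced to a single standard benchmark (1-NN with Euclidean distance) on a synthetic two-class problem with a handful of resamples, might look like the following; the real study uses the 85 archive datasets, 100 resamples each, and many more classifiers.

```python
import random

random.seed(7)

def make_series(label, n=30):
    # class 0: flat noisy series; class 1: rising trend plus noise
    return [label * 0.15 * t + random.gauss(0, 0.5) for t in range(n)]

data = [(make_series(c), c) for c in (0, 1) for _ in range(40)]

def one_nn(train, series):
    """Predict the label of the Euclidean-nearest training series."""
    def d(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda t: d(t[0], series))[1]

def resample_accuracy(data, n_resamples=10, train_frac=0.5):
    """Average test accuracy over repeated random train/test splits."""
    accs = []
    for _ in range(n_resamples):
        shuffled = random.sample(data, len(data))
        cut = int(len(data) * train_frac)
        train, test = shuffled[:cut], shuffled[cut:]
        correct = sum(one_nn(train, s) == c for s, c in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

acc = resample_accuracy(data)
print(round(acc, 3))
```

Averaging over resamples rather than a single train/test split is exactly the methodological point the abstract raises about earlier evaluations.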
Adaptive Motion Planning | Mobile robots are increasingly being deployed in the real world in response to a heightened demand for applications such as transportation, delivery and inspection. The motion planning systems for these robots are expected to have consistent performance across the wide range of scenarios that they encounter. While state-of-the-art planners, with provable worst-case guarantees, can be employed to solve these planning problems, their finite time performance varies across scenarios. This thesis proposes that the planning module for a robot must adapt its search strategy to the distribution of planning problems encountered to achieve real-time performance. We address three principal challenges of this problem. Firstly, we show that even when the planning problem distribution is fixed, designing a nonadaptive planner can be challenging as the performance of planning strategies fluctuates with small changes in the environment. We characterize the existence of complementary strategies and propose to hedge our bets by executing a diverse ensemble of planners. Secondly, when the distribution is varying, we require a meta-planner that can automatically select such an ensemble from a library of black-box planners. We show that greedily training a list of predictors to focus on failure cases leads to an effective meta-planner. For situations where we have no training data, we show that we can learn an ensemble on-the-fly by adopting algorithms from online paging theory. Thirdly, in the interest of efficiency, we require a white-box planner that directly adapts its search strategy during a planning cycle. We propose an efficient procedure for training adaptive search heuristics in a data-driven imitation learning framework. We also draw a novel connection to Bayesian active learning, and propose algorithms to adaptively evaluate edges of a graph. 
Our approach leads to the synthesis of a robust real-time planning module that allows a UAV to navigate seamlessly across environments and speed-regimes. We evaluate our framework on a spectrum of planning problems and show closed-loop results on 3 UAV platforms: a full-scale autonomous helicopter, a large-scale hexarotor and a small quadrotor. While the thesis was motivated by mobile robots, we have shown that the individual algorithms are broadly applicable to other problem domains such as informative path planning and manipulation planning. We also establish novel connections between the disparate fields of motion planning and active learning, imitation learning and online paging, which opens doors to several new research problems. |
Fast object localization and pose estimation in heavy clutter for robotic bin picking | We present a practical vision-based robotic bin-picking system that performs detection and 3D pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation. First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses. FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a 3D distance transform, and directional integral images. We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sublinear computational complexity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantly improving upon the accuracy of previous chamfer matching methods in all of the evaluated applications, FDCM is up to two orders of magnitude faster than the previous methods. |
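The directional chamfer cost described above can be sketched in a few lines: quantize edge orientations into channels and evaluate a per-channel distance map at the template's edge points. This is a brute-force illustration of the cost FDCM evaluates, not the paper's fast line-segment/integral-image implementation:

```python
import numpy as np

def directional_chamfer_cost(query_edges, query_orient, template_pts, template_orient, n_bins=8):
    """Simplified directional chamfer cost (illustrative, not full FDCM).

    query_edges     : (H, W) bool edge map of the scene image.
    query_orient    : (H, W) edge orientations in [0, pi).
    template_pts    : (N, 2) integer (row, col) template edge points.
    template_orient : (N,) template edge orientations in [0, pi).

    Edges are quantized into orientation channels and a brute-force nearest-
    edge distance map is built per channel, approximating the 3D
    (x, y, orientation) distance transform that FDCM evaluates.
    """
    H, W = query_edges.shape
    bins = (query_orient / np.pi * n_bins).astype(int) % n_bins
    gy, gx = np.mgrid[0:H, 0:W]
    dist_maps = np.full((n_bins, H, W), float(np.hypot(H, W)))
    for b in range(n_bins):
        ys, xs = np.nonzero(query_edges & (bins == b))
        if len(ys) == 0:
            continue  # no scene edges with this orientation: keep large default
        d = np.hypot(gy[..., None] - ys, gx[..., None] - xs)  # (H, W, M)
        dist_maps[b] = d.min(axis=-1)
    t_bins = (template_orient / np.pi * n_bins).astype(int) % n_bins
    return dist_maps[t_bins, template_pts[:, 0], template_pts[:, 1]].mean()
```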
Feedback directed implicit parallelism | In this paper we present an automated way of using spare CPU resources within a shared memory multi-processor or multi-core machine. Our approach is (i) to profile the execution of a program, (ii) from this to identify pieces of work which are promising sources of parallelism, (iii) recompile the program with this work being performed speculatively via a work-stealing system and then (iv) to detect at run-time any attempt to perform operations that would reveal the presence of speculation.
We assess the practicality of the approach through an implementation based on GHC 6.6 along with a limit study based on the execution profiles we gathered. We support the full Concurrent Haskell language compiled with traditional optimizations and including I/O operations and synchronization as well as pure computation. We use 20 of the larger programs from the 'nofib' benchmark suite. The limit study shows that programs vary a lot in the parallelism we can identify: some have none, 16 have a potential 2x speed-up, 4 have 32x. In practice, on a 4-core processor, we get 10-80% speed-ups on 7 programs. This is mainly achieved at the addition of a second core rather than beyond this.
This approach is therefore not a replacement for manual parallelization, but rather a way of squeezing extra performance out of the threads of an already-parallel program or out of a program that has not yet been parallelized. |
An Introduction to Sensor Data Analytics | The increasing advances in hardware technology for sensor processing and mobile technology have resulted in greater access and availability of sensor data from a wide variety of applications. For example, commodity mobile devices contain a wide variety of sensors, such as GPS receivers and accelerometers. Many other kinds of technology, such as RFID-enabled sensors, also produce large volumes of data over time. This has led to a need for principled methods for efficient sensor data processing. This chapter will provide an overview of the challenges of sensor data analytics and the different areas of research in this context. We will also present the organization of the chapters in this book in this context. |
TRUST FORMATION IN NEW ORGANIZATIONAL RELATIONSHIPS | Trust is a key enabler of cooperative human actions. Three main deficiencies about our current knowledge of trust are addressed by this paper. First, due to widely divergent conceptual definitions of trust, the literature on trust is in a state of construct confusion. Second, too little is understood about how trust forms and on what trust is based. Third, little has been discussed about the role of emotion in trust formation. To address the first deficiency, this paper develops a typology of trust. The rest of the paper addresses the second and third deficiencies by proposing a model of how trust is initially formed, including the role of emotion. Dispositional, interpersonal, and impersonal (system) trust are integrated in the model. The paper also clarifies the cognitive and emotional bases on which interpersonal trust is formed in early relationships. The implications of the model are drawn for future research. |
Why Horn Formulas Matter in Computer Science: Initial Structures and Generic Examples (Extended Abstract) | We introduce the notion of generic examples as a unifying principle for various phenomena in computer science such as initial structures in the area of abstract data types and Armstrong relations in the area of data bases. Generic examples are also useful in defining the semantics of logic programming, in the formal theory of program testing and in complexity theory. We characterize initial structures in terms of their generic properties and give a syntactic characterization of first order theories admitting initial structures. The latter can be used to explain why Horn formulas have gained a predominant role in various areas of computer science. |
Reconstruction and representation of 3D objects with radial basis functions | We use polyharmonic Radial Basis Functions (RBFs) to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes. An object's surface is defined implicitly as the zero set of an RBF fitted to the given surface data. Fast methods for fitting and evaluating RBFs allow us to model large data sets, consisting of millions of surface points, by a single RBF — previously an impossible task. A greedy algorithm in the fitting process reduces the number of RBF centers required to represent a surface and results in significant compression and further computational advantages. The energy-minimisation characterisation of polyharmonic splines results in a “smoothest” interpolant. This scale-independent characterisation is well-suited to reconstructing surfaces from non-uniformly sampled data. Holes are smoothly filled and surfaces smoothly extrapolated. We use a non-interpolating approximation when the data is noisy. The functional representation is in effect a solid model, which means that gradients and surface normals can be determined analytically. This helps generate uniform meshes and we show that the RBF representation has advantages for mesh simplification and remeshing applications. Results are presented for real-world rangefinder data. |
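The core fitting step — interpolating scattered values with a polyharmonic kernel and reading the shape off the implicit zero set — can be sketched as follows. This toy (2D, kernel |r|³) omits the polynomial term, the greedy center reduction and the fast evaluation methods the paper relies on:

```python
import numpy as np

def fit_polyharmonic_rbf(centers, values):
    """Interpolate scattered values with f(x) = sum_i w_i * |x - c_i|^3.

    Solving the dense kernel system gives an interpolant whose zero set
    implicitly represents the curve/surface passing through the zero-valued
    centers. Toy sketch only: no low-degree polynomial, no center reduction.
    """
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.linalg.solve(r**3, values)

    def f(x):
        rx = np.linalg.norm(np.atleast_2d(x)[:, None, :] - centers[None, :, :], axis=-1)
        return rx**3 @ w

    return f

# Implicitly fit a unit circle: on-curve points get value 0, an interior
# point gets -1 and exterior points get +1; the zero set of f is the curve.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
on_curve = np.stack([np.cos(angles), np.sin(angles)], axis=1)
centers = np.vstack([on_curve, [[0.0, 0.0]], 2 * on_curve[:4]])
values = np.concatenate([np.zeros(8), [-1.0], np.ones(4)])
f = fit_polyharmonic_rbf(centers, values)
```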
Tectonic style of the Appalachian allochthonous zone of southern Quebec: Seismic and gravimetric evidence | Abstract An integrated interpretation of seismic reflection and gravity results yields an image of the structural units (nappes, overthrust sheets and underthrust slabs) of the Cambrian-Ordovician metasedimentary and metavolcanic sequences of the Appalachian province of southern Quebec at depth and the relative position of the underlying Precambrian crystalline basement. The tectonics related to crystalline Precambrian basement may be correlated with four interpretations and combinations of these. A decollement of Paleozoic sedimentary and volcanic piles over a rigid crystalline basement is most probable considering the interpretation of gravity and especially seismic data. The tectonic style at depth is rather unresolved from surface geological information. The elaboration of a two-dimensional model is constrained by physical properties of rocks, maximum depth extents of individual bodies and seismic reflectors in addition to surface geology. Finally, a minimum of 1500 km of shortening and Iapetan closing is suggested. |
The Effective Number of Parameters: An Analysis of Generalization and Regularization in Nonlinear Learning Systems | We present an analysis of how the generalization performance (expected test set error) relates to the expected training set error for nonlinear learning systems, such as multilayer perceptrons and radial basis functions. The principal result is the following relationship (computed to second order) between the expected test set and training set errors: ⟨E_test⟩ ≈ ⟨E_train⟩ + 2 σ²_eff p_eff(λ)/n. Here, n is the size of the training sample ξ, σ²_eff is the effective noise variance in the response variable(s), λ is a regularization or weight decay parameter, and p_eff(λ) is the effective number of parameters in the nonlinear model. The expectations ⟨·⟩ of training set and test set errors are taken over possible training sets ξ and training and test sets ξ′ respectively. The effective number of parameters p_eff(λ) usually differs from the true number of model parameters p for nonlinear or regularized models; this theoretical conclusion is supported by Monte Carlo experiments. In addition to the surprising result that p_eff(λ) ≠ p, we propose an estimate of the relationship above called the generalized prediction error (GPE), which generalizes well-established estimates of prediction risk such as Akaike's FPE and AIC, Mallows' C_p, and Barron's PSE to the nonlinear setting. GPE and p_eff(λ) were previously introduced in Moody (1991). |
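For the linear special case of ridge (weight-decay) regression, the effective number of parameters has a closed form — the trace of the hat matrix — which illustrates how p_eff(λ) shrinks below p as regularization grows; a sketch of that special case (standard ridge algebra, not the paper's nonlinear estimator):

```python
import numpy as np

def effective_parameters(X, lam):
    """Effective number of parameters for ridge regression on design X.

    p_eff(lam) = trace( X (X'X + lam I)^-1 X' ) = sum_i d_i^2 / (d_i^2 + lam)
    over the singular values d_i of X. At lam = 0 it equals rank(X) and it
    decreases monotonically toward 0 as lam grows, so the GPE penalty term
    2 * sigma2_eff * p_eff(lam) / n shrinks with stronger regularization.
    """
    d = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(d**2 / (d**2 + lam)))
```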
Comparative analysis of the bronchodilator response measured by impulse oscillometry (IOS), spirometry and body plethysmography in asthmatic children. | BACKGROUND
Asthma is common among young children. The assessment of respiratory resistance by the impulse oscillometry system (IOS), based on the superimposition of respiratory flow by short-time impulses, requires no patient active collaboration.
AIM
We evaluated the baseline repeatability and bronchodilator response of IOS indices in preschool children, their correlation with spirometry and whole body plethysmography, and differences between atopic and nonatopic children.
PATIENTS AND METHODS
Thirty-three asthmatic children (3-6 yrs.) underwent IOS measurement (R5rs, R20rs and X5rs) by triplicate at the baseline, after placebo and after salbutamol inhalation. Spirometry (FEV1) and whole body plethysmography (sRaw) were made at the baseline and after salbutamol. Baseline within-test (coefficient of variation: CV%) and between-test repeatability (baseline-placebo) were addressed. Bronchodilator response was evaluated by the SD index (change in multiples of the between-test repeatability).
RESULTS
Baseline repeatability for R5rs was 4.1%. Its values decreased by 2 SD after salbutamol inhalation, and correlated with FEV1 and sRaw at both baseline (r=-0.51 and r=0.49) and post-salbutamol (r=-0.63 and r=0.54). A trend towards correlation between salbutamol-induced changes in R5rs and in sRaw (r=0.33) was observed. Atopic and non-atopic children showed no differences in lung function.
CONCLUSION
IOS was well accepted by young asthmatic children and provided reproducible and sensitive indices of lung function. Resistance values obtained by IOS at low frequency (R5rs) were reproducible and correlated with spirometry and plethysmographic values. |
Pricing Approaches for Data Markets | Currently, multiple data vendors utilize the cloud-computing paradigm for trading raw data, associated analytical services, and analytic results as a commodity good. We observe that these vendors often move the functionality of data warehouses to cloud-based platforms. On such platforms, vendors provide services for integrating and analyzing data from public and commercial data sources. We present insights from interviews with seven established vendors about their key challenges with regard to pricing strategies in different market situations and derive associated research problems for the business intelligence community. |
A Frequency-Domain Approach to Watermarking 3D Shapes | This paper presents a robust watermarking algorithm with informed detection for 3D polygonal meshes. The algorithm is based on our previous algorithm [22] that employs mesh-spectral analysis to modify mesh shapes in their transformed domain. This paper presents extensions to our previous algorithm so that (1) much larger meshes can be watermarked within a reasonable time, (2) the watermark is robust against connectivity alteration (e.g., mesh simplification), and (3) the watermark is robust against attacks that combine similarity transformation with such other attacks as cropping, mesh simplification, and smoothing. Experiments showed that our new watermarks are resistant against mesh simplification and remeshing combined with resection, similarity transformation, and other operations. |
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network | Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods. |
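The periodic-shuffling step of the sub-pixel convolution layer described above is a pure rearrangement and can be sketched independently of any deep-learning framework; the layout below follows the common depth-to-space convention (the trained upscaling filters themselves are, of course, not reproduced here):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r^2, H, W) LR feature maps into a (C, H*r, W*r) output.

    The preceding convolution produces r^2 channels per output channel in
    LR space; the shuffle interleaves them onto the HR grid:
    out[c, h*r + i, w*r + j] = x[c*r^2 + i*r + j, h, w].
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # (C, r_h, r_w, H, W)
    x = x.transpose(0, 3, 1, 4, 2)    # (C, H, r_h, W, r_w)
    return x.reshape(c, h * r, w * r)
```

Because the rearrangement happens only once at the very end, all convolutions run on the small LR grid, which is where the reported order-of-magnitude speedup comes from.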
Sensor Fusion-Based Vacant Parking Slot Detection and Tracking | This paper proposes a vacant parking slot detection and tracking system that fuses the sensors of an Around View Monitor (AVM) system and an ultrasonic sensor-based automatic parking system. The proposed system consists of three stages: parking slot marking detection, parking slot occupancy classification, and parking slot marking tracking. The parking slot marking detection stage recognizes various types of parking slot markings using AVM image sequences. It detects parking slots in individual AVM images by exploiting a hierarchical tree structure of parking slot markings and combines sequential detection results. The parking slot occupancy classification stage identifies vacancies of detected parking slots using ultrasonic sensor data. Parking slot occupancy is probabilistically calculated by treating each parking slot region as a single cell of the occupancy grid. The parking slot marking tracking stage continuously estimates the position of the selected parking slot while the ego-vehicle is moving into it. During tracking, AVM images and motion sensor-based odometry are fused together in the chamfer score level to achieve robustness against inevitable occlusions caused by the ego-vehicle. In the experiments, it is shown that the proposed method can recognize the positions and occupancies of various types of parking slot markings and stably track them under practical situations in a real-time manner. The proposed system is expected to help drivers conveniently select one of the available parking slots and support the parking control system by continuously updating the designated target positions. |
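Treating each parking-slot region as a single occupancy-grid cell suggests a standard binary Bayes (log-odds) filter over successive ultrasonic readings; a minimal sketch with illustrative sensor-model probabilities (the paper's exact probabilistic model is not reproduced):

```python
import math

def update_log_odds(log_odds, hit, p_hit=0.7, p_miss=0.4):
    """Binary Bayes filter update for a single parking-slot 'cell'.

    hit=True: the ultrasonic sensor echoed inside the slot region
    (evidence of occupancy); hit=False: the beam passed through.
    p_hit and p_miss are illustrative inverse-sensor-model probabilities.
    """
    p = p_hit if hit else p_miss
    return log_odds + math.log(p / (1.0 - p))

def occupancy_probability(log_odds):
    """Convert accumulated log-odds back to P(occupied)."""
    return 1.0 / (1.0 + math.exp(-log_odds))
```

Starting from log-odds 0 (prior 0.5), repeated hits push the probability toward 1 and repeated misses toward 0, so a slot is declared vacant only after consistent evidence.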
A double-blind placebo-controlled study of lamotrigine monotherapy in outpatients with bipolar I depression. Lamictal 602 Study Group. | BACKGROUND
More treatment options for bipolar depression are needed. Currently available antidepressants may increase the risk of mania and rapid cycling, and mood stabilizers appear to be less effective in treating depression than mania. Preliminary data suggest that lamotrigine, an established antiepileptic drug, may be effective for both the depression and mania associated with bipolar disorder. This is the first controlled multicenter study evaluating lamotrigine monotherapy in the treatment of bipolar I depression.
METHODS
Outpatients with bipolar I disorder experiencing a major depressive episode (DSM-IV, N = 195) received lamotrigine (50 or 200 mg/day) or placebo as monotherapy for 7 weeks. Psychiatric evaluations, including the Hamilton Rating Scale for Depression (HAM-D), the Montgomery-Asberg Depression Rating Scale (MADRS), Mania Rating Scale, and the Clinical Global Impressions scale for Severity (CGI-S) and Improvement (CGI-I) were completed at each weekly visit.
RESULTS
Lamotrigine 200 mg/day demonstrated significant antidepressant efficacy on the 17-item HAM-D, HAM-D Item 1, MADRS, CGI-S, and CGI-I compared with placebo. Improvements were seen as early as week 3. Lamotrigine 50 mg/day also demonstrated efficacy compared with placebo on several measures. The proportions of patients exhibiting a response on CGI-I were 51%, 41%, and 26% for lamotrigine 200 mg/day, lamotrigine 50 mg/day, and placebo groups, respectively. Adverse events and other safety results were similar across treatment groups, except for a higher rate of headache in the lamotrigine groups.
CONCLUSION
Lamotrigine monotherapy is an effective and well-tolerated treatment for bipolar depression. |
Closed-Form Inverse Kinematics Solver for Reconfigurable Robots | A closed-form inverse kinematics solver for non-redundant reconfigurable robots is developed based on the Product-of-Exponentials (POE) formula. Its novelty lies in the use of POE reduction techniques and subproblems to obtain inverse kinematics solutions of a large number of possible configurations in a systematic and convenient way. Three reduction techniques are introduced to simplify the POE equations. Eleven types of subproblems containing geometric solutions of those simplified equations are identified and solved. Based on the sequence and types of robot joints, the solved subproblems can be re-used for inverse kinematics of different robot configurations. This solver can cope with closed-form inverse kinematics of all robots with DOFs of 4 or less, 90 percent of the 5-DOF robots and 50 percent of the 6-DOF robots, as well as frequently used industrial robots with both prismatic and revolute joints. The solver is implemented as a C++ software package and is demonstrated through an example. |
Recovering Risk Aversion from Option Prices and Realized Returns | A relationship exists between aggregate risk-neutral and subjective probability distributions and risk aversion functions. We empirically derive risk aversion functions implied by option prices and realized returns on the S&P500 index simultaneously. These risk aversion functions dramatically change shapes around the 1987 crash: Precrash, they are positive and decreasing in wealth and largely consistent with standard assumptions made in economic theory. Postcrash, they are partially negative and partially increasing and irreconcilable with those assumptions. Mispricing in the option market is the most likely cause. Simulated trading strategies exploiting this mispricing show excess returns, even after accounting for the possibility of further crashes, transaction costs, and hedges against the downside risk. |
Low-Voltage DC Distribution—Utilization Potential in a Large Distribution Network Company | Low-voltage direct-current (LVDC) distribution is a promising solution whose benefits are large power transfer capacity with low voltage, high cost savings potential, and improvements to reliability and voltage quality. Tests by the pilot implementation in the distribution system operator (DSO) Elenia Oy have given promising results. The power transfer capacity of the system has been calculated in this paper using voltage drop and maximum load of cable as boundaries. The branches of the medium-voltage network that can be replaced by LVDC distribution are determined based on the calculations and mass computation of the entire distribution area of Elenia Oy. Based on the electrotechnical and customer outage costs (COC) analyses made, it can be inferred that LVDC distribution has good utilization potential. Based on the power transfer capacity calculations, it is technically possible to replace branch lines up to 8 km long by LVDC distribution, which means about 20% of the total medium-voltage network length in the distribution area of Elenia Oy. This also means huge potential in improving the overall reliability of electricity supply and in reducing outage costs of customers, which are nowadays taken into account in the regulation of the network business. |
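The voltage-drop boundary used in the capacity calculation can be illustrated with elementary DC circuit algebra; all parameter values below are illustrative assumptions, not Elenia Oy's network data:

```python
def max_branch_length_km(v_dc, i_load_a, r_ohm_per_km, max_drop_frac=0.05):
    """Longest LVDC branch before the voltage drop exceeds the limit.

    For a two-conductor DC link the drop is 2 * R' * L * I, so the
    maximum length is L_max = max_drop_frac * V / (2 * R' * I).
    v_dc: link voltage [V]; i_load_a: load current [A];
    r_ohm_per_km: conductor resistance per km [ohm/km].
    """
    return max_drop_frac * v_dc / (2.0 * r_ohm_per_km * i_load_a)
```

With illustrative values (750 V link, 50 A load, 0.2 ohm/km cable, 5% allowed drop) this yields a branch of just under 2 km; halving the load current doubles the reach, which is why low-load rural branch lines are the natural replacement targets.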
Influence and passivity in social media | The ever-increasing amount of information flowing through Social Media forces the members of these networks to compete for attention and influence by relying on other people to spread their message. A large study of information propagation within Twitter reveals that the majority of users act as passive information consumers and do not forward the content to the network. Therefore, in order for individuals to become influential they must not only obtain attention and thus be popular, but also overcome user passivity. We propose an algorithm that determines the influence and passivity of users based on their information forwarding activity. An evaluation performed with a 2.5 million user dataset shows that our influence measure is a good predictor of URL clicks, outperforming several other measures that do not explicitly take user passivity into account. We demonstrate that high popularity does not necessarily imply high influence and vice-versa. |
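The mutual reinforcement between influence and passivity can be sketched as a HITS-style iteration; this is a simplified illustration in the spirit of the IP algorithm, not a reproduction of the paper's exact acceptance/rejection weights:

```python
import numpy as np

def influence_passivity(A, F, n_iter=100):
    """HITS-style mutual-reinforcement iteration (simplified IP sketch).

    A[i, j] = 1 if user j is exposed to user i's content, else 0.
    F[i, j] = rate in [0, 1] at which j forwards i's content.
    A user is passive when it receives influential content but does not
    forward it; a user is influential when forwarded by passive users.
    """
    n = A.shape[0]
    influence = np.full(n, 1.0 / n)
    passivity = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        new_p = (A * (1.0 - F)).T @ influence  # rejecting influential content
        new_i = (A * F) @ passivity            # being forwarded by passive users
        if new_p.sum() > 0:
            passivity = new_p / new_p.sum()
        if new_i.sum() > 0:
            influence = new_i / new_i.sum()
    return influence, passivity
```

In a toy network where user 0 broadcasts to users 1 and 2 but only user 1 forwards, the iteration assigns the highest influence to user 0 and marks the non-forwarding user 2 as the most passive.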
MetaExp: Interactive Explanation and Exploration of Large Knowledge Graphs | We present MetaExp, a system that assists the user during the exploration of large knowledge graphs, given two sets of initial nodes. At its core, MetaExp presents a small set of meta-paths to the user, which are sequences of relationships among node types. Such meta-paths do not overwhelm the user with complex structures, yet they preserve semantically-rich relationships in a graph. MetaExp engages the user in an interactive procedure, which involves simple meta-path evaluations to infer a user-specific similarity measure. This similarity measure incorporates the domain knowledge and the preferences of the user, overcoming the fundamental limitations of previous methods based on local node neighborhoods or statically determined similarity scores. Our system provides a user-friendly interface for searching initial nodes and guides the user towards progressive refinements of the meta-paths. The system is demonstrated on three datasets: Freebase, a movie database, and a biological network. |
Predicting antimicrobial peptides from eukaryotic genomes: In silico strategies to develop antibiotics | A remarkable and intriguing challenge for modern medicine is the development of alternative therapies to avoid the problem of microbial resistance. Cationic antimicrobial peptides are promising candidates for developing more efficient drugs for human health. In silico analysis of genomic databases is a strategy used to predict peptides of therapeutic interest. Since the main physicochemical properties of antimicrobial peptides are already known, correlating those features in database searches is a tool to shorten the identification of new antibiotics. This study reports the identification of antimicrobial peptides by theoretical analyses scanning the Paracoccidioides brasiliensis transcriptome and the human genome databases. The identified sequences were synthesized and investigated for hemocompatibility and antimicrobial activity. Two peptides presented antifungal activity against Candida albicans. Furthermore, three peptides exhibited antibacterial effects against Staphylococcus aureus and Escherichia coli; one of them presented high potential to kill both pathogens, with superior activity in comparison to chloramphenicol. None of them showed toxicity to mammalian cells. In silico structural analyses were performed to better understand the function-structure relation, clearly demonstrating the necessity of cationic peptide surfaces and the exposure of hydrophobic amino acid residues. In summary, our results suggest that the use of computational programs to identify and evaluate antimicrobial peptides from genomic databases is a remarkable tool that could shorten the search for peptides with biotechnological potential from natural resources. |
Learning to Reweight Examples for Robust Deep Learning | Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noises. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically consist of functions of the cost value of each example, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available. |
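For a linear-regression toy, the one-step meta-gradient described above reduces to weighting each example by the clipped, normalized alignment of its gradient with the gradient on the clean validation set; a sketch of that special case (the full method trains a deep network, which is not reproduced here):

```python
import numpy as np

def reweight_batch(X, y, X_val, y_val, theta):
    """Gradient-alignment example weights for a linear-regression toy.

    Per-example gradient g_i = (x_i . theta - y_i) * x_i. The weight of
    example i is max(0, g_i . g_val) normalized over the batch, so examples
    whose gradients conflict with the clean-validation gradient (e.g.
    corrupted labels) receive zero weight.
    """
    g = (X @ theta - y)[:, None] * X                        # (n, d) per-example grads
    g_val = X_val.T @ (X_val @ theta - y_val) / len(y_val)  # (d,) validation grad
    align = np.maximum(0.0, g @ g_val)                      # clip negative alignment
    s = align.sum()
    return align / s if s > 0 else np.full(len(y), 1.0 / len(y))
```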
Towards an Empirically Grounded Theory of Action for Improving the Quality of Mathematics Teaching at Scale | Our purpose in this article is to propose a comprehensive, empirically grounded theory of action for improving the quality of mathematics teaching at scale. In doing so, we summarise current research findings that can inform efforts to improve the quality of mathematics instruction on a large scale, and identify questions that are yet to be addressed. We draw on an ongoing collaboration with mathematics teachers, school leaders, and district leaders in four urban school districts in the US. The provisional theory of action that we report encompasses a coherent system of supports for ambitious instruction that includes both formal and job-embedded teacher professional development, teacher networks, mathematics coaches’ practices in providing job-embedded support for teachers’ learning, school leaders’ practices as instructional leaders in mathematics, and district leaders’ practices in supporting the development of school-level capacity for instructional improvement. |
Collaborative personalized tweet recommendation | Twitter has rapidly grown to a popular social network in recent years and provides a large number of real-time messages for users. Tweets are presented in chronological order and users scan the followees' timelines to find what they are interested in. However, an information overload problem has troubled many users, especially those with many followees and thousands of tweets arriving every day. In this paper, we focus on recommending useful tweets that users are really interested in personally to reduce the users' effort to find useful information. Many kinds of information on Twitter are available for helping recommendation, including the user's own tweet history, retweet history and social relations between users. We propose a method of making tweet recommendations based on collaborative ranking to capture personal interests. It can also conveniently integrate the other useful contextual information. Our final method considers three major elements on Twitter: tweet topic level factors, user social relation factors and explicit features such as authority of the publisher and quality of the tweet. The experiments show that all the proposed elements are important and our method greatly outperforms several baseline methods. |
Comparison of the clinical hypnotic effects of Zopiclone and Triazolam | In a double-blind crossover therapeutic trial, the hypnotic effects of Zopiclone 7.5 mg and Triazolam 0.5 mg, given orally at bedtime for 7 consecutive days, were compared. 5 of the items in Spiegel's questionnaire and 12 of the 18 items in Norris' visual analogue scale were significantly more improved by Zopiclone than by Triazolam. Both drugs caused few side-effects. |
Topology and Geometry of Half-Rectified Network Optimization | The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of a high-dimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout the learning phase, suggesting near-convex behavior, but they become exponentially more curved as the energy level decays, in accordance with what is observed in practice with very low curvature attractors. |
Hex-Layer: Layered All-Hex Mesh Generation on Thin Section Solids via Chordal Surface Transformation | This paper proposes the chordal surface transform for representation and discretization of thin section solids, such as automobile bodies, plastic injection mold components and sheet metal parts. A multiple-layered all-hex mesh with a high aspect ratio is a typical requirement for mold flow simulation of thin section objects. The chordal surface transform reduces the problem of 3D hex meshing to 2D quad meshing on the chordal surface. The chordal surface is generated by cutting a tet mesh of the input CAD model at its mid plane. The radius function and curvature of the chordal surface are used to provide a sizing function for quad meshing. A two-way mapping between the chordal surface and the boundary is used to sweep the quad elements from the chordal surface onto the boundary, resulting in a layered all-hex mesh. The algorithm has been tested on industrial models whose chordal surface is 2-manifold. The graphical results of the chordal surface and the multiple-layered all-hex mesh are presented along with the quality measures. The results show a geometrically adaptive, high aspect ratio all-hex mesh whose average scaled Jacobian is close to 1.0. |
REGULARIZATION TOOLS: A Matlab package for analysis and solution of discrete ill-posed problems | The package REGULARIZATION TOOLS consists of 54 Matlab routines for analysis and solution of discrete ill-posed problems, i.e., systems of linear equations whose coefficient matrix has the properties that its condition number is very large, and its singular values decay gradually to zero. Such problems typically arise in connection with discretization of Fredholm integral equations of the first kind, and similar ill-posed problems. Some form of regularization is always required in order to compute a stabilized solution to discrete ill-posed problems. The purpose of REGULARIZATION TOOLS is to provide the user with easy-to-use routines, based on numerically robust and efficient algorithms, for doing experiments with regularization of discrete ill-posed problems. By means of this package, the user can experiment with different regularization strategies, compare them, and draw conclusions from these experiments that would otherwise require a major programming effort. For discrete ill-posed problems, which are indeed difficult to treat numerically, such an approach is certainly superior to a single black-box routine. This paper describes the underlying theory and gives an overview of the package; a complete manual is also available. |
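The core idea behind such a package, stabilizing an ill-conditioned linear system by regularization, can be sketched in a few lines. This is a minimal illustration of Tikhonov regularization in Python, not the package's own Matlab code, and the example matrix is made up:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# A tiny ill-conditioned system: nearly collinear columns.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x_naive = np.linalg.solve(A, b)   # exact, but wildly sensitive to noise in b
x_reg = tikhonov(A, b, lam=1e-2)  # stabilized solution
```

Perturbing `b` by as little as 1e-4 moves the naive solution by order one, while the regularized solution barely changes; this sensitivity to noise is exactly what the large condition number in the abstract refers to.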
Hierarchical control and learning for Markov decision processes | This dissertation investigates the use of hierarchy and problem decomposition as a means of solving large, stochastic, sequential decision problems. These problems are framed as Markov decision problems (MDPs). The new technical content of this dissertation begins with a discussion of the concept of temporal abstraction. Temporal abstraction is shown to be equivalent to the transformation of a policy defined over a region of an MDP to an action in a semi-Markov decision problem (SMDP). Several algorithms are presented for performing this transformation efficiently. This dissertation introduces the HAM method for generating hierarchical, temporally abstract actions. This method permits the partial specification of abstract actions in a way that corresponds to an abstract plan or strategy. Abstract actions specified as HAMs can be optimally refined for new tasks by solving a reduced SMDP. The formal results show that traditional MDP algorithms can be used to optimally refine HAMs for new tasks. This can be achieved in much less time than it would take to learn a new policy for the task from scratch. HAMs complement some novel decomposition algorithms that are presented in this dissertation. These algorithms work by constructing a cache of policies for different regions of the MDP and then optimally combining the cached solutions to produce a global solution that is within provable bounds of the optimal solution. Together, the methods developed in this dissertation provide important tools for producing good policies for large MDPs. Unlike some ad hoc methods, these methods provide strong formal guarantees. They use prior knowledge in a principled way, and they reduce larger MDPs into smaller ones while maintaining a well-defined relationship between the smaller problem and the larger problem. |
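As background for the MDP machinery the dissertation builds on, standard value iteration on a small MDP can be sketched as follows. This is an illustrative toy problem, not the HAM algorithm itself, and the transition and reward numbers are invented:

```python
import numpy as np

# Toy 3-state, 2-action MDP: P[a][s][s'] transition probabilities,
# R[s][a] rewards (reward only for being in state 2).
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.9, 0.0], [0.0, 0.1, 0.9]],  # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],  # action 1
])
R = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
gamma = 0.9

def value_iteration(P, R, gamma, tol=1e-8):
    """Return the optimal state values and a greedy policy."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        Q = R.T + gamma * (P @ V)   # Q[a, s]: value of action a in state s
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

The dissertation's point is that hierarchical methods avoid running this kind of flat computation over the full state space of a large MDP, by solving smaller regional problems and composing them.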
A Survey on Eye-Gaze Tracking Techniques | The study of eye movement is employed in Human Computer Interaction (HCI) research. Eye-gaze tracking is one of the most challenging problems in the area of computer vision. The goal of this paper is to present a review of the latest research in the continually growing field of remote eye-gaze tracking. This overview includes the basic definitions and terminology, recent advances in the field and, finally, the need for future development in the field. |
Radio-Over-Fiber Architectures | Future multigigabit wireless systems (MGWS) dedicated to home-area networking are emerging or are under development. They achieve high data rates of up to 5-6 Gb/s over short range. To expand the radio coverage, we propose radio-over-fiber (RoF) architectures, from the most basic (point-to-point) to the most innovative (multipoint-to-multipoint). These RoF infrastructures combine radio and optical links. The simplest way to implement RoF technology is to manage the optical media, either using information from the physical (PHY) layer or using information from the media access control (MAC) layer of the radio system. |
Five generations in the nursing workforce: implications for nursing professional development. | Positive patient outcomes require effective teamwork, communication, and technological literacy. These skills vary among the unprecedented five generations in the nursing workforce, spanning the "Silent Generation" nurses deferring retirement to the newest "iGeneration." Nursing professional development educators must understand generational differences; address communication, information technology, and team-building competencies across generations; and promote integration of learner-centered strategies into professional development activities. |
Things Fall Apart: Topology Change From Winding Tachyons | We argue that closed string tachyons drive two spacetime topology changing transitions – loss of genus in a Riemann surface and separation of a Riemann surface into two components. The tachyons of interest are localized versions of Scherk-Schwarz winding string tachyons arising on Riemann surfaces in regions of moduli space where string-scale tubes develop. Spacetime and world-sheet renormalization group analyses provide strong evidence that the decay of these tachyons removes a portion of the spacetime, splitting the tube into two pieces. We address the fate of the gauge fields and charges lost in the process, generalize it to situations with weak flux backgrounds, and use this process to study the type 0 tachyon, providing further evidence that its decay drives the theory sub-critical. Finally, we discuss the time-dependent dynamics of this topology-changing transition and find that it can occur more efficiently than analogous transitions on extended supersymmetric moduli spaces, which are limited by moduli trapping. |
Comparative study of using different electric motors in the electric vehicles | In this paper, different electric motors are studied and compared to see the benefits of each motor and which one is most suitable for electric vehicle (EV) applications. Five main electric motor types are studied: DC, induction, permanent magnet synchronous, switched reluctance and brushless DC motors. It is concluded that although induction motor technology is more mature than the others, for EV applications the brushless DC and permanent magnet motors are more suitable. The use of these motors will result in less pollution, less fuel consumption and a higher power-to-volume ratio. The falling prices of permanent magnet materials and the trend of increasing efficiency in permanent magnet and brushless DC motors make them more and more attractive for EV applications. |
Ethnic Literature of the U.S. - Internment and Post-War Japanese American Literature: Toward a Theory of Divine Citizenship | World War II presented American men with an opportunity to serve their country and thereby demonstrate their patriotism. This avenue to proving one's national loyalty became a double-edged sword for Japanese American males and radically impacted how this group envisioned and reconceptualized their relationship to the nation during and after the crisis of citizenship and national identity forced by the war. (1) The brave and groundbreaking services of the Women's Auxiliary Army Corps (WAAC) notwithstanding, American women of Japanese ancestry endured their own gender-specific challenges to negotiating a new understanding of Japanese American female citizenship that did not involve military service to the nation. Two works--one by a Japanese American woman and one by a Japanese American man--illustrate this essay's argument: that although socially constructed ideologies of gender determined the specific trials and tests Japanese Americans faced during World War II, responses to the marginalization of Japanese American citizenship can be characterized as exemplars of what I term "divine citizenship." Textual performances of divine citizenship bear witness to wrongs committed by the state against its subject while also modeling a renovated relationship between the state and the (wronged) citizen--a relationship in which reconciliation and forgiveness might occur. Although disparate in style, tone, narrative structure, and time frame, Mine Okubo's Citizen 13660 (1946) and John Okada's No-No Boy (1957) challenge the default notion of citizenship as a universal category. (2) These books can also serve as test cases for inquiry into how gender affects citizenship formation and revision. 
The title of Okada's work implies a gender specificity to the act of citizenship, while the title of Okubo's work obscures gender under national identity, a universalizing move that renders less visible her focus on how gender impacts citizenship. Begun while Okubo was in camp, Citizen 13660 reflects needs paramount during internment: to endure and negotiate the crisis. Writing post-war, Okada may have had sufficient critical distance to engage with issues of reconstruction. In the internment camp experience Okubo represents, females enjoy a latitude in gender norms, though they are still constrained in their ability to realize fully their identity as citizens. (3) Okada's novel takes up the question of the Japanese American's post-war relationship to the state and locates the particular challenges for men in a masculine context of military service and patriotism. At the same time, the novel treats female Japanese Americans as tools for personal growth, reifying their status as passive and maternal, paying little attention to their struggles to reframe citizenship. Both books underscore the existential angst that followed the government's actions and the way these actions forced Japanese Americans to question their identity and their relationship to the state. In this sense, Okubo's experience in the camps and Okada's treatment of the period after their dissolution offer parallel portrayals of citizenship reformulation during and after crisis. As works of Asian American literature, these books speak to specific historical events that loom large in Japanese American consciousness. As American texts, they offer a glimpse at how relationships among the nation, particular ethnic groups, and individuals are shaped, challenged, and reconfigured. Finally, as texts in which gender issues are linked to the internment, these books demonstrate how ties between a nation and its citizens are structured by ideologies of gender. 
This last point responds to those critics--most notably Shirley Geok-lin Lim, Sau-ling Cynthia Wong, and Jeffrey J. Santa Ana--who have called upon scholars to consider how Asian American texts link race and gender. (4) Wong and Santa Ana, in particular, underscore the need for critics to produce "historicized critical analyses of Asian American gender and sexuality" (214) in order to theorize American national identity. … |
NEEDS, RATIONALE AND CONCEPTUAL ALTERNATIVES FOR COMBATTING AIRCRAFT POLLUTIONS | Depletion of the ozone layer in the upper atmosphere has the potential to change its chemistry and the Earth's environment. Various pollutants contribute to this depletion, which started as early as the 1930s. High-altitude aircraft bring pollution to a fairly quiet region of the atmosphere, closer to the ozone layer than ground pollution. The aircraft pollution problem represents a large-scale integrated system engineering problem in which many requirements conflict and must be traded off. System engineering principles have been applied to a conceptual study of the aircraft transportation system, its interface with the global environment and the social structures in society, and to the analysis and conceptual design of plans to combat the pollution effects of civil jet aircraft. |
Plasma Cathepsin S and Cystatin C Levels and Risk of Abdominal Aortic Aneurysm: A Randomized Population–Based Study | BACKGROUND
Human abdominal aortic aneurysm (AAA) lesions contain high levels of cathepsin S (CatS), but are deficient in its inhibitor, cystatin C. Whether plasma CatS and cystatin C levels are also altered in AAA patients remains unknown.
METHODS AND RESULTS
Plasma samples were collected from 476 male AAA patients and 200 age-matched male controls to determine CatS and cystatin C levels by ELISA. Student's t test demonstrated higher plasma levels of total, active, and pro-CatS in AAA patients than in controls (P<0.001). ROC curve analysis confirmed higher plasma total, active, and pro-CatS levels in AAA patients than in controls (P<0.001). Logistic regression suggested that plasma total (odds ratio [OR] = 1.332), active (OR = 1.21), and pro-CatS (OR = 1.25) levels were independent risk factors that associated positively with AAA (P<0.001). Plasma cystatin C levels associated significantly, but negatively, with AAA (OR = 0.356, P<0.001). Univariate correlation demonstrated that plasma total and active CatS levels correlated positively with body-mass index, diastolic blood pressure, and aortic diameter, but negatively with the lowest ankle-brachial index (ABI). Plasma cystatin C levels also correlated negatively with the lowest ABI. Multivariate linear regression showed that plasma total, active, and pro-CatS levels correlated positively with aortic diameter and negatively with the lowest ABI, whereas plasma cystatin C levels correlated negatively with aortic diameter and the lowest ABI, after adjusting for common AAA risk factors.
CONCLUSIONS
The correlation of plasma CatS and cystatin C with aortic diameter and the lowest ABI suggests that these serological parameters may serve as biomarkers for human peripheral arterial disease and AAA. |
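For readers unfamiliar with the odds ratios quoted above: in logistic regression, the odds ratio for a predictor is the exponential of its fitted coefficient, OR = exp(β). A minimal sketch; the β value here is hypothetical, back-calculated to match the reported OR of 1.332:

```python
import math

# Logistic regression fits a coefficient beta per unit of the predictor;
# the odds ratio is exp(beta). Beta here is hypothetical, back-calculated
# to match the reported OR of 1.332 for total CatS.
beta = 0.2867
odds_ratio = math.exp(beta)        # ~1.33: ~33% higher odds per unit increase
protective_beta = math.log(0.356)  # a negative coefficient gives OR < 1,
                                   # as reported for cystatin C
```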
Engaging citizens: The role of power-sharing institutions | Drawing on established theories of comparative political institutions, we argue that democratic institutions carry important messages that influence mass attitudes and behaviors. Power-sharing political institutions signal to citizens that inclusiveness is an important principle of a country’s democracy and can encourage citizens to participate in politics. Applying multilevel modeling to data from the World Values Survey, we test whether democratic institutions influence political engagement in 34 countries. Further, we examine whether underrepresented groups, specifically women, are differentially affected by the use of power-sharing institutions such that they are more engaged in politics than women in countries with power-concentrating institutions. We find that disproportional electoral rules dampen engagement overall and that gender gaps in political engagement tend to be smaller in more proportional electoral systems, even after controlling for a host of other factors. Power-sharing institutions can be critical for explaining gender differences in political engagement. |
How do Mixture Density RNNs Predict the Future? | Gaining a better understanding of how and what machine learning systems learn is important to increase confidence in their decisions and catalyze further research. In this paper, we analyze the predictions made by a specific type of recurrent neural network, mixture density RNNs (MD-RNNs). These networks learn to model predictions as a combination of multiple Gaussian distributions, making them particularly interesting for problems where a sequence of inputs may lead to several distinct future possibilities. An example is learning internal models of an environment, where different events may or may not occur, but where the average over different events is not meaningful. By analyzing the predictions made by trained MD-RNNs, we find that their different Gaussian components have two complementary roles: 1) Separately modeling different stochastic events and 2) Separately modeling scenarios governed by different rules. These findings increase our understanding of what is learned by predictive MD-RNNs, and open up new research directions for further understanding how we can benefit from their self-organizing model decomposition. |
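The mixture-of-Gaussians output the paper analyzes can be illustrated with a toy 1-D mixture: each component models a distinct future scenario, while the plain average of samples falls between the modes and describes neither — the abstract's point about averages being meaningless. A hypothetical sketch with made-up components, not the paper's trained networks:

```python
import random

def mixture_sample(weights, means, stds, rng):
    """Draw one sample from a 1-D Gaussian mixture, as an MD-RNN head would."""
    k = rng.choices(range(len(weights)), weights=weights)[0]  # pick a component
    return rng.gauss(means[k], stds[k])

# Two components standing in for two distinct future scenarios.
weights, means, stds = [0.7, 0.3], [-1.0, 2.0], [0.1, 0.1]
rng = random.Random(0)
samples = [mixture_sample(weights, means, stds, rng) for _ in range(1000)]
average = sum(samples) / len(samples)
# Every sample sits near one of the two modes, but the plain average
# (~ -0.1) lies between them and matches neither scenario.
```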
Prolonged intrathecal chemotherapy replacing cranial irradiation in high-risk acute lymphatic leukaemia: Long-term follow up with cerebral computed tomography scans and endocrinological studies | Cranial irradiation in children with acute lymphatic leukaemia (ALL) decreases the risk of CNS relapse but is associated with serious long-term side-effects. We present the long-term outcome of 21 children with high-risk ALL who received prolonged intrathecal chemotherapy instead of the recommended cranial irradiation. Intrathecal triple therapy (methotrexate, hydrocortisone, and cytarabine) was administered every 2nd month throughout the maintenance phase. The average number of courses of intrathecal methotrexate was 8.7 and of triple therapy 9.0. The 5-year event-free survival was 79%. No CNS relapses occurred. CT scan was performed at diagnosis, at cessation of therapy, and 3 years thereafter. No density abnormalities, pathological contrast enhancement, ventricular dilatation, or calcifications were found. One child showed cortical atrophy both at diagnosis and at cessation of therapy. There was a slight decrease in height SDS with time but no change in weight SDS. Delayed bone age was found in 5 children. No abnormalities of growth hormone, thyroid, adrenal, or gonadal function were observed. The study indicates that extended intrathecal chemotherapy in children with high-risk ALL may provide effective protection from CNS relapses and is associated with a low risk of long-term side-effects. |
GENETIC ALGORITHM APPLIED TO OPTIMIZATION OF THE SHIP HULL FORM WITH RESPECT TO SEAKEEPING PERFORMANCE | Hull form optimization from a hydrodynamic performance point of view is an important aspect in preliminary ship design. This study presents a computational method to estimate the ship seakeeping in regular head waves. In the optimization process, the genetic algorithm (GA) is linked to the computational method to obtain an optimum hull form by taking into account the displacement as a design constraint. New hull forms are obtained from the well-known S60 hull and the classical Wigley hull taken as initial hulls in the optimization process at two Froude numbers (Fn=0.2 and Fn=0.3). The optimization variables are a combination of ship hull offsets and main dimensions. The objective function of the optimization procedure includes the peak values for vertical absolute motion at the centre of gravity (CG) and the bow point (0.15Lwl) behind the forward perpendicular (FP). |
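The GA loop such a study relies on (selection, crossover, mutation, all under a fitness function) can be sketched generically. This toy example minimizes a simple stand-in objective and is not the authors' implementation; a real run would evaluate the seakeeping response of each candidate hull instead:

```python
import random

def genetic_minimize(objective, n_vars=2, pop_size=40, generations=200,
                     mutation_rate=0.1, seed=0):
    """Elitist GA: selection, one-point crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)          # rank by fitness (lower is better)
        elite = pop[:pop_size // 2]      # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_vars)   # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g + rng.gauss(0.0, 0.1)   # Gaussian mutation
                             if rng.random() < mutation_rate else g
                             for g in child])
        pop = elite + children
    return min(pop, key=objective)

# Stand-in objective: squared distance from (0.3, -0.2).
best = genetic_minimize(lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2)
```

A design constraint such as the displacement in the abstract is typically handled by penalizing infeasible candidates inside the objective function.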
Differences in human meibum lipid composition with meibomian gland dysfunction using NMR and principal component analysis. | PURPOSE
Nuclear magnetic resonance (NMR) spectroscopy has been used to quantify lipid wax, cholesterol ester, terpenoid and glyceride composition, saturation, oxidation, and CH₂ and CH₃ moiety distribution. This tool was used to measure changes in human meibum composition with meibomian gland dysfunction (MGD).
METHODS
(1)H-NMR spectra of meibum from 39 donors with meibomian gland dysfunction (Md) were compared to meibum from 33 normal donors (Mn).
RESULTS
Principal component analysis (PCA) was applied to the CH₂/CH₃ regions of a set of training NMR spectra of human meibum. PCA discriminated between Mn and Md with an accuracy of 86%. There was a bias toward more accurately predicting normal samples (92%) compared with predicting MGD samples (78%). When the NMR spectra of Md were compared with those of Mn, three statistically significant decreases were observed in the relative amounts of CH₃ moieties at 1.26 ppm, the products of lipid oxidation above 7 ppm, and the =CH moieties at 5.2 ppm associated with terpenoids.
CONCLUSIONS
Loss of the terpenoids could be deleterious to meibum since they exhibit a plethora of mostly positive biological functions and could account for the lower level of cholesterol esters observed in Md compared with Mn. All three changes could account for the higher degree of lipid order of Md compared with age-matched Mn. In addition to the power of NMR spectroscopy to detect differences in the composition of meibum, it is promising that NMR can be used as a diagnostic tool. |
Smart Homes for Elderly Healthcare—Recent Advances and Research Challenges | Advancements in medical science and technology, medicine and public health coupled with increased consciousness about nutrition and environmental and personal hygiene have paved the way for the dramatic increase in life expectancy globally in the past several decades. However, increased life expectancy has given rise to an increasing aging population, thus jeopardizing the socio-economic structure of many countries in terms of costs associated with elderly healthcare and wellbeing. In order to cope with the growing need for elderly healthcare services, it is essential to develop affordable, unobtrusive and easy-to-use healthcare solutions. Smart homes, which incorporate environmental and wearable medical sensors, actuators, and modern communication and information technologies, can enable continuous and remote monitoring of elderly health and wellbeing at a low cost. Smart homes may allow the elderly to stay in their comfortable home environments instead of expensive and limited healthcare facilities. Healthcare personnel can also keep track of the overall health condition of the elderly in real-time and provide feedback and support from distant facilities. In this paper, we have presented a comprehensive review on the state-of-the-art research and development in smart home based remote healthcare technologies. |
The association of cyclin D1 G870A and E-cadherin C-160A polymorphisms with the risk of colorectal cancer in a case control study and meta-analysis. | Cyclin D1 (CCND1) and E-cadherin (CDH1) have been shown to be important genes of the beta-catenin/LEF pathway that is involved in colorectal carcinogenesis. However, epidemiological studies on the relationship between genetic variants of these two genes and colorectal cancer (CRC) have shown inconsistent results. In a population-based case-control study (498 cases and 600 controls), we assessed the association of CCND1 G870A and CDH1 C-160A polymorphisms with CRC risk. Multivariable logistic regression analysis was used to estimate the association between genotypes, environmental exposures and CRC risk, adjusting for potential confounders. Compared to common homozygotes, the OR for the heterozygous and homozygous variant genotype was 1.08 (95% CI, 0.80-1.46) in CCND1 and 0.97 (95% CI, 0.75-1.25) in CDH1. Neither tumor stage nor location showed an association with genetic susceptibility. However, a significant interaction between hormone replacement therapy (HRT) and CCND1 genotypes in CRC risk was found among postmenopausal women (p(interaction) = 0.02). The risk reduction associated with HRT was substantial (OR, 0.09; 95% CI, 0.02-0.35) in women who were GG homozygous. A meta-analysis including 11 published studies on CCND1 G870A in addition to our study showed a slightly increased risk of CRC for carriers of the A allele (OR, 1.19; 95% CI, 1.06-1.34); however, there was some indication of publication bias. We conclude that the CCND1 G870A and CDH1 C-160A polymorphisms are not associated with the risk of CRC in the German population. However, the CCND1 G870A polymorphism may modify the protective effect of postmenopausal hormone use on the development of CRC. |
Effects of oral alendronate in elderly patients with osteoporosis and mild primary hyperparathyroidism. | In a large proportion of the patients with primary hyperparathyroidism (PHPT), a variable degree of osteopenia is the only relevant manifestation of the disease. Low bone mineral density (BMD) in patients with PHPT is an indication for surgical intervention because successful parathyroidectomy results in a dramatic increase in BMD. However, low BMD values are almost an invariable finding in elderly women with PHPT, who are often either unwilling or considered unfit for surgery. Bisphosphonates are capable of suppressing parathyroid hormone (PTH)-mediated bone resorption and are useful for the prevention and treatment of postmenopausal osteoporosis. In this pilot-controlled study, we investigated the effects of oral treatment with alendronate on BMD and biochemical markers of calcium and bone metabolism in elderly women presenting osteoporosis and mild PHPT. Twenty-six elderly patients aged 67-81 years were randomized for treatment with either oral 10 mg alendronate on alternate-day treatment or no treatment for 2 years. In the control untreated patients a slight significant decrease was observed for total body and femoral neck BMD, without significant changes in biochemical markers of calcium and bone metabolism during the 2 years of observation. Urine deoxypyridinoline (Dpyr) excretion significantly fell within the first month of treatment with alendronate, while serum markers of bone formation alkaline phosphatase and osteocalcin fell more gradually and the decrease became significant only after 3 months of treatment; thereafter all bone turnover markers remained consistently suppressed during alendronate treatment. After 2 years in this group we observed statistically significant increases in BMD at lumbar spine, total hip, and total body (+8.6 +/- 3.0%, +4.8 +/- 3.9%, and +1.2 +/- 1.4% changes vs. baseline, mean +/- SD) versus both baseline and control patients. Serum calcium, serum phosphate, and urinary calcium excretion significantly decreased during the first 3-6 months but rose back to the baseline values afterward. Increase in serum PTH level was statistically significant during the first year of treatment. These preliminary results may make alendronate a candidate as a supportive therapy in patients with mild PHPT who are unwilling or are unsuitable for surgery, and for whom osteoporosis is a reason of concern. |
A randomized clinical trial of an individualized home-based exercise programme for women with fibromyalgia. | OBJECTIVE
To determine the efficacy of a 12-week individualized home-based exercise programme on physical functioning, pain severity and psychological distress for women with fibromyalgia (FM).
METHODS
Seventy-nine women with a primary diagnosis of FM were randomized to a 12-week individualized home-based moderate-intensity exercise programme or to a usual care control group. Outcomes were functional capacity (Fibromyalgia Impact Questionnaire), pain severity and psychological distress. Outcomes were measured at study entry, at the end of the 12-week intervention, and at 3 and 9 months following completion of the intervention.
RESULTS
On the basis of intention-to-treat analyses, a significant improvement in functional capacity at 3 and 9 months following treatment for participants in the exercise group who were more functionally disabled at study entry was observed. At both 3 and 9 months post-treatment, the mean estimated benefit of the intervention was more than 10 points [-12.3 (95% CI, -21.9 to -2.8); -10.8 (95% CI, -21.5 to -0.2)]. Compared with the control group, statistically significant improvements in upper body pain were evident in the exercise group at post-treatment. These between-group differences in upper body pain were maintained at 3 and 9 months post-treatment. No statistically significant group differences on lower body pain and psychological distress were found.
CONCLUSIONS
Home-based exercise, a relatively low-cost treatment modality, has the potential to improve important health outcomes in FM. |
Self-Organising management of Grid environments | This paper presents basic concepts, architectural principles and algorithms for efficient resource and security management in cluster computing environments and the Grid. The work presented in this paper is funded by BTExacT and the EPSRC project SO-GRM (GR/S21939). |