title | abstract |
---|---|
Data mining for feature selection in gene expression autism data | The paper presents the application of data mining methods for recognizing the most significant genes and gene sequences (treated as features) stored in a gene expression microarray dataset. The investigations are performed on autism data. A few chosen feature selection methods have been applied and their results integrated into the final outcome. In this way we identify a small set of the most important genes associated with autism. These genes have been applied in a classification procedure aimed at distinguishing autism cases from reference group members. The results of numerical experiments concerning the selection of the most important genes and the classification of cases on the basis of the selected genes are discussed. The main contribution of the paper is a fusion system that integrates the results of many selection approaches into a final set of genes most closely associated with autism. We also propose a special procedure for estimating the number of highest-ranked genes to use in the classification procedure. |
Low extraversion and high neuroticism as indices of genetic and environmental risk for social phobia, agoraphobia, and animal phobia. | OBJECTIVE
The authors examined the extent to which two major personality dimensions (extraversion and neuroticism) index the genetic and environmental risk for three phobias (social phobia, agoraphobia, and animal phobia) in twins ascertained from a large, population-based registry.
METHOD
Lifetime phobias and personality traits were assessed through diagnostic interview and self-report questionnaire, respectively, in 7,800 twins from female-female, male-male, and opposite-sex pairs. Sex-limited trivariate Cholesky structural equation models were used to decompose the correlations among extraversion, neuroticism, and each phobia.
RESULTS
In the best-fitting models, genetic correlations were moderate and negative between extraversion and both social phobia and agoraphobia, and that between extraversion and animal phobia was effectively zero. Genetic correlations were high and positive between neuroticism and both social phobia and agoraphobia, and that between neuroticism and animal phobia was moderate. All of the genetic risk factors for social phobia and agoraphobia were shared with those that influence extraversion and neuroticism; in contrast, only a small proportion of the genetic risk factors for animal phobia (16%) was shared with those that influence personality. Shared environmental experiences were not a source of correlations between personality traits and phobias, and unique environmental correlations were relatively modest.
CONCLUSION
Genetic factors that influence individual variation in extraversion and neuroticism appear to account entirely for the genetic liability to social phobia and agoraphobia, but not animal phobia. These findings underline the importance of both introversion (low extraversion) and neuroticism in some psychiatric disorders. |
A Framework for Blockchain-Based Applications | Blockchains have recently generated explosive interest from both academia and industry, with many proposed applications. But descriptions of many of these proposals are more visionary projections than realizable plans, and even basic definitions are often missing. We define “blockchain” and “blockchain network”, and then discuss two very different, well-known classes of blockchain networks: cryptocurrencies and Git repositories. We identify common primitive elements of both and use them to construct a framework for explicitly articulating what characterizes blockchain networks. The framework consists of a set of questions that every blockchain initiative should address at the very outset. It is intended to help one decide whether or not blockchain is an appropriate approach to a particular application and, if it is, to assist in its initial design stage. |
Bendtroller: An Exploration of In-Game Action Mappings with a Deformable Game Controller | We explore controller input mappings for games using a deformable prototype that combines deformation gestures with standard button input. In study one, we tested discrete gestures using three simple games. We categorized the control schemes as binary (button only), action, and navigation, the latter two named for the game mechanics mapped to the gestures. We found that the binary scheme performed best, but gesture-based control schemes are stimulating and appealing. Results also suggest that deformation gestures are best mapped to simple and natural tasks. In study two, we tested continuous gestures in a 3D racing game using the same control scheme categorization. Results were mostly consistent with study one but showed an improvement in performance and preference for the action control scheme. |
Organic matter interactions with natural manganese oxide and synthetic birnessite. | Redox reactions of inorganic and organic contaminants on manganese oxides have been widely studied. However, these reactions are strongly affected by the presence of natural organic matter (NOM) at the surface of the manganese oxide, and the mechanism behind NOM adsorption onto manganese oxides remains unclear. Therefore, in this study, the adsorption kinetics and equilibria of different NOM isolates on synthetic manganese oxide (birnessite) and natural manganese oxide (Mn sand) were investigated. The natural manganese oxide is composed of both amorphous and well-crystallised Mn phases (i.e., lithiophorite, birnessite, and cryptomelane). NOM adsorption on both manganese oxides increased with decreasing pH (from pH 7 to 5), in agreement with surface complexation and ligand exchange mechanisms. The presence of calcium enhanced the rate of NOM adsorption by decreasing the electrostatic repulsion between NOM and Mn sand. Adsorption was also limited by the diffusion of NOM macromolecules through the Mn sand pores. At equilibrium, preferential adsorption of high molecular weight molecules enriched in aromatic moieties was observed for both the synthetic and the natural manganese oxide. Hydrophobic interactions may explain the adsorption of organic matter on manganese oxides. The formation of low molecular weight UV-absorbing molecules was detected with the synthetic birnessite, suggesting that oxidation and reduction processes occur during NOM adsorption. This study provides deeper insight, for both environmental and engineered systems, into the impact of NOM adsorption on the biogeochemical cycle of manganese. |
The Microsoft 2016 conversational speech recognition system | We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination, provide a 20% boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2%, representing an improvement over previously reported results on this benchmark task. |
Zigzag machining surface roughness modelling using evolutionary approach | Milling is one of the common machining methods that cannot be abandoned, especially for machining metallic materials. Cutters with appropriate cutting parameters remove material from the workpiece. Surface roughness has a major influence on both the dimensional accuracy and the quality of the product. A number of cutter path strategies are employed to obtain the required surface quality, and zigzag machining is one of the most appealing cutting processes. Modeling surface roughness with traditional methods often results in inadequate solutions and can be very costly in terms of effort and time. In this research, Genetic Programming (GP) has been employed to derive a surface roughness model from the experimental data. The model achieved an accuracy of 86.43%. To benchmark GP performance, Artificial Neural Network (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) techniques were also applied. The surface roughness model produced by GP not only outperforms the other techniques but also yields more explicit models. The effective parameters can easily be identified from their appearance in the model and can be used to predict surface roughness in the zigzag machining process. |
Factors associated with the incidence of serious adverse events in patients admitted with COPD acute exacerbation | Background: Acute exacerbation of chronic obstructive pulmonary disease (AECOPD) is a common cause of hospitalization. Patient outcome and prognosis following AECOPD are variable. The aim of this study is to identify the factors associated with the incidence of serious adverse events (SAE), defined as need for ICU admission, noninvasive ventilation, death during hospitalization or early readmission, in those patients admitted with AECOPD. Methods: We conducted a retrospective study by reviewing the medical records of all patients admitted with AECOPD in the University Hospital Complex of Santiago de Compostela in 2007 and 2008. To identify variables independently associated with SAE incidence, we conducted a logistic regression including those variables which proved to be significant in the univariate analysis. Results: 757 patients were assessed (mean age 74.8 years, SD 11.26), 77.2% male, and 186 (24.6%) of the patients assessed experienced an SAE. Factors associated with SAE in multivariate analysis were anticholinergic therapy (OR 3.19; 95% CI: 1.16-8.82), oxygen therapy at home (OR 3.72; 95% CI: 1.62-8.57), oxygen saturation at admission (OR 0.93; 95% CI: 0.88-0.99) and serum albumin (OR 0.26; 95% CI: 0.1-0.66). Conclusion: Oxygen therapy at home, anticholinergic therapy as baseline treatment, lower oxygen saturation at admission and lower serum albumin level seem to be associated with a higher incidence of SAE in patients with AECOPD. |
Meningococcal purpura fulminans in children: I. Initial orthopedic management. | BACKGROUND
Purpura fulminans is a rare and extremely severe infection, mostly due to Neisseria meningitidis, that frequently causes early orthopedic lesions. Few studies have reported on the initial surgical management of acute purpura fulminans. The aim of this study is to examine the predictive factors of orthopedic outcome in light of the initial surgical management in children surviving initial resuscitation.
METHODS
Nineteen patients referred to our institution between 1987 and 2005 were managed from the very onset of purpura fulminans. All cases were retrospectively reviewed to collect information on total skin necrosis, vascular insufficiency, gangrene, and total duration of vasopressive treatment.
RESULTS
All patients had multiorgan failure; only one never developed any skin necrosis or ischemia. Eighteen patients lost tissue, leading to 22 skin grafts, including two total skin grafts. There was only one graft failure. Thirteen patients required amputations, representing, in total, 54 fingers, 36 toes, two transmetatarsal amputations, and ten transtibial below-knee amputations, with a mean delay of 4 weeks after onset of the disease. Necrosis seemed to affect mainly the lower limbs, but no predictive factor impacting the orthopedic outcome was identified. We did not perform any fasciotomy or compartment pressure measurement, so as to avoid worsening non-perfusion; nonetheless, our outcomes in this series are comparable to existing series in the literature. V.A.C.(®) therapy could be promising for the management of skin necrosis in this particular context. In patients with general multiorgan failure, great care should be taken not to miss any additional osseous or articular infection, as some patients also develop local osteitis and osteomyelitis that often go undiagnosed.
CONCLUSIONS
We do not advocate very early surgery during the acute phase of purpura fulminans, as it does not change the orthopedic outcome in these children. By performing amputations and skin coverage some time after the acute phase, we obtained similar results to those found in the literature. |
Predictive control for agile semi-autonomous ground vehicles using motion primitives | This paper presents a hierarchical control framework for obstacle avoidance by autonomous and semi-autonomous ground vehicles. The high-level planner is based on motion primitives created from a four-wheel nonlinear dynamic model. Parameterized clothoids and drifting maneuvers are used to improve vehicle agility. The low-level controller tracks the planned trajectory with a nonlinear Model Predictive Controller. The first part of the paper describes the proposed control architecture and methodology. The second part presents simulation and experimental results with an autonomous and semi-autonomous ground vehicle traveling at high speed on an icy surface. |
FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces | With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models. |
Further international adaptation and validation of the Rheumatoid Arthritis Quality of Life (RAQoL) questionnaire | The Rheumatoid Arthritis Quality of Life (RAQoL) questionnaire was developed directly from rheumatoid arthritis (RA) patients in the United Kingdom and the Netherlands to measure quality of life (QoL). Since then, it has become widely used in clinical studies and trials and has been adapted for use in 24 languages. The objective was to develop and validate 11 additional language versions of the RAQoL in US English, Mexican Spanish, Argentinean Spanish, Belgian French, Belgian Flemish, French, Romanian, Czech, Slovakian, Polish and Russian. The language adaptation and validation required three stages: translation, cognitive debriefing interviews and validation survey. The translation process involved a dual-panel methodology (bilingual panel followed by a lay panel). The validation survey tested the psychometric properties of the new scales and included either the Nottingham Health Profile (NHP) or the Health Assessment Questionnaire (HAQ) as comparators. Internal consistency of the new language versions ranged from 0.90 to 0.97 and test–retest reliability from 0.85 to 0.99. RAQoL scores correlated as expected with the HAQ. Correlations with NHP sections were as expected: highest with energy level, pain and physical mobility and lowest with emotional reactions, sleep disturbance, and social isolation. The adaptations exhibited construct validity in their ability to distinguish subgroups of RA patients varying by perceived disease severity and general health. The new language versions of the RAQoL meet the high psychometric standards of the original UK English version. The new adaptations represent valid and reliable tools for measuring QoL in international clinical trials involving RA patients. |
Emerging frameworks for tangible user interfaces | For more than thirty years, people have relied primarily on screen-based text and graphics to interact with computers. Whether the screen is placed on a desk, held in one’s hand, worn on one’s head, or embedded in the physical environment, the screen has cultivated a predominantly visual paradigm of human-computer interaction. In this chapter, we discuss a growing space of interfaces in which physical objects play a central role as both physical representations and controls for digital information. We present an interaction model and key characteristics for such “tangible user interfaces,” and explore these characteristics in a number of interface examples. This discussion supports a newly integrated view of both recent and previous work, and points the way towards new kinds of computationally-mediated interfaces that more seamlessly weave together the physical and digital worlds. |
Image Features Detection, Description and Matching | Feature detection, description and matching are essential components of various computer vision applications; thus they have received considerable attention in the last decades. Several feature detectors and descriptors have been proposed in the literature with a variety of definitions for what kind of points in an image are potentially interesting (i.e., a distinctive attribute). This chapter introduces basic notation and mathematical concepts for detecting and describing image features. Then, it discusses properties of ideal features and gives an overview of various existing detection and description methods. Furthermore, it explains some approaches to feature matching. Finally, the chapter discusses the most widely used techniques for performance evaluation of detection and description algorithms. |
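As one concrete instance of the detect-describe-match pipeline surveyed in this chapter, the sketch below uses OpenCV's ORB detector with brute-force Hamming matching; the function name and parameter values are illustrative choices, not taken from the chapter.

```python
import cv2

def match_features(img1, img2, n_features=500):
    """Detect, describe, and match local features between two grayscale
    images using ORB (binary descriptors)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Hamming distance is the appropriate metric for binary descriptors;
    # crossCheck keeps only mutually-best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return kp1, kp2, sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```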
Eraser: Your Data Won't Be Back | Secure deletion of data from non-volatile storage is a well-recognized problem. While numerous solutions have been proposed, advances in storage technologies have stymied efforts to solve the problem. For instance, SSDs make use of techniques such as wear leveling that involve replication of data; this is in direct opposition to efforts to securely delete sensitive data from storage. We present a technique to provide secure deletion guarantees at file granularity, independent of the characteristics of the underlying storage medium. The approach builds on prior seminal work on cryptographic erasure, encrypting every file on an insecure medium with a unique key that can later be discarded to cryptographically render the data irrecoverable. To make the approach scalable and, therefore, usable on commodity systems, keys are organized in an efficient tree structure where a single master key is confined to a secure store. We describe an implementation of this scheme as a file-aware stackable block device, deployed as a standalone Linux kernel module that does not require modifications to the operating system. Our prototype demonstrates that secure deletion independent of the underlying storage medium can be achieved with comparable overhead to existing full disk encryption implementations. |
Gastrectomy as a secondary surgery for stage IV gastric cancer patients who underwent S-1-based chemotherapy: a multi-institute retrospective study | Current advances in chemotherapy provide opportunities for stage IV gastric cancer patients with distant metastasis to undergo potentially curable resection. There are, however, few data on gastrectomy as a secondary surgery aimed at rendering such patients cancer-free. We investigated stage IV gastric cancer patients who underwent surgery with curative intent after S-1-based chemotherapy between 2000 and 2008. Twenty-eight patients from 12 hospitals were enrolled in this study. Factors indicating that the tumors were incurable included clinical stage T4 in 9 patients, para-aortic node metastasis in 15, peritoneal metastasis in 7, and liver metastasis in 4. Of the 28 laparotomy patients, 26 underwent complete resection with no residual tumor, obtaining a complete resection rate of 92.9%. There were no in-hospital deaths or reoperations. In four patients, the primary tumor showed pathological complete response. The 1-, 3-, and 5-year overall survival rates after secondary gastrectomy were 82.1, 45.9, and 34.4%, respectively, with a median survival time of 29 months. Univariate analysis revealed histological tumor length, clinical depth of tumor invasion, number of metastatic nodes, pathological depth of tumor invasion, and pathological response to be the factors influencing patient survival after secondary surgery. On multivariate analysis, histological tumor length (5.0 cm or larger) was the only significant prognostic factor (relative risk 3.23, P = 0.028). Secondary gastrectomy following S-1-based chemotherapy was a safe and effective treatment for stage IV gastric cancer. Primary tumor size is an indicator for the appropriate selection of patients for this treatment. |
Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks | Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism. |
Discrete Wavelet Transform-Based Satellite Image Resolution Enhancement | Satellite images are being used in many fields of research. One of the major issues of these types of images is their resolution. In this paper, we propose a new satellite image resolution enhancement technique based on the interpolation of the high-frequency subbands obtained by discrete wavelet transform (DWT) and the input image. The proposed resolution enhancement technique uses DWT to decompose the input image into different subbands. Then, the high-frequency subband images and the input low-resolution image have been interpolated, followed by combining all these images to generate a new resolution-enhanced image by using inverse DWT. In order to achieve a sharper image, an intermediate stage for estimating the high-frequency subbands has been proposed. The proposed technique has been tested on satellite benchmark images. The quantitative (peak signal-to-noise ratio and root mean square error) and visual results show the superiority of the proposed technique over the conventional and state-of-the-art image resolution enhancement techniques. |
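A minimal sketch of the pipeline described above, assuming the pywt and scipy libraries; the paper's intermediate stage for estimating the high-frequency subbands is omitted here, and the wavelet choice is an illustrative assumption.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def dwt_enhance(img, wavelet="db1"):
    """2x enhancement: DWT decompose, interpolate the high-frequency
    subbands, use the input image as the low-frequency band, inverse DWT.
    Assumes even image dimensions so subband shapes line up."""
    img = np.asarray(img, dtype=float)
    _, (lh, hl, hh) = pywt.dwt2(img, wavelet)
    # Bring each high-frequency subband up to the input image size.
    lh, hl, hh = (zoom(b, 2, order=3) for b in (lh, hl, hh))
    # Substituting the input image for the LL band keeps detail that
    # plain interpolation of the approximation band would blur away.
    return pywt.idwt2((img, (lh, hl, hh)), wavelet)
```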
The (co-)occurrence of problematic video gaming, substance use, and psychosocial problems in adolescents | AIMS
The current study explored the nature of problematic (addictive) video gaming (PVG) and the association with game type, psychosocial health, and substance use.
METHODS
Data were collected using a paper and pencil survey in the classroom setting. Three samples were aggregated to achieve a total sample of 8478 unique adolescents. Scales included measures of game use, game type, the Video game Addiction Test (VAT), depressive mood, negative self-esteem, loneliness, social anxiety, education performance, and use of cannabis, alcohol and nicotine (smoking).
RESULTS
Findings confirmed that problematic gaming is most common amongst adolescent gamers who play multiplayer online games. Boys (60%) were more likely to play online games than girls (14%), and problematic gamers were more likely to be boys (5%) than girls (1%). Highly problematic gamers showed higher scores on depressive mood, loneliness, social anxiety, and negative self-esteem, and self-reported lower school performance. Boys who used nicotine, alcohol, or cannabis were almost twice as likely to report high PVG as non-users.
CONCLUSIONS
It appears that online gaming in general is not necessarily associated with problems. However, problematic gamers do seem to play online games more often, and a small subgroup of gamers, specifically boys, showed lower psychosocial functioning and lower grades. Moreover, associations with alcohol, nicotine, and cannabis use were found. Problematic gaming thus constitutes a genuine problem for a small subgroup of gamers. The findings encourage further exploration of the role of psychoactive substance use in problematic gaming. |
Smoking prevalence among women in the European community 1950-1990. | The paper reviews trends in tobacco use among women in the European Community (EC) between 1950 and 1990. The data suggest that EC countries occupy different points on what appears to be a common prevalence curve. Southern EC countries are represented in the early phases of this curve, marked out by sharply rising prevalence. In northern EC countries, female smoking prevalence appears to have peaked. Across the EC, the commodification of tobacco use, and the production and promotion of manufactured cigarettes in particular, underlies this prevalence curve. Young women in higher socio-economic groups have led the way into cigarette smoking in both northern and southern Europe, with smoking prevalence declining first among women who are privileged in terms of their education, occupation and income. Because the decline in prevalence has yet to be repeated among women in more disadvantaged circumstances, cigarette smoking among women in the EC is likely to become a habit increasingly linked to low socio-economic status. |
Design and Control of Mobile Manipulation System for Human Symbiotic Humanoid: Hadaly-2 | The objective of this study is to investigate design and control strategies for realizing human-robot symbiosis, and to develop a human symbiotic humanoid robot, Hadaly-2, which can communicate and collaborate with humans. In this paper, the mechanism design methodologies, specifications, and control strategies of the mobile manipulation system of the Hadaly-2 are described. First, mechanism design concepts of the manipulation system for the human symbiotic robot are proposed. Next, two force-controlled anthropomorphic manipulators (WAMIOR and L) and a body-vehicle mechanism are developed in consideration of the design concepts. Then, the communication and collaboration abilities of Hadaly-2 are evaluated by means of several behaviors, such as gesture motion, shaking hands with humans, and block-carrying tasks. The results of the evaluation experiments confirm that the Hadaly-2 can realize efficient interaction and collaboration with humans. |
Two versions of continental holism: Derrida and structuralism | The difficulty of pinning down the philosophical content of structuralism stems from the fact that it operates on an implicit metaphysics; such a metaphysics can best be unfolded by examining Jacques Derrida’s deconstructionist critique of it. The essay argues that both structuralism and Derrida’s critique rely on holistic premises. From an initial externalist definition of structure, structuralism’s metaphysics emerges as a kind of ‘immanent’ holism, similar to the one pursued, in the contemporary analytic panorama, by Donald Davidson. By contrast, Derrida’s deconstructionist critique appears engaged in a ‘quasi-transcendental’ version of holism, which the author analyzes in connection with Martin Heidegger’s notion of Verwindung, or twisted overcoming. |
Neural Networks for Open Domain Targeted Sentiment | Open domain targeted sentiment is the joint information extraction task that finds target mentions together with the sentiment towards each mention in a text corpus. The task is typically modeled as a sequence labeling problem and solved using state-of-the-art labelers such as CRFs. We empirically study the effect of word embeddings and automatic feature combinations on the task by extending a CRF baseline using neural networks, which have demonstrated great potential for sentiment analysis. Results show that the neural model can give better results by significantly increasing the recall. In addition, we propose a novel integration of neural and discrete features, which combines their relative advantages, leading to significantly higher results compared to both baselines. |
Deformed spaces and loop cosmology | The non-singular bouncing solution of loop quantum cosmology is reproduced by a deformed minisuperspace Heisenberg algebra. This algebra is a realization of the Snyder space, is almost unique, and is related to the $\kappa$-Poincar\'e one. Since the sign of the deformation parameter is not fixed, the Friedmann equation of braneworld theory can also be obtained. Moreover, the sign is the only freedom in the picture, and these frameworks are the only ones which can be reproduced by our deformed scheme. A generalized uncertainty principle for loop quantum cosmology is also proposed. |
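For orientation, the two signs of such a quadratic deformation correspond to the standard effective Friedmann equations (textbook forms, stated here as context rather than quoted from the paper): the minus sign gives the non-singular loop-quantum-cosmology bounce at the critical density $\rho_c$, while the plus sign reproduces the Randall-Sundrum braneworld correction, with $\rho_c$ playing the role of twice the brane tension.

```latex
% Effective Friedmann equation with a quadratic density correction:
%   minus sign: LQC bounce (H = 0 when rho = rho_c)
%   plus  sign: braneworld-type correction
H^{2} = \frac{8\pi G}{3}\,\rho\left(1 \mp \frac{\rho}{\rho_{c}}\right)
```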
Burden on the families of patients with schizophrenia: results of the BIOMED I study | The burden, the coping strategies and the social network of a sample of 236 relatives of patients with schizophrenia, living in five European countries, were explored by well-validated assessment instruments. In all centres, relatives experienced higher levels of burden when they had poor coping resources and reduced social support. Relatives in Mediterranean centres, who reported lower levels of social support, were more resigned, and more often used spiritual help as a coping strategy. These data indicate that family burden and coping strategies can be influenced by cultural factors, and suggest that family interventions should also have a social focus, aiming to increase the family's social network and to reduce stigma. |
Q#: A Quantum Programming Language by Microsoft | The multi-paradigm quantum programming language Q# was analysed and used to study and create novel programs able to go beyond the capabilities of any classical program. This language was built by Microsoft to succeed LIQUi|> and is a Domain-Specific Language (DSL) used within Microsoft’s Quantum Development Kit (QDK). The quantum programs run on a quantum simulator which behaves, from the program's perspective, as a real quantum computer would. It uses the .NET Core SDK, allowing for easy creation, building and running of quantum projects via the command line. Initially, the main features and libraries available were studied and experimented with by analysing implementations of the Quantum Teleportation and Deutsch-Jozsa algorithms. Thereafter, an algorithm to solve an arbitrary 2 × 2 matrix system of linear equations was implemented, demonstrating a theoretical quantum speed-up with time complexity of order O(log N) compared to the classical O(N) for sparse matrices. Running the algorithm for a particular matrix achieved results within the range of theoretical predictions. Subsequently, as an extension to the project, concepts within Quantum Game Theory were explored. This led to the Mermin-Peres Magic Square game being successfully simulated in Q#; a game for which no classical winning strategy exists, yet a quantum strategy wins in all possible cases. |
Measured rates of fluoride/metal association correlate with rates of superoxide/metal reactions for Fe(III)EDTA(H2O)- and related complexes. | The effects of 10 paramagnetic metal complexes (Fe(III)EDTA(H2O)-, Fe(III)EDTA(OH)2-, Fe(III)PDTA-, Fe(III)DTPA2-, Fe(III)2O(TTHA)2-, Fe(III)(CN)6(3-), Mn(II)EDTA(H2O)2-, Mn(II)PDTA2-, Mn(II)beta-EDDADP2-, and Mn(II)PO4(-)) on F- ion 19F NMR transverse relaxation rates (R2 = 1/T2) were studied in aqueous solutions as a function of temperature. Consistent with efficient relaxation requiring formation of a metal/F- bond, only the substitution inert complexes Fe(III)(CN)6(3-) and Fe(III)EDTA(OH)2- had no measured effect on T2 relaxation of the F- 19F resonance. For the remaining eight complexes, kinetic parameters (apparent second-order rate constants and activation enthalpies) for metal/F- association were determined from the dependence of the observed relaxation enhancements on complex concentration and temperature. Apparent metal/F- association rate constants for these complexes (k(app,F-)) spanned 5 orders of magnitude. In addition, we measured the rates at which O2*- reacts with Fe(III)PDTA-, Mn(II)EDTA(H2O)2-, Mn(II)PDTA2-, and Mn(II)beta-EDDADP2- by pulse radiolysis. Although no intermediate is observed during the reduction of Fe(III)PDTA- by O2*-, each of the Mn(II) complexes reacts with formation of a transient intermediate presumed to form via ligand exchange. These reactivity patterns are consistent with literature precedents for similar complexes. With these data, both k(app,O2-) and k(app,F-) are available for each of the eight reactive complexes. A plot of log(k(app,O2-)) versus log(k(app,F-)) for these eight showed a linear correlation with a slope approximately 1. This correlation suggests that rapid metal/O2*- reactions of these complexes occur via an inner-sphere mechanism whereas formation of an intermediate coordination complex limits the overall rate. This hypothesis is also supported by the very low rates at which the substitution inert complexes (Fe(III)(CN)6(3-) and Fe(III)EDTA(OH)2-) are reduced by O2*-. These results suggest that F- 19F NMR relaxation can be used to predict the reactivities of other Fe(III) complexes toward reduction by O2*-, a key step in the biological production of reactive oxygen species. |
Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech—two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system [26]. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale. |
The effects of poverty on children. | Although hundreds of studies have documented the association between family poverty and children's health, achievement, and behavior, few measure the effects of the timing, depth, and duration of poverty on children, and many fail to adjust for other family characteristics (for example, female headship, mother's age, and schooling) that may account for much of the observed correlation between poverty and child outcomes. This article focuses on a recent set of studies that explore the relationship between poverty and child outcomes in depth. By and large, this research supports the conclusion that family income has selective but, in some instances, quite substantial effects on child and adolescent well-being. Family income appears to be more strongly related to children's ability and achievement than to their emotional outcomes. Children who live in extreme poverty or who live below the poverty line for multiple years appear, all other things being equal, to suffer the worst outcomes. The timing of poverty also seems to be important for certain child outcomes. Children who experience poverty during their preschool and early school years have lower rates of school completion than children and adolescents who experience poverty only in later years. Although more research is needed on the significance of the timing of poverty on child outcomes, findings to date suggest that interventions during early childhood may be most important in reducing poverty's impact on children. |
Review of Fall Detection Techniques: A Data Availability Perspective | A fall is an abnormal activity that occurs rarely; however, missing to identify falls can have serious health and safety implications on an individual. Due to the rarity of occurrence of falls, there may be insufficient or no training data available for them. Therefore, standard supervised machine learning methods may not be directly applied to handle this problem. In this paper, we present a taxonomy for the study of fall detection from the perspective of availability of fall data. The proposed taxonomy is independent of the type of sensors used and specific feature extraction/selection methods. The taxonomy identifies different categories of classification methods for the study of fall detection based on the availability of their data during training the classifiers. Then, we present a comprehensive literature review within those categories and identify the approach of treating a fall as an abnormal activity to be a plausible research direction. We conclude our paper by discussing several open research problems in the field and pointers for future research. |
Generalized expectancies for internal versus external control of reinforcement. | The effects of reward or reinforcement on preceding behavior depend in part on whether the person perceives the reward as contingent on his own behavior or independent of it. Acquisition and performance differ in situations perceived as determined by skill versus chance. Persons may also differ in generalized expectancies for internal versus external control of reinforcement. This report summarizes several experiments which define group differences in behavior when Ss perceive reinforcement as contingent on their behavior versus chance or experimenter control. The report also describes the development of tests of individual differences in a generalized belief in internal-external control and provides reliability, discriminant validity and normative data for 1 test, along with a description of the results of several studies of construct validity. |
Ensemble of Generative and Discriminative Techniques for Sentiment Analysis of Movie Reviews | Sentiment analysis is a common task in natural language processing that aims to detect the polarity of a text document (typically a consumer review). In the simplest settings, we discriminate only between positive and negative sentiment, turning the task into a standard binary classification problem. We compare several machine learning approaches to this problem, and combine them to achieve a new state of the art. We show how to use standard generative language models for this task, which are slightly complementary to the state-of-the-art techniques. We achieve strong results on a well-known dataset of IMDB movie reviews. Our results are easily reproducible, as we also publish the code needed to repeat the experiments. This should simplify further advances of the state of the art, as other researchers can combine their techniques with ours with little effort. |
Reinforcement Learning with LSTM in Non-Markovian Tasks with Long-Term Dependencies | This paper presents reinforcement learning with a Long Short-Term Memory recurrent neural network: RL-LSTM. Model-free RL-LSTM using Advantage(λ) learning and directed exploration can solve non-Markovian tasks with long-term dependencies between relevant events. This is demonstrated in a T-maze task, as well as in a difficult variation of the pole balancing task. |
Eliciting patients' preferences for elastic compression stocking therapy after deep vein thrombosis: potential for improving compliance. |
ESSENTIALS: Elastic compression stocking (ECS) therapy is used to prevent post-thrombotic syndrome (PTS). We aimed to elicit patient preferences regarding ECS therapy after deep vein thrombosis. The most valued attributes were PTS risk reduction and the ability to put on the ECS independently. Heterogeneous results with respect to education level stress the importance of proper counselling.
SUMMARY
BACKGROUND
Elastic compression stocking (ECS) therapy is used for prevention of post-thrombotic syndrome (PTS) after deep vein thrombosis (DVT). Current evidence on its effectiveness is conflicting. Compliance, a major determinant of the effectiveness of ECS therapy, remained largely ignored in former studies.
OBJECTIVES
To gain insight into preferences regarding ECS therapy in patients after DVT.
PATIENTS/METHODS
A discrete choice experiment was conducted 3 months after DVT in patients enrolled in the IDEAL DVT study, a randomized controlled trial comparing 2 years of ECS therapy with individually tailored duration of ECS therapy for the prevention of PTS. Nine unlabeled, forced-choice sets of two hypothetical types of ECS were presented to each patient. Data were analyzed with multinomial logit models.
RESULTS
The respondent sample consisted of 81% (300/369) of invited patients. The most important determinants of preference were PTS risk reduction and putting on the ECS. Patients were willing to increase the duration of therapy by 1 year if this increased the PTS risk reduction by 10%. Patients accepted a 29% increase in the risk of PTS if they were able to put on the ECS themselves. Preferences were heterogeneous with respect to education level.
CONCLUSIONS
Reduction of the risk of PTS and the ability to put on the ECS without help are the most important characteristics of ECS therapy. Physicians should pay considerable attention to patient education regarding PTS. In addition, patients should be supported in their ability to put on and take off the ECS independently. These rather simple interventions could improve compliance. |
Oscillometric measurement of systolic and diastolic blood pressures validated in a physiologic mathematical model | BACKGROUND
The oscillometric method of measuring blood pressure with an automated cuff yields valid estimates of mean pressure but questionable estimates of systolic and diastolic pressures. Existing algorithms are sensitive to differences in pulse pressure and artery stiffness. Some are closely guarded trade secrets. Accurate extraction of systolic and diastolic pressures from the envelope of cuff pressure oscillations remains an open problem in biomedical engineering.
METHODS
A new analysis of relevant anatomy, physiology and physics reveals the mechanisms underlying the production of cuff pressure oscillations as well as a way to extract systolic and diastolic pressures from the envelope of oscillations in any individual subject. Stiffness characteristics of the compressed artery segment can be extracted from the envelope shape to create an individualized mathematical model. The model is tested with a matrix of possible systolic and diastolic pressure values, and the minimum least squares difference between observed and predicted envelope functions indicates the best fit choices of systolic and diastolic pressure within the test matrix.
RESULTS
The model reproduces realistic cuff pressure oscillations. The regression procedure extracts systolic and diastolic pressures accurately in the face of varying pulse pressure and arterial stiffness. The root mean squared error in extracted systolic and diastolic pressures over a range of challenging test scenarios is 0.3 mmHg.
CONCLUSIONS
A new algorithm based on physics and physiology allows accurate extraction of systolic and diastolic pressures from cuff pressure oscillations in a way that can be validated, criticized, and updated in the public domain. |
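A minimal sketch of the fitting loop described in the Methods, with a purely hypothetical envelope model standing in for the paper's individualized artery-stiffness model: candidate (systolic, diastolic) pairs are scored by the least-squares mismatch between observed and predicted envelopes, and the best pair wins.

```python
import numpy as np

def predict_envelope(cuff_p, sbp, dbp):
    # Hypothetical stand-in: a smooth bump peaking near mean arterial
    # pressure. The paper derives this curve from a physiologic model
    # individualized by the artery stiffness extracted per subject.
    mean_p = dbp + (sbp - dbp) / 3.0
    width = max(sbp - dbp, 1.0)
    return np.exp(-(((cuff_p - mean_p) / (0.5 * width)) ** 2))

def fit_pressures(cuff_p, observed_env, sbp_grid, dbp_grid):
    """Grid search over the test matrix for the minimum least-squares
    difference between observed and predicted envelope functions."""
    best, best_err = None, np.inf
    for sbp in sbp_grid:
        for dbp in dbp_grid:
            if dbp >= sbp:
                continue  # physically inadmissible pair
            err = np.sum((observed_env - predict_envelope(cuff_p, sbp, dbp)) ** 2)
            if err < best_err:
                best, best_err = (sbp, dbp), err
    return best
```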
Face classification using electronic synapses | Conventional hardware platforms consume a huge amount of energy for cognitive learning due to the data movement between the processor and the off-chip memory. Brain-inspired device technologies using analogue weight storage allow cognitive tasks to be completed more efficiently. Here we present an analogue non-volatile resistive memory (an electronic synapse) with foundry-friendly materials. The device shows bidirectional continuous weight modulation behaviour. Grey-scale face classification is experimentally demonstrated using an integrated 1024-cell array with parallel online training. The energy consumption within the analogue synapses for each iteration is 1,000 × (20 ×) lower compared to an implementation using an Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random access memory). The accuracy on test sets is close to the result using a central processing unit. These experimental results consolidate the feasibility of analogue synaptic arrays and pave the way toward building an energy-efficient and large-scale neuromorphic system. |
Mistaking minds and machines: How speech affects dehumanization and anthropomorphism. | Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience, a humanlike voice, affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip) did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. |
CCG Supertagging with a Recurrent Neural Network | Recent work on supertagging using a feedforward neural network achieved significant improvements for CCG supertagging and parsing (Lewis and Steedman, 2014). However, their architecture is limited to considering local contexts and does not naturally model sequences of arbitrary length. In this paper, we show how directly capturing sequence information using a recurrent neural network leads to further accuracy improvements for both supertagging (up to 1.9%) and parsing (up to 1% F1), on CCGBank, Wikipedia and biomedical text. |
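A minimal sketch of the idea, assuming PyTorch (an illustrative choice; a BiLSTM stands in here for the paper's recurrent architecture, and all hyperparameters are invented): a bidirectional recurrent network scores a supertag for every token while conditioning on the whole sentence, unlike a fixed-window feedforward tagger.

```python
import torch.nn as nn

class RNNSupertagger(nn.Module):
    """Embed tokens, run a bidirectional LSTM over the full sentence,
    and predict one supertag per token from the concatenated states."""
    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):              # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))  # (batch, seq_len, 2*hidden)
        return self.out(h)                     # per-token supertag scores
```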
Webbiometrics: User Verification Via Web Interaction | We introduce a biometric trait based on the user behavior extracted from his interaction with a Web page. We propose the integration of this soft biometric trait in a conventional login Internet page to enhance the security of the system. We call this security layer WebBiometrics. This layer monitors the user mouse movements while he clicks his PIN code numbers. The proposed biometric method provides a non-intrusive soft behavioral biometric add-on to enhance on-line security. We describe the functionality of the system, the set of algorithms developed for the verification framework and preliminary experimental results. We also present quantitative measures of security enhancement offered by the introduction of this soft biometric compared to a PIN only based Web access. |
Three-year stability of open-bite correction by 1-piece maxillary osteotomy. | INTRODUCTION
The purpose of this retrospective cephalometric study was to evaluate the long-term vertical stability of anterior open-bite correction by 1-piece Le Fort I osteotomy and rigid fixation.
METHODS
The sample comprised 40 consecutively treated patients from the files of the Department of Orthodontics, University of Oslo, Norway. All subjects had received a 1-piece Le Fort I osteotomy as the only surgical procedure from 1990 through 1998 and were followed for 3 years according to a protocol for data collection. Lateral cephalograms were obtained before surgery and on 5 occasions after surgery.
RESULTS
The mean open bite before surgery was 2.6 mm; at the 3-year follow-up, 35 patients had a positive overbite, and the remaining 5 patients had an open bite between 0.2 and 0.9 mm. Impaction of the posterior maxilla ≥2 mm relapsed on average by 31%, and inferior repositioning of the anterior maxilla ≥2 mm relapsed by 62%. Maxillary vertical skeletal changes during the postsurgery period were compensated for by orthodontic dentoalveolar adaptation. Most of the skeletal relapse occurred during the first 6 months after surgery and always in the direction opposite to the surgical movement. The relative contribution of mandibular and maxillary changes in anterior open-bite closure was approximately 3:1.
CONCLUSIONS
Surgical correction of anterior open bite was generally stable over a 3-year period, and skeletal relapse was counteracted by dentoalveolar compensation. |
Iterative Residual Network for Deep Joint Image Demosaicking and Denoising | Modern digital cameras rely on sequential execution of separate image processing steps to produce realistic images. The first two steps are usually related to denoising and demosaicking, where the former aims to reduce noise from the sensor and the latter converts a series of light intensity readings to color images. Modern approaches try to jointly solve these problems, i.e., joint denoising-demosaicking, which is an inherently ill-posed problem given that two-thirds of the intensity information is missing and the rest is perturbed by noise. While several machine learning systems have been recently introduced to tackle this problem, in this work we propose a novel algorithm which is inspired by powerful classical image regularization methods, large-scale optimization, and deep learning techniques. Consequently, our derived iterative neural network has a transparent and clear interpretation compared to other black-box, data-driven approaches. The extensive comparisons that we report demonstrate the superiority of our proposed network, which outperforms any previous approaches on both noisy and noise-free data across many different datasets using fewer training samples. This improvement in reconstruction quality is attributed to the principled way we design and train our network architecture, which as a result requires fewer trainable parameters than the current state-of-the-art solution. |
Learning the Parameters of Determinantal Point Process Kernels | Determinantal point processes (DPPs) are well-suited for modeling repulsion and have proven useful in applications where diversity is desired. While DPPs have many appealing properties, learning the parameters of a DPP is difficult, as the likelihood is non-convex and is infeasible to compute in many scenarios. Here we propose Bayesian methods for learning the DPP kernel parameters. These methods are applicable in large-scale discrete and continuous DPP settings, even when the likelihood can only be bounded. We demonstrate the utility of our DPP learning methods in studying the progression of diabetic neuropathy based on the spatial distribution of nerve fibers, and in studying human perception of diversity in images. |
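For intuition, here is a minimal discrete-case sketch of Bayesian DPP kernel learning under strong simplifying assumptions: a single-bandwidth RBF L-ensemble kernel, an exact likelihood (tractable only at small scale, unlike the bounded-likelihood settings the paper targets), and random-walk Metropolis-Hastings over the log-bandwidth. All names are illustrative.

```python
import numpy as np

def dpp_log_lik(L, subsets):
    """Exact L-ensemble log-likelihood: log P(A) = logdet(L_A) - logdet(L + I)."""
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return sum(np.linalg.slogdet(L[np.ix_(A, A)])[1] - logdet_norm
               for A in subsets)

def rbf_kernel(X, sigma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mh_sigma(X, subsets, n_iter=2000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings on log(sigma) with a flat prior."""
    rng = np.random.default_rng(seed)
    log_s = 0.0
    cur = dpp_log_lik(rbf_kernel(X, np.exp(log_s)), subsets)
    draws = []
    for _ in range(n_iter):
        prop = log_s + step * rng.standard_normal()
        prop_ll = dpp_log_lik(rbf_kernel(X, np.exp(prop)), subsets)
        if np.log(rng.random()) < prop_ll - cur:  # accept/reject step
            log_s, cur = prop, prop_ll
        draws.append(np.exp(log_s))
    return np.array(draws)
```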
A Scalable Formulation of Probabilistic Linear Discriminant Analysis: Applied to Face Recognition | In this paper, we present a scalable and exact solution for probabilistic linear discriminant analysis (PLDA). PLDA is a probabilistic model that has been shown to provide state-of-the-art performance for both face and speaker recognition. However, it has one major drawback: At training time estimating the latent variables requires the inversion and storage of a matrix whose size grows quadratically with the number of samples for the identity (class). To date, two approaches have been taken to deal with this problem, to 1) use an exact solution that calculates this large matrix and is obviously not scalable with the number of samples or 2) derive a variational approximation to the problem. We present a scalable derivation which is theoretically equivalent to the previous nonscalable solution and thus obviates the need for a variational approximation. Experimentally, we demonstrate the efficacy of our approach in two ways. First, on labeled faces in the wild, we illustrate the equivalence of our scalable implementation with previously published work. Second, on the large Multi-PIE database, we illustrate the gain in performance when using more training samples per identity (class), which is made possible by the proposed scalable formulation of PLDA. |
Mak-Messenger and Finger-Chat, communications technologies to assist in teaching of signed languages to the deaf and hearing | Communications-based learning technology is an area of educational research that seeks to use advanced communication systems to enhance traditional learning environments with tools that allow students and teachers to formulate and exchange ideas. Increasingly, technologies such as instant messaging and chat systems are being used for teaching. We present two communications-based learning systems, Mak-Messenger and Finger-Chat, specifically designed to assist in the teaching of signed languages to deaf and hearing students. By providing these examples of learning technologies specifically tailored to the deaf community, we hope to illustrate how advanced learning technologies may be used to address issues relating to the educational needs of the deaf. |
Multivariate Compressive Sensing for Image Reconstruction in the Wavelet Domain Using Scale Mixture Models | Most wavelet-based reconstruction methods of compressive sensing (CS) are developed under the independence assumption of the wavelet coefficients. However, the wavelet coefficients of images have significant statistical dependencies. Many multivariate prior models for the wavelet coefficients of images have been proposed and successfully applied to image estimation problems. In this paper, the statistical structures of the wavelet coefficients are considered for CS reconstruction of images that are sparse or compressible in the wavelet domain. A multivariate pursuit algorithm (MPA) based on the multivariate models is developed. Several multivariate scale mixture models are used as the prior distributions of MPA. Our method reconstructs the images by modeling the statistical dependencies of the wavelet coefficients in a neighborhood. The proposed algorithm based on these scale mixture models provides superior performance compared with many state-of-the-art compressive sensing reconstruction algorithms. |
Where Do Creative Interactions Come From? The Role of Tie Content and Social Networks | Understanding the determinants of creativity at the individual and organizational level has been the focus of a long history of research in various disciplines from the social sciences, but little attention has been devoted to studying creativity at the dyadic level. Why are some dyadic interactions more likely than others to trigger the generation of novel and useful ideas in organizations? As dyads serve as conduits for both knowledge and social forces, they offer an ideal setting to disentangle the effects of knowledge diversity, tie strength, and network structure on the generation of creative thoughts. This paper not only challenges the current belief that sporadic and distant dyadic relationships (weak ties) foster individual creativity but also argues that diverse and strong ties facilitate the generation of creative ideas. From a knowledge viewpoint, our results suggest that ties that transmit a wide (rather than narrow) set of knowledge domains (within the same tie) favor creative idea generation if exchanges occur with sufficient frequency. From a social perspective, we find that strong ties serve as effective catalysts for the generation of creative ideas when they link actors who are intrinsically motivated to work closely together. Finally, this paper also shows that dyadic network cohesion (i.e., the connections from the focal dyad to common contacts) does not always hinder the generation of creative ideas. Our empirical evidence suggests that when cohesion exceeds its average levels, it becomes detrimental to creative idea generation. Hypotheses are tested in a sociometric study conducted within the development department of a software firm. |
An Algorithm for Cost-Effectively Storing Scientific Datasets with Multiple Service Providers in the Cloud | The proliferation of cloud computing allows scientists to deploy computation and data intensive applications without infrastructure investment, where large generated datasets can be flexibly stored with multiple cloud service providers. Due to the pay-as-you-go model, the total application cost largely depends on the usage of computation, storage and bandwidth resources, and cutting the cost of cloud-based data storage becomes a major concern for deploying scientific applications in the cloud. In this paper, we propose a novel algorithm that can automatically decide whether a generated dataset should be 1) stored in the current cloud, 2) deleted and re-generated whenever reused or 3) transferred to a cheaper cloud service for storage. The algorithm finds the trade-off among computation, storage and bandwidth costs in the cloud, which are the three key factors in the cost of storing generated application datasets with multiple cloud service providers. Simulations conducted with popular cloud service providers' pricing models show that the proposed algorithm is highly cost-effective for use in the cloud. |
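As a rough illustration of the three-way trade-off this abstract describes, the sketch below compares the options for a single dataset under flat pricing. It is a minimal sketch with assumed inputs (per-GB storage and transfer prices, a fixed regeneration cost, an expected number of reuses), not the paper's algorithm, which also accounts for dataset dependencies and provenance.

```python
# Hypothetical cost comparison for one generated dataset; all prices and
# usage estimates are illustrative assumptions, not the paper's model.
def best_storage_strategy(size_gb, months, store_price, cheap_store_price,
                          transfer_price, regen_cost, expected_reuses):
    costs = {
        "store in current cloud": store_price * size_gb * months,
        "delete and re-generate": regen_cost * expected_reuses,
        "transfer to cheaper cloud": transfer_price * size_gb
                                     + cheap_store_price * size_gb * months,
    }
    choice = min(costs, key=costs.get)  # pick the cheapest of the three options
    return choice, costs

# Example: a 200 GB dataset kept for 12 months and reused twice.
print(best_storage_strategy(200, 12, 0.023, 0.010, 0.090, 5.0, 2))
```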
Error and attack vulnerability of temporal networks. | The study of real-world communication systems via complex network models has greatly expanded our understanding of how information flows, even in completely decentralized architectures such as mobile wireless networks. Nonetheless, static network models cannot capture the time-varying aspects and, therefore, various temporal metrics have been introduced. In this paper, we investigate the robustness of time-varying networks under various failures and intelligent attacks. We adopt a methodology to evaluate the impact of such events on the network connectivity by employing temporal metrics in order to select and remove nodes based on how critical they are considered for the network. We also define the temporal robustness range, a new metric that quantifies the disruption caused by an attack strategy to a given temporal network. Our results show that in real-world networks, where some nodes are more dominant than others, temporal connectivity is significantly more affected by intelligent attacks than by random failures. Moreover, different intelligent attack strategies have a similar effect on the robustness: even small subsets of highly connected nodes act as a bottleneck in the temporal information flow, becoming critical weak points of the entire system. Additionally, the same nodes are the most important across a range of different importance metrics, expressing the correlation between highly connected nodes and those that trigger most of the changes in the optimal information spreading. Contrarily, we show that in randomly generated networks, where all the nodes have similar properties, random errors and intelligent attacks exhibit similar behavior. These conclusions may help us in the design of more robust systems and fault-tolerant network architectures. |
Analyzing students’ performance using multi-criteria classification | Education is a key factor for achieving long-term economic progress. During the last decades, higher standards in education have become easier to attain due to the worldwide availability of knowledge and resources. With the emergence of new technology enhanced by data mining, it has become easier to dig into data and extract useful knowledge from it. In this research, we use data analytic techniques applied to real case studies to predict students’ performance from their past academic experience. We introduce a new hybrid classification technique which combines decision trees and fuzzy multi-criteria classification. The technique is used to predict students’ performance based on several criteria such as age, school, address, family size, evaluation in previous grades, and activities. To check the accuracy of the model, our proposed method is compared with other well-known classifiers. This study on existing student data showed that the method is a promising classification tool. |
Multiple cardiovascular complications in a patient with Behcet's disease | Arterial and cardiac involvement in Behcet's disease is a rare but life-threatening complication. The rupture of an arterial aneurysm might result in sudden death. We report a 54-year-old man with an established diagnosis of Behcet's disease who presented with multiple cardiovascular complications that eventually led to his death. He presented with extensive venous occlusions, and sequentially developed right ventricular thrombosis with multiple pulmonary thromboembolisms, and a pulmonary artery aneurysm. We report this unusual sequence of cardiovascular complications in a patient with Behcet's disease. |
Optimization principles and application performance evaluation of a multithreaded GPU using CUDA | GPUs have recently attracted the attention of many application developers as commodity data-parallel coprocessors. The newest generations of GPU architecture provide easier programmability and increased generality while maintaining the tremendous memory bandwidth and computational power of traditional GPUs. This opportunity should redirect efforts in GPGPU research from ad hoc porting of applications to establishing principles and strategies that allow efficient mapping of computation to graphics hardware. In this work we discuss the GeForce 8800 GTX processor's organization, features, and generalized optimization strategies. Key to performance on this platform is using massive multithreading to utilize the large number of cores and hide global memory latency. To achieve this, developers face the challenge of striking the right balance between each thread's resource usage and the number of simultaneously active threads. The resources to manage include the number of registers and the amount of on-chip memory used per thread, the number of threads per multiprocessor, and global memory bandwidth. We also obtain increased performance by reordering accesses to off-chip memory to combine requests to the same or contiguous memory locations and by applying classical optimizations to reduce the number of executed operations. We apply these strategies across a variety of applications and domains and achieve speedups of 10.5X to 457X in kernel codes and of 1.16X to 431X in total application performance. |
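The balance the abstract describes between per-thread resource usage and the number of simultaneously active threads can be sketched as a simple occupancy estimate. The calculator below assumes G80-class per-SM limits (8 blocks, 768 threads, 8192 registers, 16 KB shared memory); these limits are assumptions and should be checked against the hardware documentation.

```python
# Rough active-thread estimate per multiprocessor; the per-SM limits below
# are assumed G80-class values, not authoritative figures.
def estimate_active_threads(threads_per_block, regs_per_thread, smem_per_block,
                            max_blocks=8, max_threads=768,
                            regs_per_sm=8192, smem_per_sm=16384):
    limits = [max_blocks, max_threads // threads_per_block]
    if regs_per_thread:
        limits.append(regs_per_sm // (regs_per_thread * threads_per_block))
    if smem_per_block:
        limits.append(smem_per_sm // smem_per_block)
    blocks = min(limits)                # the binding resource constraint
    return blocks * threads_per_block

# Example: 128-thread blocks using 10 registers/thread and 4 KB shared memory;
# here shared memory is the binding limit (4 blocks, 512 active threads).
print(estimate_active_threads(128, 10, 4096))
```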
A Distortion Rectification Method for Distorted QR Code Images Based on Self-Adapting Structural Element | With the rapid development of Internet of Things and communication technologies, QR code images are widely used in the embedded automatic identification field. However, previous works on the location of QR codes have not considered how to extract the vertexes accurately in geometrically distorted QR code images. In this paper, utilizing the characteristics of QR codes, we propose an effective mathematical morphology method based on a self-adapting structural element to obtain the distorted vertexes precisely and revise the distortion. We evaluate our method on two data sets. Compared with previous works, the experiments indicate that our method achieves a higher recognition rate and is more adaptive for geometrically distorted QR code images. Thus, the proposed method provides a valuable application for embedded systems and mobile terminal devices. |
Juvenile conduct disorder as a risk factor for trauma exposure and posttraumatic stress disorder. | Juvenile conduct disorder (CD) is a well-documented risk factor for posttraumatic stress disorder (PTSD). This study examines the mechanisms underlying this relationship by using data from 3,315 twin pairs in the Vietnam Era Twin Registry. Results indicate the number of conduct disorder symptoms increased risk of trauma exposure and PTSD in a dose-response fashion. This increased risk was mediated in part by the positive association between CD and lifestyle factors and was not due to confounding by shared genetic or familial vulnerability. The findings suggest CD increases risk for trauma exposure and PTSD among male veterans through direct and indirect mechanisms. Veterans who have a history of CD are at high risk for trauma exposure and development of PTSD. |
Waveform Modeling Using Stacked Dilated Convolutional Neural Networks for Speech Bandwidth Extension | This paper presents a waveform modeling and generation method for speech bandwidth extension (BWE) using stacked dilated convolutional neural networks (CNNs) with causal or non-causal convolutional layers. Such dilated CNNs describe the predictive distribution for each wideband or high-frequency speech sample conditioned on the input narrowband speech samples. Distinguished from conventional frame-based BWE approaches, the proposed methods can model the speech waveforms directly and therefore avert the spectral conversion and phase estimation problems. Experimental results prove that the BWE methods proposed in this paper can achieve better performance than the state-of-the-art frame-based approach utilizing recurrent neural networks (RNNs) incorporating long short-term memory (LSTM) cells in subjective preference tests. |
Capacity Analysis of Cooperative Relaying Systems Using Non-Orthogonal Multiple Access | In this letter, we propose a cooperative relaying system using non-orthogonal multiple access (NOMA) to improve spectral efficiency. The achievable average rate of the proposed system is analyzed for independent Rayleigh fading channels, and its asymptotic expression is also provided. In addition, a suboptimal power allocation scheme for NOMA used at the source is proposed. |
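For context, the per-user rates in a generic two-user downlink NOMA system take the following textbook form; this is an assumed background formulation, not necessarily the letter's exact cooperative-relaying model. Here ρ is the transmit SNR, h_1 and h_2 are the channel gains of the strong and weak user, and a_1 + a_2 = 1 with a_2 > a_1 are the power-allocation fractions.

```latex
% Generic two-user downlink NOMA rates (assumed textbook form, not the
% letter's derivation). User 1 removes user 2's signal by successive
% interference cancellation; user 2 treats user 1's signal as noise.
R_1 = \log_2\!\left(1 + a_1 \rho |h_1|^2\right), \qquad
R_2 = \log_2\!\left(1 + \frac{a_2 \rho |h_2|^2}{a_1 \rho |h_2|^2 + 1}\right)
```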
Hydrogenated Amorphous Silicon Thin-Film Transistor-Based Optical Pixel Sensor With High Sensitivity Under Ambient Illumination | This letter develops an optical pixel sensor that is based on hydrogenated amorphous silicon thin-film transistors. Exploiting the photosensitivity of the photo TFTs and combining different color filters, the proposed sensor can sense an optical input signal of a specified color under high ambient illumination conditions. Measurements indicate that the proposed pixel sensor effectively reacts to the optical input signal under light intensities from 873 to 12,910 lux, proving that the sensor is highly reliable under strong ambient illumination. |
AURA: Preliminary Technical Results | This paper presents AURA, a programming language for access control that treats ordinary programming constructs (e.g., integers and recursive functions) and authorization logic constructs (e.g., principals and access control policies) in a uniform way. AURA is based on polymorphic DCC and uses dependent types to permit assertions that refer directly to AURA values while keeping computation out of the assertion level to ensure tractability. The main technical results of this paper include fully mechanically verified proofs of the decidability and soundness for AURA's type system, and a prototype typechecker and interpreter. |
The neural circuit of orexin (hypocretin): maintaining sleep and wakefulness | Sleep and wakefulness are regulated to occur at appropriate times that are in accordance with our internal and external environments. Avoiding danger and finding food, which are life-essential activities that are regulated by emotion, reward and energy balance, require vigilance and therefore, by definition, wakefulness. The orexin (hypocretin) system regulates sleep and wakefulness through interactions with systems that regulate emotion, reward and energy homeostasis. |
Exploratory study comparing the metabolic toxicities of a lopinavir/ritonavir plus saquinavir dual protease inhibitor regimen versus a lopinavir/ritonavir plus zidovudine/lamivudine nucleoside regimen. | OBJECTIVES
To assess the safety, efficacy and metabolic toxicity of lopinavir/ritonavir + saquinavir or zidovudine/lamivudine and evaluate the pharmacokinetics of lopinavir/ritonavir + saquinavir.
METHODS
HIV-1-infected, antiretroviral-naive subjects were randomized to lopinavir/ritonavir (400/100 mg) twice daily + saquinavir (800 mg) or zidovudine/lamivudine (150/300 mg) in a Phase II, 48 week study. Subjects receiving lopinavir/ritonavir + saquinavir initiated escalating doses of saquinavir (400, 600 and 800 mg) weekly for 3 weeks.
RESULTS
By intent-to-treat (non-completer = failure) analysis, 10/16 (63%) lopinavir/ritonavir + saquinavir-treated and 7/14 (50%) lopinavir/ritonavir + zidovudine/lamivudine-treated subjects achieved plasma HIV-1 RNA <50 copies/mL (P=0.713) at week 48. Safety, tolerability, metabolic changes and truncal fat increases were similar between groups. Small decreases in the lower extremity fat in the zidovudine/lamivudine group (-6%) and a statistically significant increase in the lower extremity fat in the saquinavir group (+19%) were observed. Lopinavir/ritonavir co-administered with saquinavir 600 or 800 mg twice daily produced saquinavir concentrations similar to those previously reported for saquinavir/ritonavir 1000/100 mg twice daily.
CONCLUSIONS
Treatment regimens had similar efficacy and tolerability. Metabolic parameters suggested lipoatrophy in the zidovudine/lamivudine treatment group. Saquinavir 600 and 800 mg twice daily produced concentrations similar to those previously reported for saquinavir/ritonavir 1000/100 mg twice daily. |
On the Resistance of Neural Nets to Label Noise | We investigate the behavior of convolutional neural networks (CNNs) in the presence of label noise. We show empirically that CNN prediction for a given test sample depends on the labels of the training samples in its local neighborhood. This is similar to the way that the K-nearest neighbors (K-NN) classifier works. With this understanding, we derive an analytical expression for the expected accuracy of a K-NN, and hence a CNN, classifier for any level of noise. In particular, we show that K-NN and CNN classifiers are resistant to label noise that is randomly spread across the training set, but are very sensitive to label noise that is concentrated. Experiments on real datasets validate our analytical expression by showing that it matches the empirical results for varying degrees of label noise. |
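To make the flavor of such an expression concrete, the sketch below computes the chance that a majority of the k nearest neighbors still carry the clean label when binary labels are flipped independently with probability eps. This is a simplified binary, odd-k stand-in for the kind of expression the abstract refers to, not the paper's exact formula.

```python
from math import comb

# Simplified illustration: probability that a majority vote over k neighbors
# survives independent label flips with rate eps (binary labels, odd k).
def majority_clean_prob(k, eps):
    return sum(comb(k, m) * (1 - eps) ** m * eps ** (k - m)
               for m in range(k // 2 + 1, k + 1))

# Randomly spread noise degrades accuracy gracefully: at eps = 0.2 and k = 7,
# the majority label is still clean about 97% of the time.
print(round(majority_clean_prob(7, 0.2), 3))
```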
Penile appearance, lumps and bumps. | BACKGROUND
Even after a thorough examination it can be difficult to distinguish a normal penile anatomical variant from pathology needing treatment.
OBJECTIVE
This article aims to assist diagnosis by outlining a series of common penile anatomical variants and comparing them to common pathological conditions.
DISCUSSION
The problems considered include pearly penile papules, penile sebaceous glands (Fordyce spots), Tyson glands, angiokeratomas of the scrotum, lymphocoele, penile warts, molluscum contagiosum, folliculitis and scabies. |
Attention-based Belief or Disbelief Feature Extraction for Dependency Parsing | Existing neural dependency parsers usually encode each word in a sentence with bi-directional LSTMs, and estimate the score of an arc from the LSTM representations of the head and the modifier, possibly missing relevant context information for the arc being considered. In this study, we propose a neural feature extraction method that learns to extract arc-specific features. We apply a neural network-based attention method to collect evidence for and against each possible head-modifier pair, with which our model computes certainty scores of belief and disbelief, and determines the final arc score by subtracting the score of disbelief from that of belief. By explicitly introducing these two kinds of evidence, the arc candidates can compete against each other based on more relevant information, especially in cases where they share the same head or modifier. This makes it possible to better discriminate between two or more competing arcs by presenting their rivals (disbelief evidence). Experiments on various datasets show that our arc-specific feature extraction mechanism significantly improves the performance of bi-directional LSTM-based models by explicitly modeling long-distance dependencies. For both English and Chinese, the proposed model achieves higher accuracy on the dependency parsing task than most existing neural attention-based models. |
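A toy version of the belief/disbelief scoring idea can be written in a few lines: two attention heads pool evidence for and against an arc, and the arc score is their difference. The vector shapes and the bilinear parameterization below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Toy belief/disbelief arc scorer. context: (n, d) vectors for the sentence;
# W_belief / W_disbelief: (d, 2d) evidence matrices (assumed shapes).
def pooled_evidence(context, W, query):
    e = context @ (W @ query)                # evidence score per context word
    a = np.exp(e - e.max()); a /= a.sum()    # attention weights
    return float(a @ e)                      # attention-pooled certainty

def arc_score(head_vec, mod_vec, context, W_belief, W_disbelief):
    q = np.concatenate([head_vec, mod_vec])
    return (pooled_evidence(context, W_belief, q)
            - pooled_evidence(context, W_disbelief, q))  # belief minus disbelief

# Example with random toy parameters.
rng = np.random.default_rng(0)
d, n = 4, 6
ctx = rng.normal(size=(n, d))
print(arc_score(ctx[1], ctx[3], ctx,
                rng.normal(size=(d, 2 * d)), rng.normal(size=(d, 2 * d))))
```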
Can fiberoptic bronchoscopy be applied to critically ill patients treated with noninvasive ventilation for acute respiratory distress syndrome? Prospective observational study | BACKGROUND
Noninvasive ventilation (NIV) is a cornerstone of the treatment of acute respiratory failure of various etiologies. The use of NIV in mild-to-moderate acute respiratory distress syndrome (ARDS) patients (PaO2/FiO2 > 150) is debated. These patients often have comorbidities that increase the risk of bronchoscopy-related complications. The primary outcome of this prospective observational study was to evaluate the feasibility and safety of fiberoptic bronchoscopy (FOB), and its contribution to diagnosis and/or modification of the ongoing treatment, in patients with ARDS treated with NIV.
METHODS
ARDS patients treated with NIV who required FOB as a diagnostic or therapeutic procedure were included in the study. Intensive care ventilators or other dedicated NIV ventilators were used. NIV was applied via a simple oro-nasal mask or full-face mask. Pressure support or inspiratory positive airway pressure (IPAP) and external positive end expiratory pressure (PEEP) or expiratory positive airway pressure (EPAP) levels were titrated to achieve an expiratory tidal volume of 8 to 10 ml/kg of ideal body weight, SpO2 > 90 % and a respiratory rate below 25/min.
RESULTS
Twenty-eight subjects (mean age 63.3 ± 15.9 years, 15 men, 13 women, PaO2/FiO2 ratio 145 ± 50.1 at admission) were included in the study. Overall the procedure was well tolerated, with only 5 (17.9 %) patients showing minor complications. There was no impairment in arterial blood gas or cardiopulmonary parameters after FOB. The PaO2/FiO2 ratio increased from 132.2 ± 49.8 to 172.9 ± 63.2 (p = 0.001). No patient was intubated within 2 h after the bronchoscopy. 10.7, 32.1 and 39.3 % of the patients required invasive mechanical ventilation after 8 h, 24 h and 48 h, respectively. Bronchoscopy provided a diagnosis in 27 (96.4 %) patients. Appropriate treatment was decided according to the results of the bronchoscopic sampling in 20 (71.4 %) patients.
CONCLUSION
FOB under NIV can be considered a feasible tool for diagnosis and a guide for treatment of patients with ARDS treated via NIV in intensive care units. However, FOB-related life-threatening complications in severe hypoxemia should not be forgotten. Furthermore, controlled studies involving larger series of homogeneous ARDS patients undergoing FOB under NIV are needed to confirm these preliminary findings. |
Everything you always wanted to know about SEM (structural equations modeling) but were afraid to ask | This article is intended to serve as a primer for structural equations models for the behavioral researcher. The technique is not mysterious—it is a natural extension of factor analysis and regression. The measurement part of a structural equations model is essentially a confirmatory factor analysis, and the structural part of the model is like a regression but vastly more flexible in the types of theoretical models that may be tested. The models and notation are introduced and the syntax is provided to replicate the analyses in the paper. Part II of this article will appear in the next issue of the Journal of Consumer Psychology, and it covers advanced issues, including fit indices, sample size, moderators, longitudinal data, mediation, and so forth. © 2009 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved. Structural equations modeling (SEM) is an important tool for marketing researchers. Our top journals contain numerous articles with SEM analyses of data as varied as B2B relationship marketing studies (e.g., Anderson and Narus, 1990; Selnes and Sallis, 2003; Steenkamp and Baumgartner, 2000) or B2C customer satisfaction surveys (e.g., Luo and Bhattacharya, 2006). Survey data from studies such as these are a natural fit for SEM, but when SEM is considered as a natural extension of regression (to be demonstrated shortly), it should be clear that SEM is a statistical tool, orthogonal to the substantive domain of data on which it is implemented. Thus, SEM may also be applied to secondary databases (e.g., behavioral CRM or loyalty program data) or data collected in experimental settings (e.g., measures of memory, attitudes, and intentions), etc. For example, it is not unusual for an experimental consumer psychologist to inquire about mediation in an attempt to clarify the theoretical processes by which some phenomenon occurs (e.g., Iacobucci, 2008). If mediation clarifies the conceptual picture somewhat, with the insertion of just one new construct—the mediator—imagine how much richer the theorizing might be if researchers tried to formulate and test even more complex nomological networks. The Journal of Consumer Psychology presents this article to encourage more frequent and knowledgeable use of SEMs. We begin with an overview of the SEM approach: (1) we discuss the motivation underlying SEMs and a perennial cautionary concern regarding the interpretation of causality, (2) we review the source models of factor analysis and path analysis, and (3) we combine these to develop the full SEM model, and describe the entirety of parameters that may be estimated. Software syntax is introduced to enhance the likelihood of readers' trial and adoption. |
On spiking neural P systems | This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented. The analysis of the diagram cycles codifies invariants formulae which enable us to establish the soundness and completeness of the system with respect to the problem it tries to resolve. We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata. |
Enhancing Knowledge Management in Online Collaborative Learning | This study aims to explore two crucial aspects of collaborative work and learning: on the one hand, the importance of enabling collaborative learning applications to capture and structure the information generated by group activity and, on the other hand, to extract the relevant knowledge in order to provide learners and tutors with efficient awareness, feedback and support as regards group performance and collaboration. To this end, in this paper we first propose a conceptual model for data analysis and management that identifies and classifies the many kinds of indicators that describe collaboration and learning into high-level aspects of collaboration. Then, we provide a computational platform that, at a first step, collects and classifies both the event information generated asynchronously from the users’ actions and the labeled dialogues from the synchronous collaboration according to these indicators. This information is then analyzed in next steps to eventually extract and present to participants the relevant knowledge about the collaboration. The ultimate aim of this platform is to efficiently embed information and knowledge into collaborative learning applications. We eventually suggest a generalization of our approach to be used in diverse collaborative learning situations and domains. |
A FRAMEWORK OF INFORMATION SECURITY CULTURE CHANGE | Establishing an information security culture within an organization may involve transforming how employees interact with information assets, which may be met with resistance, fear or confusion. Change management skills can help organization members adapt smoothly to the new culture. The use of change management in information security culture has rarely been investigated in the literature, and very few models have been offered. This paper reviews the available change management models that have been used in information security management. It then integrates a set of change management principles proposed in the literature and combines them into a comprehensive multistep framework that supports and guides the transition in information security culture change within organizations. Moreover, the principles form the basis for suggesting appropriate guidelines to support the effective implementation of change in information security culture. The framework provides guidance to information security professionals and academic researchers in taking proactive steps and measures to facilitate the culture change. |
Motivation and emotion as mediators in multimedia learning | Against the background of Moreno’s “cognitive-affective theory of learning with media” (CATLM) (Moreno, 2006), three papers on cognitive and affective processes in learning with multimedia are discussed in this commentary. The papers provide valuable insights in how cognitive processing and learning results can be affected by constructs such as “situational interest”, “positive emotions”, or “confusion”, and they suggest questions for further research in this field. 2013 Elsevier Ltd. All rights reserved. |
Adaptive feedback active noise control headset: implementation, evaluation and its extensions | In this paper, we present design and real-time implementation of a single-channel adaptive feedback active noise control (AFANC) headset for audio and communication applications. Several important design and implementation considerations, such as the ideal position of error microphone, training signal used, selection of adaptive algorithms and structures will be addressed in this paper. Real-time measurements and comparisons are also carried out with the latest commercial headset to evaluate its performance. In addition, several new extensions to the AFANC headset are described and evaluated. |
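The adaptive algorithm at the heart of most ANC systems is some variant of filtered-x LMS (FxLMS). The sketch below shows the core update in a simplified feedforward form with a known secondary path; a feedback AFANC headset, as in this paper, would additionally synthesize its own reference signal from the error microphone and a secondary-path model, which is omitted here.

```python
import numpy as np

# Simplified FxLMS loop (feedforward form, secondary path assumed known).
# x: reference signal, d: disturbance at the error mic, sec_path: secondary-
# path impulse response. A feedback AFANC system would synthesize x internally.
def fxlms(x, d, sec_path, n_taps=32, mu=0.005):
    w = np.zeros(n_taps)                      # adaptive filter weights
    xf = np.convolve(x, sec_path)[:len(x)]    # filtered reference signal
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        y = w @ x[n - n_taps:n][::-1]         # anti-noise sample
        e[n] = d[n] - y                       # residual at the error mic (simplified)
        w += mu * e[n] * xf[n - n_taps:n][::-1]  # filtered-x weight update
    return w, e
```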
Kinematic Observers for Articulated Rovers | A state estimator design is presented for a Mars rover prototype. Odometry estimates are obtained by utilizing the full kinematics of the vehicle, including the nonlinear internal kinematics of the rover rocker-bogey mechanism as well as the contact kinematics between the wheels and the ground. Additional sensing using gyroscopes, accelerometers and visual sensors allows for robust rover motion state estimation. Simulation as well as experimental results are presented to illustrate the estimator operation. |
Environmental fate of pharmaceuticals in water/sediment systems. | In recent years there has been growing interest in the occurrence and fate of pharmaceuticals in the aquatic environment. Nevertheless, few data are available covering the fate of pharmaceuticals in the water/sediment compartment. In this study, the environmental fate of 10 selected pharmaceuticals and pharmaceutical metabolites was investigated in water/sediment systems including both the analysis of water and sediment. The experiments covered the application of four 14C-labeled pharmaceuticals (diazepam, ibuprofen, iopromide, and paracetamol) for which radio-TLC analysis was used as well as six nonlabeled compounds (carbamazepine, clofibric acid, 10,11-dihydro-10,11-dihydroxycarbamazepine, 2-hydroxyibuprofen, ivermectin, and oxazepam), which were analyzed via LC-tandem MS. Ibuprofen, 2-hydroxyibuprofen, and paracetamol displayed a low persistence with DT50 values in the water/sediment system < or =20 d. The sediment played a key role in the elimination of paracetamol due to the rapid and extensive formation of bound residues. A moderate persistence was found for ivermectin and oxazepam with DT50 values of 15 and 54 d, respectively. Iopromide, for which no corresponding DT50 values could be calculated, also exhibited a moderate persistence and was transformed into at least four transformation products. For diazepam, carbamazepine, 10,11-dihydro-10,11-dihydroxycarbamazepine, and clofibric acid, system DT90 values of >365 d were found, which demonstrates their high persistence in the water/sediment system. An elevated level of sorption onto the sediment was observed for ivermectin, diazepam, oxazepam, and carbamazepine. Respective Koc values calculated from the experimental data ranged from 1172 L x kg(-1) for ivermectin down to 83 L x kg(-1) for carbamazepine. |
FPGA implementation of image steganography using Haar DWT and modified LSB techniques | Security plays an important role in communication applications requiring secure data transfer. Image steganography is one of the most reliable techniques for hiding an image (hidden) inside another image (cover) in such a way that only the cover image is visible. In this paper, frequency-domain image steganography using the Haar DWT and a modified LSB technique is proposed. The proposed approach uses the DWT to convert spatial-domain information to frequency-domain information. The LL band is used for the subsequent steganographic process. The image is decoded using the inverse LSB operation. Since only the LL band is used for encoding and decoding, the memory requirement of the design is low for hardware implementation. This also increases the operating frequency of the architecture. The proposed technique obtains high PSNR for both the stego and the recovered hidden image. |
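A software model of the embedding step can clarify the data flow before an FPGA implementation. The sketch below uses PyWavelets for the Haar DWT and substitutes secret bits into the LSBs of rounded LL coefficients; the rounding and capacity handling are simplifying assumptions, and the paper's modified LSB scheme may differ in detail.

```python
import numpy as np
import pywt

# Toy Haar-DWT + LSB embedding: hide a bit string in the LL band of a cover
# image. Rounding LL to integers is a simplification of the paper's scheme.
def embed(cover, secret_bits):
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    flat = np.rint(LL).astype(np.int64).ravel()
    bits = np.asarray(secret_bits, dtype=np.int64)
    flat[:bits.size] = (flat[:bits.size] & ~1) | bits    # LSB substitution
    LL_stego = flat.reshape(LL.shape).astype(float)
    return pywt.idwt2((LL_stego, (LH, HL, HH)), 'haar')  # stego image

cover = np.random.randint(0, 256, (8, 8))
stego = embed(cover, [1, 0, 1, 1])
print(np.abs(stego - cover).max())   # distortion stays small
```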
External validation of a prognostic model predicting overall survival in metastatic castrate-resistant prostate cancer patients treated with abiraterone. | A prognostic model was derived from the population of the COU-AA-301 phase 3 trial for metastatic castrate-resistant prostate cancer patients treated with abiraterone after docetaxel, and it stratifies patients into three risk groups based on clinical parameters. We validated this model in an independent cohort of patients treated with abiraterone after docetaxel outside a clinical trial (group A; n=94) and explored its utility in patients treated with abiraterone in the prechemotherapy setting (group B; n=64). For group A, median overall survival (mOS) was significantly different across the three prognostic groups (good: n=39, mOS: 21.8 mo; intermediate: n=44, mOS: 10.6 mo; poor: n=7, mOS: 6.8 mo; p<0.001; area under the curve [AUC]: 0.71). Analysis of group B confirmed the ability of the model to prognosticate for survival in the prechemotherapy setting: (good: n=44, mOS: 45.6 mo; intermediate or poor: n=20, mOS: 34.5 mo; p=0.042; AUC: 0.61). These results serve to validate the prognostic model in an independent population treated with abiraterone after docetaxel and support clinical implementation of the score. Calibration of the model was poorer in patients receiving abiraterone prechemotherapy. Prospective evaluation of this model in clinical trials is needed. |
Does low-dose seretide reverse chronic obstructive pulmonary disease and are the benefits sustained over time? An open-label Swedish crossover cohort study between 1999 and 2005. | Chronic obstructive pulmonary disease (COPD) still poses a formidable challenge to patients and clinicians alike. A fixed-dose dry powder combination inhaler, Seretide/Advair, containing salmeterol and fluticasone, is licensed in the European Community for the treatment of moderate to severe COPD in the strength of 50/500 microg twice daily (BID). Several studies have investigated the effects of this combination and show improved forced expiratory volume in 1 s (FEV(1)), quality of life, and a decrease of exacerbations. Most of the studies have run for less than 1 year. The aim of this investigator-initiated, independent study was to elucidate if the combination containing 50 microg of salmeterol and 250 microg of fluticasone BID could be shown to have the same beneficial effect as the higher dosage, and if the effect could be sustained over time. |
Downlink and Uplink Cell Association With Traditional Macrocells and Millimeter Wave Small Cells | Millimeter wave (mmWave) links will offer high capacity but are poor at penetrating into or diffracting around solid objects. Thus, we consider a hybrid cellular network with traditional sub-6 GHz macrocells coexisting with denser mmWave small cells, where a mobile user can connect to either opportunistically. We develop a general analytical model to characterize and derive the uplink and downlink cell association in terms of the signal-to-interference-plus-noise ratio (SINR) and rate coverage probabilities in such a mixed deployment. We offer extensive validation of these analytical results (which rely on several simplifying assumptions) with simulation results. Using the analytical results, different decoupled uplink and downlink cell association strategies are investigated and their superiority is shown compared with the traditional coupled approach. Finally, small cell biasing in mmWave is studied, and we show that unprecedented biasing values are desirable due to the wide bandwidth. |
The Effectiveness of Online Shopping Characteristics and Well-Designed Websites on Satisfaction | Much research has been done to understand the motivations of consumers to choose among online retailers and the retailer factors driving customer satisfaction (e.g., Kim et al. 2009; Kotha et al. 2004; Pan et al. 2002; Qu et al. 2008; Smith et al. 2000; Wolfinbarger and Gilly 2003). Concentrating on e-tailing service quality, Wolfinbarger and Gilly (2003) argue that four factors—website design, fulfillment/reliability, privacy/security, and customer service—strongly predict customer satisfaction. Kotha et al. (2004) study the role of online buying experience as a competitive advantage along five dimensions: website usability, customer confidence in the web business, the selection of goods and services on the site, the effectiveness of relationship services such as virtual community building and site personalization, and the extent of price leadership. They conclude that website usability and product selection can be easily competed away via imitation, while superior customer service can lead to a sustainable competitive advantage. Devaraj et al. (2002) find that the usefulness and ease-of-use of online shopping, together with high service quality, are factors affecting consumer satisfaction and, subsequently, their channel preference. Price can also play a role in customer satisfaction. Because online stores are only a mouse click away, many studies have argued that price is an important factor in a customer’s decision-making process (Lee and Overby 2004). Using BizRate data, Jiang and Rosenbloom (2005) find that after-delivery satisfaction and price perception have a stronger impact on customer satisfaction than at-checkout satisfaction. Combining the aforementioned studies while integrating their similar dimensions, we review three retailer characteristics, namely website design, customer service, and pricing, and we provide theoretical background for this research. |
GEOCHEMICAL CHARACTERISTICS OF ZAOZIGOU GOLD DEPOSIT AND ITS SIGNIFICANCE | The Zaozigou gold mine is located in the northern margin of the West Qinling orogenic belt, and its reserves have reached a super-large scale through exploration in recent years. The geochemical characteristics of rocks in the mine reveal that the magmatic rocks of the mine share the characteristics of Himalayan granite. The paper analyzes the relation between the magmatic environment and gold metallogenesis, and proposes that the source of the gold was crustal material under a crustal-thickening mechanism related to Himalayan granite. The coupled enrichment of magmatic hydrothermal solutions, structures and country rock is likely the main factor driving the accumulation of the gold deposit. |
Religious belief (not necessarily) embedded in basic trust and receptivity | Questioning Erikson (1965, 1968) and Rumke (1949), the first aim of this research was to relate a sense of basic trust to various approaches to religion. A second aim was to explore a religious coping attitude of receptivity, taking a more distant view of the problem situation in question. This study explores whether the relation between basic trust and receptivity on the one hand and religiosity on the other depends on the way people approach religion. Wulff (1991, 1997) identified four approaches to religion, which can be located in a two-dimensional space along the dimensions of inclusion versus exclusion of transcendence and literal versus symbolic. Results from a sample of adults suggest that second naivete, as a measure of symbolic belief, relates positively to basic trust, whereas orthodoxy, as a measure of literal belief, does not. Erikson's and Rumke's statement thus depends on the approach to religion. |
Sample size determination. | Scientists who use animals in research must justify the number of animals to be used, and committees that review proposals to use animals in research must review this justification to ensure the appropriateness of the number of animals to be used. This article discusses when the number of animals to be used can best be estimated from previous experience and when a simple power and sample size calculation should be performed. Even complicated experimental designs requiring sophisticated statistical models for analysis can usually be simplified to a single key or critical question so that simple formulae can be used to estimate the required sample size. Approaches to sample size estimation for various types of hypotheses are described, and equations are provided in the Appendix. Several web sites are cited for more information and for performing actual calculations. |
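As one standard instance of the simple formulae the article refers to (the exact equations are in its appendix), the per-group sample size for a two-sided, two-sample comparison of means can be computed as below; the effect size delta and standard deviation sigma are user-supplied assumptions.

```python
from math import ceil
from scipy.stats import norm

# Per-group n for a two-sided two-sample z-approximation:
# n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2
def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * (sigma * (z_alpha + z_beta) / delta) ** 2)

# Example: detect a difference of 1 SD with 80% power at alpha = 0.05.
print(n_per_group(delta=1.0, sigma=1.0))   # about 16 animals per group
```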
Algorithmic Framework for Model-based Reinforcement Learning with Theoretical Guarantees | Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplifications gives a variant of model-based RL algorithms, Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves state-of-the-art performance when only one million or fewer samples are permitted on a range of continuous control benchmark tasks. |
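The lower-bound structure described in the abstract can be written schematically as follows; the notation is an assumed reading (the value of policy π under the true dynamics M* is bounded below by its value under the estimated model M̂ minus a discrepancy term D), not a verbatim reproduction of the paper's bound:

```latex
% Schematic form of the lower bound: the true-environment value is bounded
% below by the model value minus a model-error (discrepancy) penalty; the
% meta-algorithm maximizes the right-hand side jointly over pi and \hat{M}.
V^{\pi}(M^{\star}) \;\ge\; V^{\pi}(\widehat{M}) \;-\; D\bigl(\widehat{M}, \pi\bigr),
\qquad
(\pi_{t+1}, \widehat{M}_{t+1}) \in \arg\max_{\pi,\,\widehat{M}}
\Bigl[\, V^{\pi}(\widehat{M}) - D\bigl(\widehat{M}, \pi\bigr) \Bigr]
```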
Coupled synthesis and self-assembly of nanoparticles to give structures with controlled organization | Colloidal inorganic nanoparticles have size-dependent optical, optoelectronic and material properties that are expected to lead to superstructures with a range of practical applications. Discrete nanoparticles with controlled chemical composition and size distribution are readily synthesized using reverse micelles and microemulsions as confined reaction media, but their assembly into well-defined superstructures amenable to practical use remains a difficult and demanding task. This usually requires the initial synthesis of spherical nanoparticles, followed by further processing such as solvent evaporation, molecular cross-linking or template-patterning. Here we report that the interfacial activity of reverse micelles and microemulsions can be exploited to couple nanoparticle synthesis and self-assembly over a range of length scales to produce materials with complex organization arising from the interdigitation of surfactant molecules attached to specific nanoparticle crystal faces. We demonstrate this principle by producing three different barium chromate nanostructures—linear chains, rectangular superlattices and long filaments—as a function of reactant molar ratio, which in turn is controlled by fusing reverse micelles and microemulsion droplets containing fixed concentrations of barium and chromate ions, respectively. If suitable soluble precursors and amphiphiles with headgroups complementary to the crystal surface of the nanoparticle target are available, it should be possible to extend our approach to the facile production of one-dimensional ‘wires’ and higher-order colloidal architectures made of metals and semiconductors. |
Copeptin Predicts Mortality in Critically Ill Patients | BACKGROUND
Critically ill patients admitted to a medical intensive care unit exhibit a high mortality rate irrespective of the cause of admission. Besides its role in fluid and electrolyte balance, vasopressin has been described as a stress hormone. Copeptin, the C-terminal portion of provasopressin, mirrors vasopressin levels and has been described as a reliable biomarker of the individual's stress level, and it has been associated with outcome in various disease entities. The aim of this study was to analyze whether circulating levels of copeptin at ICU admission are associated with 30-day mortality.
METHODS
In this single-center prospective observational study including 225 consecutive patients admitted to a tertiary medical ICU at a university hospital, blood was taken at ICU admission and copeptin levels were measured using a commercially available automated sandwich immunofluorescent assay.
RESULTS
Median acute physiology and chronic health evaluation II (APACHE II) score was 20 and 30-day mortality was 25%. Median copeptin admission levels were significantly higher in non-survivors than in survivors (77.6 pmol/L, IQR 30.7-179.3, versus 45.6 pmol/L, IQR 19.6-109.6; p = 0.025). Patients with serum levels of copeptin in the third tertile at admission had a 2.4-fold (95% CI 1.2-4.6; p = 0.01) increased mortality risk compared to patients in the first tertile. When analyzing patients according to cause of admission, copeptin was only predictive of 30-day mortality in patients admitted due to medical causes as opposed to those admitted after cardiac surgery, as medical patients with levels of copeptin in the highest tertile had a 3.3-fold (95% CI 1.6-6.8, p = 0.002) risk of dying, independent of APACHE II score, primary diagnosis, vasopressor use and need for mechanical ventilation.
CONCLUSION
Circulating levels of copeptin at ICU admission independently predict 30-day mortality in patients admitted to a medical ICU. |
Statin use after colorectal cancer diagnosis and survival: a population-based cohort study. | PURPOSE
To investigate whether statins used after colorectal cancer diagnosis reduce the risk of colorectal cancer-specific mortality in a cohort of patients with colorectal cancer.
PATIENTS AND METHODS
A cohort of 7,657 patients with newly diagnosed stage I to III colorectal cancer were identified from 1998 to 2009 from the National Cancer Data Repository (comprising English cancer registry data). This cohort was linked to the United Kingdom Clinical Practice Research Datalink, which provided prescription records, and to mortality data from the Office of National Statistics (up to 2012) to identify 1,647 colorectal cancer-specific deaths. Time-dependent Cox regression models were used to calculate hazard ratios (HR) for cancer-specific mortality and 95% CIs by postdiagnostic statin use and to adjust these HRs for potential confounders.
RESULTS
Overall, statin use after a diagnosis of colorectal cancer was associated with reduced colorectal cancer-specific mortality (fully adjusted HR, 0.71; 95% CI, 0.61 to 0.84). A dose-response association was apparent; for example, a more marked reduction was apparent in colorectal cancer patients using statins for more than 1 year (adjusted HR, 0.64; 95% CI, 0.53 to 0.79). A reduction in all-cause mortality was also apparent in statin users after colorectal cancer diagnosis (fully adjusted HR, 0.75; 95% CI, 0.66 to 0.84).
CONCLUSION
In this large population-based cohort, statin use after diagnosis of colorectal cancer was associated with longer survival. |
Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come without cost: the resulting distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-the-art approaches focus on selecting one best sentence or calculating soft attention weights over the set of sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem remains a key bottleneck for performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with via soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate a false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems. |
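One minimal way to realize a hard, reward-driven false-positive indicator is a REINFORCE-style update over binary keep/flag decisions. The sketch below is a toy single-step policy-gradient update under assumed inputs (sentence feature vectors and a caller-supplied reward function scoring the induced split), not the paper's full training protocol.

```python
import numpy as np

# Toy REINFORCE update for a false-positive indicator. sent_vecs: (n, d)
# sentence features; theta: (d,) logistic policy weights; reward_fn(mask)
# scores the split (e.g. extractor F1 after moving flagged sentences to the
# negative set) and is supplied by the caller.
def reinforce_step(sent_vecs, theta, reward_fn, lr=0.01, rng=None):
    rng = rng or np.random.default_rng()
    probs = 1.0 / (1.0 + np.exp(-sent_vecs @ theta))   # P(flag as false positive)
    mask = rng.random(probs.shape) < probs             # sampled hard decisions
    reward = reward_fn(mask)
    grad = ((mask - probs)[:, None] * sent_vecs).sum(axis=0)  # score-function gradient
    return theta + lr * reward * grad
```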
Some hybrid models to improve Firefly algorithm performance | The firefly algorithm is an evolutionary optimization algorithm inspired by the behavior of fireflies in nature. Though efficient, its parameters do not change during iterations, which is also true of particle swarm optimization. This paper proposes a hybrid model that improves the firefly algorithm (FA) by introducing learning automata to adjust firefly behavior, and by using a genetic algorithm to enhance global search and generate new solutions. We also propose an approach to stabilize firefly movement during iterations. Simulation results show better performance and accuracy than the standard firefly algorithm. |
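For reference, the standard firefly update that such hybrids build on moves each firefly toward every brighter one with distance-decaying attractiveness plus a random walk. The sketch below is the textbook update with assumed default parameter values; the paper's hybrid would additionally let learning automata tune these parameters and a genetic algorithm inject new solutions.

```python
import numpy as np

# Standard firefly movement step. X: (n, d) positions; brightness: (n,)
# objective-derived light intensities. beta0, gamma, alpha are the usual
# attractiveness, absorption and randomization parameters (assumed values).
def firefly_step(X, brightness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng()
    n, d = X.shape
    X_new = X.copy()
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:          # move toward brighter firefly
                r2 = np.sum((X[j] - X[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)     # distance-decaying attractiveness
                X_new[i] += beta * (X[j] - X[i]) + alpha * (rng.random(d) - 0.5)
    return X_new
```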
Predictive factors for survival of patients with inoperable malignant distal biliary strictures: a practical management guideline. | BACKGROUND
Stenting is the treatment of choice for inoperable malignant strictures of the common bile duct. Criteria for the choice of stents (plastic versus metallic) remain controversial because predicting survival is difficult.
AIMS
To define prognostic factors in order to improve the cost effectiveness of endoscopic palliation.
PATIENTS
One hundred and one patients were included in a prospective trial. Seven prognostic variables for survival were analysed (age, sex, bilirubinaemia, weight loss, presence of liver metastases, and tumour histology and size). All patients were followed until death or at least one year after inclusion. By the end of the study, 81 (80.2%) patients had died.
RESULTS
In univariate analysis, the variables associated with survival were weight loss (p < 0.05) and tumour size (p < 0.01). By multivariate analysis, tumour size was the only independent prognostic factor (p < 0.05). A threshold of 30 mm at diagnosis distinguished two survival profiles: the median survival of patients with a tumour greater than 30 mm was 3.2 months, whereas it was 6.6 months for patients with a tumour less than 30 mm (p < 0.001).
CONCLUSIONS
A practical strategy could be based on tumour size at diagnosis: a metal stent should be systematically chosen for patients with an inoperable tumour smaller than 30 mm, while larger tumours are efficiently palliated by a plastic stent. |
Inverse procedural modeling of 3D models for virtual worlds | This course presents a collection of state-of-the-art approaches for modeling and editing of 3D models for virtual worlds, simulations, and entertainment, in addition to real-world applications. The first contribution of this course is a coherent review of inverse procedural modeling (IPM) (i.e., proceduralization of provided 3D content). We describe different formulations of the problem as well as solutions based on those formulations. We show that although the IPM framework seems under-constrained, the state-of-the-art solutions actually use simple analogies to convert the problem into a set of fundamental computer science problems, which are then solved by corresponding algorithms or optimizations. The second contribution includes a description and categorization of results and applications of the IPM frameworks. Moreover, a substantial part of the course is devoted to summarizing different domain IPM frameworks for practical content generation in modeling and animation. |
Securify: Practical Security Analysis of Smart Contracts | Permissionless blockchains allow the execution of arbitrary programs (called smart contracts), enabling mutually untrusted entities to interact without relying on trusted third parties. Despite their potential, repeated security concerns have shaken the trust in handling billions of USD by smart contracts. To address this problem, we present Securify, a security analyzer for Ethereum smart contracts that is scalable, fully automated, and able to prove contract behaviors as safe/unsafe with respect to a given property. Securify's analysis consists of two steps. First, it symbolically analyzes the contract's dependency graph to extract precise semantic information from the code. Then, it checks compliance and violation patterns that capture sufficient conditions for proving if a property holds or not. To enable extensibility, all patterns are specified in a designated domain-specific language. Securify is publicly released, it has analyzed >18K contracts submitted by its users, and is regularly used to conduct security audits by experts. We present an extensive evaluation of Securify over real-world Ethereum smart contracts and demonstrate that it can effectively prove the correctness of smart contracts and discover critical violations. |
Higher Hemoglobin A1c After Discharge Is an Independent Predictor of Adverse Outcomes in Patients With Acute Coronary Syndrome - Findings From the PACIFIC Registry. | BACKGROUND
Optimal medical therapy (OMT) and the management of coronary risk factors are necessary for secondary prevention of major adverse cardiac and cerebrovascular events (MACCE) in post-acute coronary syndrome (ACS) patients. However, the effect of post-discharge patient adherence has not been investigated in Japanese patients.
METHODS AND RESULTS
The Prevention of AtherothrombotiC Incidents Following Ischemic Coronary Attack (PACIFIC) registry was a multicenter, prospective observational study of 3,597 patients with ACS. Death or MACCE occurred in 229 patients between hospitalization and up to 1 year after discharge. Among 2,587 patients, the association between OMT adherence and risk factor control at 1 year and MACCE occurring between 1 and 2 years after discharge was assessed. OMT was defined as the use of antiplatelet agents, angiotensin-converting enzyme inhibitors, β-blockers, and statins. Risk factor targets were: low-density lipoprotein-cholesterol <100 mg/dl, HbA1c <7.0%, non-smoking status, blood pressure <130/80 mmHg, and 18.5 ≤ body mass index ≤ 24.9 kg/m(2). The incidence of MACCE was 1.8% and was associated with female sex (P=0.020), age ≥75 years (P=0.004), HbA1c ≥7.0% (P=0.004), LV ejection fraction <35% (P<0.001), estimated glomerular filtration rate <60 ml/min (P=0.008), and history of cerebral infarction (P=0.003). In multivariate analysis, lower post-discharge HbA1c was strongly associated with a lower risk of MACCE after ACS (P=0.004).
CONCLUSIONS
Hyperglycemia after discharge is a crucial target for the prevention of MACCE in post-ACS patients. (Circ J 2016; 80: 1607-1614). |
A Framework for Identifying Software Project Risks | We've all heard tales of multimillion-dollar mistakes that somehow ran off course. Are software projects that risky or do managers need to take a fresh approach when preparing for such critical expeditions? Software projects are notoriously difficult to manage and too many of them end in failure. In 1995, annual U.S. spending on software projects reached approximately $250 billion and encompassed an estimated 175,000 projects [6]. Despite the costs involved, press reports suggest that project failures are occurring with alarming frequency. In 1995, U.S. companies alone spent an estimated $59 billion in cost overruns on IS projects and another $81 billion on canceled software projects [6]. One explanation for the high failure rate is that managers are not taking prudent measures to assess and manage the risks involved in these projects. Advocates of software project risk management claim that by countering these threats to success, the incidence of failure can be reduced [4, 5]. Before we can develop meaningful risk management strategies, however, we must identify these risks. Furthermore, the relative importance of these risks needs to be established, along with some understanding as to why certain risks are perceived to be more important than others. This is necessary so that managerial attention can be focused on the areas that constitute the greatest threats. Finally, identified risks must be classified in a way that suggests meaningful risk mitigation strategies. Here, we report the results of a Delphi study in which experienced software project managers identified and ranked the most important risks. The study led not only to the identification of risk factors and their relative importance, but also to novel insights into why project managers might view certain risks as being more important than others. Based on these insights, we introduce a framework for classifying software project risks and discuss appropriate strategies for managing each type of risk. Since the 1970s, both academics and practitioners have written about risks associated with managing software projects [1, 2, 4, 5, 7, 8]. Unfortunately, much of what has been written on risk is based either on anecdotal evidence or on studies limited to a narrow portion of the development process. Moreover, no systematic attempts have been made to identify software project risks by tapping the opinions of those who actually have experience in managing such projects. With a few exceptions [3, 8], there has been little attempt to understand the … |
A 70–90-GHz High-Linearity Multi-Band Quadrature Receiver in 0.35-μm SiGe Technology | An integrated frequency-agile quadrature E-band receiver is presented in this paper. The complete receiver is realized in a commercial 0.35-μm SiGe:C technology with an ft/fmax of 170/250 GHz. The receiver covers the two point-to-point communication bands from 71 to 76 GHz and from 81 to 86 GHz and the automotive radar band at 77 GHz. A wide tuning range modified Colpitts oscillator provides a local oscillator (LO) tuning range > 30%. A two-stage constant phase RC polyphase network is implemented to provide wideband in-phase and quadrature LO signals. The measured phase imbalance of the network stays below 8° over the receiver's frequency range. In addition, the chip includes a wideband low-noise amplifier, Wilkinson power divider, down-conversion mixers, and frequency prescaler. Each of the chip's receiver I/Q paths shows a measured conversion gain above 19 dB and an input-referred 1-dB compression point of -22 dBm. The receiver's measured noise figure stays below 11 dB over the complete frequency range. Furthermore, the receiver has a measured IF bandwidth of 6 GHz. The complete chip including prescaler draws a current of 230 mA from a 3.3-V supply, and consumes a chip area of 1628 μm×1528 μm. |
Direct versus indirect line of sight (LOS) stabilization | Two methods are analyzed for inertially stabilizing the pointing vector defining the line of sight (LOS) of a two-axis gimbaled laser tracker. Mounting the angular rate and acceleration sensors directly on the LOS axes is often used for precision pointing applications. This configuration impacts gimbal size, and the sensors must be capable of withstanding high angular slew rates. With the other stabilization method, sensors are mounted on the gimbal base, which alleviates some issues with the direct approach but may be less efficient, since disturbances are not measured in the LOS coordinate frame. This paper investigates the impact of LOS disturbances and sensor noise on the performance of each stabilization control loop configuration. It provides a detailed analysis of the mechanisms by which disturbances are coupled to the LOS track vector for each approach, and describes the advantages and disadvantages of each. It concludes with a performance comparison based upon simulated sensor noise and three sets of platform disturbance inputs ranging from mild to harsh disturbance environments. |
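To make the coupling mechanism concrete, here is a minimal single-axis numeric sketch (not from the paper; all numbers are invented) contrasting the two sensor placements: a gyro on the LOS axes measures the coupled disturbance directly, while a base-mounted gyro's measurement must be rotated into the LOS frame through the measured gimbal angle, so any angle-sensor error leaks base motion past the stabilization loop:

```python
import numpy as np

# Illustrative single-axis scenario: constant base disturbance rate,
# known gimbal angle, and a small gimbal angle-sensor error.
base_rate = 0.10                    # rad/s, base angular disturbance
gimbal_angle = np.deg2rad(30.0)     # true gimbal angle
angle_error = np.deg2rad(0.5)       # assumed encoder/resolver error

# Disturbance actually coupled onto the LOS axis (single-axis projection).
true_coupled = base_rate * np.cos(gimbal_angle)

# Direct stabilization: the LOS-mounted gyro senses true_coupled itself,
# so the loop can null it (limited only by gyro noise, ignored here).
residual_direct = 0.0

# Indirect stabilization: the base-mounted gyro measurement is rotated into
# the LOS frame through the *measured* gimbal angle, so the angle error
# leaves a residual the loop cannot see.
estimated = base_rate * np.cos(gimbal_angle + angle_error)
residual_indirect = true_coupled - estimated

print(f"direct residual:   {residual_direct:.2e} rad/s")
print(f"indirect residual: {residual_indirect:.2e} rad/s")  # ~4.4e-4 rad/s
```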
Fuzzy Perceptive Values for MDPs with Discounting | In this paper, we formulate the fuzzy perceptive model for discounted Markov decision processes in which the perception for transition probabilities is described by fuzzy sets. The optimal expected reward, called a fuzzy perceptive value, is characterized and calculated by a new fuzzy relation. As a numerical example, a machine maintenance problem is considered. |
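For context, the crisp optimality equation that the fuzzy perceptive value generalizes is the standard discounted-MDP Bellman equation (textbook form, not taken from this paper; the paper replaces the transition probabilities p with fuzzy-set perceptions):

```latex
v^{*}(s) \;=\; \max_{a \in A(s)} \Bigl\{\, r(s,a) \;+\; \beta \sum_{s' \in S} p(s' \mid s, a)\, v^{*}(s') \Bigr\},
\qquad 0 \le \beta < 1 .
```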
Acromelic frontonasal dysostosis and ZSWIM6 mutation: phenotypic spectrum and mosaicism | Acromelic frontonasal dysostosis (AFND) is a distinctive and rare frontonasal malformation that presents in combination with brain and limb abnormalities. A single recurrent heterozygous missense substitution in ZSWIM6, encoding a protein of unknown function, was previously shown to underlie this disorder in four unrelated cases. Here we describe four additional individuals from three families, comprising two sporadic subjects (one of whom had no limb malformation) and a mildly affected female with a severely affected son. In the latter family we demonstrate parental mosaicism through deep sequencing of DNA isolated from a variety of tissues, which each contain different levels of mutation. This has important implications for genetic counselling. |
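The mosaicism finding rests on a simple quantity: the fraction of deep-sequencing reads carrying the variant in each tissue. A minimal sketch of that computation follows; all read counts are invented for illustration and are not the study's data (a heterozygous constitutional variant is expected near 0.5, mosaicism well below it):

```python
# Variant allele fraction (VAF) per tissue from deep-sequencing read counts.
tissues = {
    "blood":  {"ref_reads": 980, "alt_reads": 45},
    "saliva": {"ref_reads": 910, "alt_reads": 120},
    "skin":   {"ref_reads": 800, "alt_reads": 210},
}

for tissue, counts in tissues.items():
    total = counts["ref_reads"] + counts["alt_reads"]
    vaf = counts["alt_reads"] / total
    print(f"{tissue:>6}: VAF = {vaf:.3f} ({counts['alt_reads']}/{total} reads)")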
The eukaryotic genome is structurally and functionally more like a social insect colony than a book. | Traditionally, the genome has been described as the 'book of life'. However, the metaphor of a book may not reflect the dynamic nature of the structure and function of the genome. In the eukaryotic genome, the number of centrally located protein-coding sequences is relatively constant across species, but the amount of noncoding DNA increases considerably with increasing organismal evolutionary complexity. Therefore, it has been hypothesized that the abundant peripheral noncoding DNA protects the genome and the central protein-coding sequences in the eukaryotic genome. A comparison with the habitation, sociality and defense mechanisms of a social insect colony shows that the genome resembles such a colony in various respects. A social insect colony may thus be a better metaphor than a book to describe the spatial organization and physical functions of the genome. The potential implications of the metaphor are also discussed. |
IMAGE PROCESSING TECHNIQUES FOR THE ENHANCEMENT OF BRAIN TUMOR PATTERNS | Brain tumor analysis is performed by doctors, but tumor grading can lead to conclusions that vary from one doctor to another. To assist doctors, this research used software with edge detection and segmentation methods, which produced the edge pattern and segment of the brain and of the brain tumor itself. Medical image segmentation has been a vital area of research, as it involves complex problems relevant to the proper diagnosis of brain disorders. This research provides a foundation for segmentation and edge detection as the first step towards brain tumor grading. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described, along with the difficulties encountered in each modality. |
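A minimal sketch of the kind of pipeline this abstract describes, using OpenCV's stock Canny edge detector and Otsu thresholding; the paper's exact methods and parameters are not specified here, and the file name and hysteresis thresholds below are placeholders:

```python
import cv2

# Load an MRI slice in grayscale ("brain_mri.png" is a placeholder path).
img = cv2.imread("brain_mri.png", cv2.IMREAD_GRAYSCALE)

# Light smoothing suppresses noise before edge detection.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Edge pattern: Canny with illustrative hysteresis thresholds.
edges = cv2.Canny(blurred, 50, 150)

# Segmentation: Otsu's method picks a global threshold automatically,
# separating bright tumor-like regions from darker tissue.
_, segmented = cv2.threshold(blurred, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("edges.png", edges)
cv2.imwrite("segmented.png", segmented)
```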
Evolving Graphs and Networks with Edge Encoding: Preliminary Report | We present an alternative to the cellular encoding technique [Gruau 1992] for evolving graph and network structures via genetic programming. The new technique, called edge encoding, uses edge operators rather than the node operators of cellular encoding. While both cellular encoding and edge encoding can produce all possible graphs, the two encodings bias the genetic search process in different ways; each may therefore be most useful for a different set of problems. The problems for which these techniques may be used, and for which we think edge encoding may be particularly useful, include the evolution of recurrent neural networks, finite automata, and graph-based queries to symbolic knowledge bases. In this preliminary report we present a technical description of edge encoding and an initial comparison to cellular encoding. Experimental investigation of the relative merits of these encoding schemes is currently in progress. |
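As a rough illustration of the idea, an edge-encoded genome can be read as a tree of operators, each acting on an edge to grow a graph. The operator names and semantics below are invented for this sketch and are not the paper's exact operator set:

```python
def grow(edges, edge, program, counter):
    """Apply a tree of illustrative edge operators to one edge of `edges`.

    Each program node is (op, [children]); children act on the edges the
    operator produces. Returns the updated node counter."""
    op, children = program
    u, v = edge
    if op == "split":                 # insert a new vertex on the edge
        w = counter
        counter += 1
        edges.remove(edge)
        produced = [(u, w), (w, v)]
        edges.extend(produced)
    elif op == "dup":                 # add a parallel edge (multigraph)
        edges.append((u, v))
        produced = [(u, v), (u, v)]
    else:                             # "stop": leave the edge as-is
        produced = []
    for child, e in zip(children, produced):
        counter = grow(edges, e, child, counter)
    return counter

# A tiny edge-encoded "genome": split the start edge, then duplicate one half.
genome = ("split", [("dup", [("stop", []), ("stop", [])]),
                    ("stop", [])])
edges = [(0, 1)]
grow(edges, (0, 1), genome, counter=2)
print(edges)   # [(0, 2), (2, 1), (0, 2)] -- the grown multigraph's edges
```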
Estimates of beneficial and harmful sun exposure times during the year for major Australian population centres. | OBJECTIVE
To examine the influence of geographical and seasonal factors on duration of solar ultraviolet (UV) radiation exposure of skin to produce recommended vitamin D levels without producing erythema.
DESIGN AND SETTING
An ecological study using daily Ultraviolet Index (UVI) data collected in major population centres across Australia for 1 year (1 January - 31 December 2001) to calculate sun exposure times for recommended vitamin D production and erythema.
MAIN OUTCOME MEASURES
Sun exposure times to produce either serum vitamin D concentrations equivalent to an oral intake of 200-600 IU/day or erythema for people aged 19-50 years with fair skin (Fitzpatrick type II skin) exposing 15% of the body.
RESULTS
In January, across Australia, 2-14 minutes of sun exposure three to four times per week at 12:00 is sufficient to ensure recommended vitamin D production in fair-skinned people with 15% of the body exposed. However, erythema can occur in as little as 8 minutes. By contrast, at 10:00 and 15:00 there is a greater margin between the exposure time that produces erythema and that needed for recommended vitamin D levels, thereby reducing the risk of sunburn from overexposure. From October to March, 10-15 minutes of sun exposure at around 10:00 or 15:00 three to four times per week should be enough for fair-skinned people across Australia to produce recommended vitamin D levels. Longer exposure times are needed from April to September, particularly in southern regions of Australia. (A back-of-envelope sketch of the erythema-time arithmetic follows this abstract.)
CONCLUSION
Our study reinforces the importance of existing sun protection messages for the summer months throughout Australia. However, fair-skinned people should be able to obtain sufficient vitamin D from short periods of unprotected sun exposure of the face, arms and hands outside of the peak UV period (10:00-15:00) throughout Australia for most of the year. The greater variability in sun exposure times during winter means that optimal sun exposure advice should be tailored to each location. |
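The arithmetic behind such exposure-time estimates can be sketched from the standard UVI definition (1 UVI unit = 25 mW/m² of erythemally weighted irradiance) and a nominal minimal erythema dose of roughly 250 J/m² for Fitzpatrick type II skin. The study's vitamin D dose model is not reproduced here, so only the erythema side is sketched; the MED value is nominal, not the study's:

```python
# Minutes of sun before erythema, for a given UV Index (UVI).
MED_TYPE_II = 250.0   # J/m^2, nominal minimal erythema dose, skin type II
W_PER_UVI = 0.025     # W/m^2 of erythemal irradiance per UVI unit (standard)

def minutes_to_erythema(uvi: float) -> float:
    irradiance = uvi * W_PER_UVI           # W/m^2
    return MED_TYPE_II / irradiance / 60   # seconds -> minutes

for uvi in (3, 8, 12):                     # winter noon .. extreme summer noon
    print(f"UVI {uvi:>2}: ~{minutes_to_erythema(uvi):.0f} min to erythema")
# UVI  3: ~56 min;  UVI  8: ~21 min;  UVI 12: ~14 min
```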
Considerations and recent advances in QSAR models for cytochrome P450-mediated drug metabolism prediction | Quantitative structure-activity relationship (QSAR) methods are urgently needed for predicting ADME/T (absorption, distribution, metabolism, excretion and toxicity) properties, both to select lead compounds for optimization at the early stage of drug discovery and to screen drug candidates for clinical trials. Use of suitable QSAR models ultimately results in reduced time and cost and a lower attrition rate during drug discovery and development. Among the ADME/T parameters, drug metabolism is a key determinant of metabolic stability, drug-drug interactions, and drug toxicity. QSAR models for predicting drug metabolism have undergone significant advances recently. However, most of the models in use lack sufficient interpretability and offer poor predictability for novel drugs. In this review, we describe some considerations to be taken into account in QSAR modeling of drug metabolism, such as the accuracy/consistency of the entire data set, representation and diversity of the training and test sets, and variable selection. We also describe some novel statistical techniques (ensemble methods, multivariate adaptive regression splines and graph machines) that are not yet used frequently to develop QSAR models for drug metabolism. Subsequently, rational recommendations for developing predictable and interpretable QSAR models are made. Finally, recent advances in QSAR models for cytochrome P450-mediated drug metabolism prediction, including in vivo hepatic clearance, in vitro metabolic stability, and inhibitors and substrates of cytochrome P450 families, are briefly summarized. |
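As one concrete instance of the ensemble methods the review recommends, a random-forest QSAR model over precomputed molecular descriptors might look like the sketch below. The descriptor matrix `X` and metabolic-stability labels `y` are stand-ins (random data here), and the hyperparameters are illustrative, not recommendations from the review:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: (n_compounds, n_descriptors) matrix of molecular descriptors,
# y: binary metabolic-stability labels. Random data stands in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                               oob_score=True, random_state=0)

# Cross-validated accuracy guards against the overfitting the review warns of.
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Descriptor importances offer some of the interpretability the review asks for.
model.fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top descriptor indices:", top)
```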
Accurate indoor navigation using Smartphone, Bluetooth Low Energy and Visual Tags | Every moment of our day takes place in a location, identified by geographical coordinates that connect the physical and digital worlds. For that reason it is important to know the user's position: it enables new location-related services and gives people the opportunity to find their way in complex environments such as hospitals. The most practical approach to realizing an indoor navigation system is to exploit the potential of the smartphone. Different techniques have been proposed in the scientific literature, but the topic is still a matter of research. In this paper we propose a smartphone-based indoor navigation system that uses both Bluetooth Low Energy (BLE) and a visual-tag system. A BLE beacon system is deployed in the environment to segment a large area into smaller areas and to perform rough background localization. Accurate indoor navigation is then performed using the camera to decode a visual-tag system deployed on the room's floor. |
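A minimal sketch of the two-stage scheme described above; the beacon IDs, area names, and tag payload format are all placeholders invented for illustration. The strongest BLE beacon picks the coarse area, and a decoded floor tag, when the camera sees one, refines the fix to an exact position:

```python
from typing import Optional

# Coarse stage: map each BLE beacon to the area it covers (placeholder IDs).
BEACON_AREA = {"beacon-A": "ward-1", "beacon-B": "ward-2", "beacon-C": "lobby"}

# Fine stage: floor tags encode exact coordinates (placeholder format "x;y").
def decode_tag(payload: str) -> tuple[float, float]:
    x, y = payload.split(";")
    return float(x), float(y)

def locate(rssi_by_beacon: dict[str, int],
           tag_payload: Optional[str]) -> dict:
    """Rough background localization via the strongest beacon; a decoded
    visual tag, when available, refines the fix to a point."""
    strongest = max(rssi_by_beacon, key=rssi_by_beacon.get)
    fix = {"area": BEACON_AREA[strongest], "position": None}
    if tag_payload is not None:
        fix["position"] = decode_tag(tag_payload)
    return fix

print(locate({"beacon-A": -72, "beacon-B": -60}, tag_payload="12.5;3.0"))
# {'area': 'ward-2', 'position': (12.5, 3.0)}
```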