title | abstract
---|---
Hyponatremia and hypernatremia in the newborn: in medio stat virtus. | Hyponatremia and hypernatremia are complex clinical problems that occur frequently in full-term newborns and in preterm infants admitted to the Neonatal Intensive Care Unit (NICU), although their true frequency and etiology are incompletely known. The pathogenetic mechanisms and clinical timing of hypo- and hypernatremia are well understood in adults, whereas in the newborn it is less clear how and when hypo- and hypernatremia can alter cerebral osmotic equilibrium, and how long brain cells take to adapt to the new hypo- or hypertonic environment. The aim of this review is to present a practical approach to the management of hypo- and hypernatremia in newborns, especially preterm infants. |
Low-Profile Millimeter-Wave SIW Cavity-Backed Dual-Band Circularly Polarized Antenna | A single-fed low-profile millimeter-wave substrate-integrated waveguide (SIW) cavity-backed slot antenna for dual-band circular polarization (CP) applications is proposed. First, dual-band CP radiation is achieved by employing two annular exponential slots carved on the surface of a circular SIW cavity. Two different modes, the cavity mode and the surface mode, modulated by the cavity field distribution and the surface current respectively, are investigated and utilized to achieve the dual-band CP radiation. Next, a second-order matching circuit using inductive windows is introduced to achieve dual-band matching in this low-profile antenna design, with the help of the differential evolution method. Finally, an antenna prototype for dual-band CP operation at 37.5 and 47.8 GHz is fabricated and its performance is evaluated. Simulation and measurement results agree very well. |
Rectenna architecture based energy harvester for low power RFID application | Large-scale implementation of active RFID tag technology has been restricted by the need for battery replacement. Prolonging battery lifespan may potentially promote active RFID tags, which offer obvious advantages over passive RFID systems. This paper explores opportunities to simulate and develop a prototype RF energy harvester for the 2.4 GHz band, specifically designed for low-power active RFID tag applications. The system employs a rectenna architecture: a receiving antenna attached to a rectifying circuit that efficiently converts RF energy to DC current. Initial ADS simulation results show that a 2 V output voltage can be achieved using a seven-stage Cockcroft-Walton rectifying circuit with -4.881 dBm (0.325 mW) output power under a -4 dBm (0.398 mW) input RF signal. These results lend support to the idea that RF energy harvesting is indeed promising. |
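To make the quoted power levels concrete, here is a minimal dBm-to-milliwatt conversion (standard RF arithmetic, not code from the paper):

```python
def dbm_to_mw(p_dbm: float) -> float:
    """Convert power from dBm to milliwatts: P(mW) = 10**(P(dBm)/10)."""
    return 10 ** (p_dbm / 10)

# Figures quoted in the abstract:
print(round(dbm_to_mw(-4.0), 3))    # input RF signal:  ~0.398 mW
print(round(dbm_to_mw(-4.881), 3))  # rectifier output: ~0.325 mW
```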
EPITOMIC VARIATIONAL AUTOENCODER | In this paper, we propose the epitomic variational autoencoder (eVAE), a probabilistic generative model of high-dimensional data. eVAE is composed of a number of sparse variational autoencoders called 'epitomes', such that each epitome partially shares its encoder-decoder architecture with the other epitomes in the composition. We show that the proposed model largely overcomes the model over-pruning problem common in variational autoencoders (VAE). We substantiate that eVAE uses its model capacity efficiently and generalizes better than VAE by presenting qualitative and quantitative results on the MNIST and TFD datasets. |
Aerial picking and delivery of magnetic objects with MAVs | Autonomous delivery of goods using a Micro Air Vehicle (MAV) is a difficult problem, as it places high demands on the MAV's control, perception and manipulation capabilities. The problem is especially challenging if the exact shape, location and configuration of the objects are unknown. In this paper, we report our findings from the development and evaluation of a fully integrated, energy-efficient system that enables MAVs to pick up and deliver objects with partly ferrous surfaces of varying shapes and weights. This is achieved by a novel combination of an electro-permanent magnetic gripper with a passively compliant structure, integrated with detection, control and servo-positioning algorithms. The system's ability to grasp stationary and moving objects was tested, as well as its ability to cope with different object shapes and external disturbances. We show that such a system can be successfully deployed in scenarios where an object with partly ferrous parts needs to be gripped and placed in a predetermined location. |
Intra-class Variation Isolation in Conditional GANs | Current state-of-the-art conditional generative adversarial networks (C-GANs) require strong supervision via labeled datasets in order to generate images with continuously adjustable, disentangled semantics. In this paper we introduce a new formulation of the C-GAN that is able to learn realistic models with continuous, semantically meaningful input parameters and which has the advantage of requiring only the weak supervision of binary attribute labels. We coin the method intra-class variation isolation (IVI) and the resulting network the IVI-GAN. The method allows continuous control over the attributes in synthesised images where precise labels are not readily available. For example, given only labels found using a simple classifier of ambient / non-ambient lighting in images, IVI has enabled us to learn a generative face-image model with controllable lighting that is disentangled from other factors in the synthesised images, such as the identity. We evaluate IVI-GAN on the CelebA and CelebA-HQ datasets, learning to disentangle attributes such as lighting, pose, expression and age, and provide a quantitative comparison of IVI-GAN with a classical continuous C-GAN. |
Music composition by onomatopoeia | We introduce a simple music notation system based on onomatopoeia. Although many text-based music notation systems have been proposed, most of them are cryptic and much more difficult to understand than standard graphical musical scores. Our music notation system, called the Sutoton notation, is based on onomatopoeia and note names that are easily pronounceable by humans with no extra training. Despite its simplicity, the Sutoton notation has been used in a variety of systems and is loved by many hobbyists. |
A retrieved context account of spacing and repetition effects in free recall. | Repeating an item in a list benefits recall performance, and this benefit increases when the repetitions are spaced apart (Madigan, 1969; Melton, 1970). Retrieved context theory incorporates 2 mechanisms that account for these effects: contextual variability and study-phase retrieval. Specifically, if an item presented at position i is repeated at position j, this leads to retrieval of its context from its initial presentation at i (study-phase retrieval), and this retrieved context will be used to update the current state of context (contextual variability). Here we consider predictions of a computational model that embodies retrieved context theory, the context maintenance and retrieval model (CMR; Polyn, Norman, & Kahana, 2009). CMR makes the novel prediction that subjects are more likely to successively recall items that follow a shared repeated item (e.g., i + 1, j + 1) because both items are associated with the context of the repeated item presented at i and j. CMR also predicts that the probability of recalling at least 1 of 2 studied items should increase with the items' spacing (Lohnas, Polyn, & Kahana, 2011). We tested these predictions in a new experiment, and CMR's predictions were upheld. These findings suggest that retrieved context theory offers an integrated explanation for repetition and spacing effects in free recall tasks. |
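As a sketch of the retrieved-context mechanism described above (our illustration, not code from the paper; the update rule follows the standard CMR formulation in which context drifts toward retrieved context):

```python
import numpy as np

def update_context(c, c_in, beta):
    """One CMR-style context update: c_new = rho*c + beta*c_in, where rho is
    chosen so that c_new keeps unit length (c is assumed unit-norm)."""
    c_in = c_in / np.linalg.norm(c_in)
    d = c @ c_in
    rho = np.sqrt(1 + beta**2 * (d**2 - 1)) - beta * d
    return rho * c + beta * c_in

# When an item repeats at position j, the context retrieved from its first
# presentation at i (study-phase retrieval) enters as c_in, blending
# position-i context into the current state of context (contextual variability).
```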
Wireless Technologies for IoT in Smart Cities | As cities continue to grow, numerous Smart City initiatives are being conducted. The concept of a Smart City encompasses several domains: governance, economy, management, infrastructure, technology and people. This means that a Smart City can have diverse communication needs. Wireless technologies such as WiFi, ZigBee, Bluetooth, WiMAX, 4G and LTE (Long Term Evolution) have presented themselves as solutions to the communication needs of Smart City initiatives. However, as most of them employ unlicensed bands, interference and coexistence problems are increasing. In this paper, the wireless technologies currently available for IoT (Internet of Things) in Smart Cities are presented. Our contribution is a review of these wireless technologies, a comparison among them, and a discussion of the problems that hinder their coexistence. To this end, the characteristics of the wireless technologies and their adequacy to each domain are considered. The problems arising from the over-crowded unlicensed spectrum and the coexistence difficulties among technologies are discussed as well. Finally, power-consumption concerns are addressed. |
Anaemia management with subcutaneous epoetin delta in patients with chronic kidney disease (predialysis, haemodialysis, peritoneal dialysis): results of an open-label, 1-year study | BACKGROUND
Anaemia is common in patients with chronic kidney disease (CKD) and can be managed by therapy with erythropoiesis-stimulating agents (ESAs). Epoetin delta (DYNEPO, Shire plc) is the only epoetin produced in a human cell line. The aim of this study was to demonstrate the safety and efficacy of subcutaneously administered epoetin delta for the management of anaemia in CKD patients (predialysis, peritoneal dialysis or haemodialysis).
METHODS
This was a 1-year, multicentre, open-label study. Patients had previously received epoetin subcutaneously and were switched to epoetin delta at an identical dose to their previous therapy. Dose was titrated to maintain haemoglobin at 10.0-12.0 g/dL. The primary endpoint was mean haemoglobin over Weeks 12-24. Secondary analyses included long-term haemoglobin, haematocrit and dosing levels. Safety was assessed by monitoring adverse events, laboratory parameters and physical examinations.
RESULTS
In total 478 patients received epoetin delta, forming the safety-evaluable population. Efficacy analyses were performed on data from 411 of these patients. Mean +/- SD haemoglobin over Weeks 12-24 was 11.3 +/- 1.1 g/dL. Mean +/- SD weekly dose over Weeks 12-24 was 84.4 +/- 72.7 IU/kg. Haemoglobin levels were maintained for the duration of the study. Epoetin delta was well tolerated, with adverse events occurring at rates expected for a CKD patient population; no patient developed anti-erythropoietin antibodies.
CONCLUSION
Subcutaneously administered epoetin delta is an effective and well-tolerated agent for the management of anaemia in CKD patients, irrespective of dialysis status.
TRIAL REGISTRATION
http://www.controlled-trials.com ISRCTN68321818. |
Leveraging social networks to gain access to organisational resources | We describe a federated identity management service that allows users to access organisational resources using their existing login accounts at social networking and other sites, without compromising the security of the organisation's resources. We utilise and extend the Level of Assurance (LoA) concept to ensure the organisation's site remains secure. Users are empowered to link together their various accounts, including their organisational account with external ones, so that the strongest registration procedure of one linked account can be leveraged by the login processes of the other sites, which have less stringent registration procedures. Coupled with attribute release from their organisational account, this allows users to escalate their privileges through an increased LoA, additional attributes, or both. The conceptual and architectural designs are described, followed by the implementation details, the user trials we carried out, and a discussion of the current limitations of the system. |
A survey on vehicular cloud computing | Vehicular networking has become a significant research area due to its specific features and applications such as standardization, efficient traffic management, road safety and infotainment. Vehicles are expected to carry relatively more communication systems, on-board computing facilities, storage and increased sensing power. Hence, several technologies have been deployed to maintain and promote Intelligent Transportation Systems (ITS). Recently, a number of solutions have been proposed to address the challenges and issues of vehicular networks, Vehicular Cloud Computing (VCC) being one of them. VCC is a new hybrid technology that has a remarkable impact on traffic management and road safety by instantly using vehicular resources, such as computing, storage and internet connectivity, for decision making. This paper presents a state-of-the-art survey of vehicular cloud computing. Moreover, we present a taxonomy for the vehicular cloud in which special attention is devoted to the extensive applications, cloud formations, key management, inter-cloud communication systems, and broad aspects of privacy and security issues. Through an extensive review of the literature, we design an architecture for VCC and itemize the properties required in a vehicular cloud to support this model. We compare this mechanism with conventional Cloud Computing (CC) and discuss open research issues and future directions. By reviewing and analyzing the literature, we find that VCC is a technologically feasible and economically viable paradigm shift for converging intelligent vehicular networks towards autonomous traffic, vehicle control and perception systems. |
Adaptive Control of KNTU Planar Cable-Driven Parallel Robot with Uncertainties in Dynamic and Kinematic Parameters | This paper addresses the design and implementation of adaptive control on a planar cable-driven parallel robot with uncertainties in its dynamic and kinematic parameters. To develop the idea, adaptation is first performed on the dynamic parameters, and it is shown that the controller is stable despite the kinematic uncertainties. Then, the internal force term is linearly separated into a regressor matrix and a kinematic parameter vector that contains the estimation error. In the next step, to improve the controller performance, adaptation is performed on both the dynamic and kinematic parameters. It is shown that the performance of the proposed controller is improved by the correction in the internal forces. The proposed controller not only keeps all cables in tension over the whole workspace of the robot; it is also computationally simple and does not require measurement of the end-effector acceleration. Finally, the effectiveness of the proposed control algorithm is examined through experiments on the KNTU planar cable-driven parallel robot, and it is shown that the algorithm provides suitable performance in practice. |
Dynamic video anomaly detection and localization using sparse denoising autoencoders | The emergence of novel techniques for automatic anomaly detection in surveillance videos has significantly reduced the burden of manually processing large, continuous video streams. However, existing anomaly detection systems suffer from high false-positive rates and are not real-time, which limits their practical use. Furthermore, their predefined feature selection techniques limit their application to specific cases. To overcome these shortcomings, a dynamic anomaly detection and localization system is proposed that uses deep learning to automatically learn relevant features. In this technique, each video is represented as a group of cubic patches for identifying local and global anomalies. A unique sparse denoising autoencoder architecture is used that significantly reduces the computation time and lowers the number of false positives in frame-level anomaly detection by more than 2.5%. Experimental analysis on two benchmark datasets, the UMN dataset and the UCSD Pedestrian dataset, shows that our algorithm outperforms state-of-the-art models in terms of false-positive rate while also showing a significant reduction in computation time. |
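As a hedged illustration of the core mechanism (our sketch, not the authors' architecture; layer sizes, noise level and the sparsity penalty are assumptions), a denoising autoencoder is trained to reconstruct normal patches, and patches that reconstruct poorly at test time are flagged as anomalous:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseDenoisingAE(nn.Module):
    def __init__(self, dim=500, hidden=100):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        h = self.enc(x + 0.1 * torch.randn_like(x))  # corrupt input (denoising)
        return self.dec(h), h

def loss_fn(x, x_hat, h, sparsity_weight=1e-3):
    # reconstruction error plus an L1 penalty on activations (sparsity)
    return F.mse_loss(x_hat, x) + sparsity_weight * h.abs().mean()

# At test time, a cubic patch whose reconstruction error exceeds a threshold
# fitted on normal training data is reported as an anomaly.
```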
Blogs are Echo Chambers: Blogs are Echo Chambers | In the last decade, blogs have exploded in number, popularity and scope. However, many commentators and researchers speculate that blogs isolate readers in echo chambers, cutting them off from dissenting opinions. Our empirical paper tests this hypothesis. Using a hand-coded sample of over 1,000 comments from 33 of the world's top blogs, we find that agreement outnumbers disagreement in blog comments by more than 3 to 1. However, this ratio depends heavily on a blog's genre, varying between 2 to 1 and 9 to 1. Using these hand-coded blog comments as input, we also show that natural language processing techniques can identify the linguistic markers of agreement. We conclude by applying our empirical and algorithmic findings to practical implications for blogs, and discuss the many questions raised by our work. |
TEXT MINING: PROMISES AND CHALLENGES | Text mining, also known as knowledge discovery from text, and document information mining, refers to the process of extracting interesting patterns from very large text corpora for the purpose of discovering knowledge. Text mining is an interdisciplinary field involving information retrieval, text understanding, information extraction, clustering, categorization, visualization, database technology, machine learning, and data mining. Regarded by many as the next wave of knowledge discovery, text mining has a very high commercial value. This talk presents a general framework for text mining, consisting of two stages: text refining, which transforms unstructured text documents into an intermediate form; and knowledge distillation, which deduces patterns or knowledge from the intermediate form. We then survey the state-of-the-art text mining approaches, products, and applications by aligning them based on the text refining and knowledge distillation functions as well as the intermediate form that they adopt. In conclusion, we highlight the upcoming challenges of text mining and the opportunities it offers. |
What Have We Been Calling Integration? | The word integration has been used with different meanings in the ontology field. This article aims at clarifying the meaning of the word "integration" and presenting some of the relevant work done in integration. We identify three meanings of ontology "integration": when building a new ontology reusing (by assembling, extending, specializing or adapting) other ontologies already available; when building an ontology by merging several ontologies into a single one that unifies all of them; and when building an application using one or more ontologies. We discuss the different meanings of "integration", identify the main characteristics of the three different processes and propose three words to distinguish among those meanings: integration, merge and use. |
Web table taxonomy and formalization | The Web is the largest repository of data available, with over 150 million high-quality tables. Several works have combined efforts to allow queries over these tables, but challenges remain, such as the many different types of structures found on the Web. In this paper, we propose a taxonomy for tabular structures, formalize the ones used with relational data, and show, through an experimental evaluation, that WTClassifier, our supervised framework, classifies Web tables with high accuracy. Additionally, we use WTClassifier to categorize more than 300 thousand Web tables into our taxonomy and find that 82.25% are not formatted like relational structures. |
Double-Hypergraph Based Sentence Ranking for Query-Focused Multi-document Summarization | Traditional graph-based sentence ranking approaches model the documents as a text graph where vertices represent sentences and edges represent pairwise similarity relationships between two sentences. Such modeling cannot capture complex group relationships shared among multiple sentences, which can be useful for sentence ranking. In this paper, we propose two different group relationships (the sentence-topic relationship and the document-topic relationship) shared among sentences, and construct a double hypergraph integrating these relationships into a unified framework. Then, a double-hypergraph based sentence ranking algorithm is developed for query-focused multi-document summarization, in which a Markov random walk is defined on each hypergraph and mixture Markov chains are formed so as to perform transductive learning in the double hypergraph. When evaluated on DUC datasets, the proposed approach performs remarkably well. |
A 3D-stacked logic-in-memory accelerator for application-specific data intensive computing | This paper introduces a 3D-stacked logic-in-memory (LiM) system that integrates a 3D die-stacked DRAM architecture with application-specific LiM ICs to accelerate important data-intensive computations. The proposed system comprises a fine-grained, rank-level 3D die-stacked DRAM device and extra LiM layers implementing logic-enhanced SRAM blocks that are dedicated to a particular application. Through-silicon vias (TSVs) are used for vertical interconnection, providing the bandwidth required to support high-performance LiM computing. We performed a comprehensive 3D DRAM design space exploration and identify efficient architectures that balance performance and power. Our experiments demonstrate orders-of-magnitude improvements in performance and power efficiency compared with a traditional multithreaded software implementation on a modern CPU. |
Gradient Descent Finds Global Minima of Deep Neural Networks | Simon S. Du, Jason D. Lee*, Haochuan Li*, Liwei Wang*, and Xiyu Zhai*. Machine Learning Department, Carnegie Mellon University; Data Science and Operations Department, University of Southern California; School of Physics, Peking University; Key Laboratory of Machine Perception, MOE, School of EECS, Peking University; Center for Data Science, Peking University; Beijing Institute of Big Data Research; Department of EECS, Massachusetts Institute of Technology |
Scalable NDN Forwarding: Concepts, Issues and Principles | Named Data Networking (NDN) is a recently proposed general-purpose network architecture that leverages the strengths of the Internet architecture while aiming to address its weaknesses. NDN names packets rather than end-hosts, and most of NDN's characteristics are a consequence of this fact. In this paper, we focus on the packet forwarding model of NDN. Each packet has a unique name which is used to make forwarding decisions in the network. NDN forwarding differs substantially from that in IP; namely, NDN forwards based on variable-length names and has a read-write data plane. Designing and evaluating a scalable NDN forwarding node architecture is a major effort within the overall NDN research agenda. In this paper, we present the concepts, issues and principles of scalable NDN forwarding plane design. The essential function of the NDN forwarding plane is fast name lookup. By studying the performance of the NDN reference implementation, known as CCNx, and simplifying its forwarding structure, we identify three key issues in the design of a scalable NDN forwarding plane: 1) exact string matching with fast updates, 2) longest prefix matching for variable-length and unbounded names, and 3) large-scale flow maintenance. We also present five forwarding plane design principles for achieving 1 Gbps throughput in a software implementation and 10 Gbps with hardware acceleration. |
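To illustrate longest-prefix matching on variable-length, unbounded names (a toy sketch in Python, not CCNx code; the FIB contents are invented):

```python
def lpm(fib: dict, name: str):
    """Longest-prefix match over '/'-delimited, NDN-style names."""
    components = name.strip("/").split("/")
    for i in range(len(components), 0, -1):       # try longest prefix first
        prefix = "/" + "/".join(components[:i])
        if prefix in fib:
            return prefix, fib[prefix]
    return None, None

fib = {"/parc": "face1", "/parc/videos": "face2"}
print(lpm(fib, "/parc/videos/demo.mpg/1/2"))      # ('/parc/videos', 'face2')
```

Unlike IP's fixed 32- or 128-bit addresses, the number of name components is unbounded, which is what makes fast name lookup the central design challenge.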
Robot-assisted anterior lumbar interbody fusion in a Swine model in vivo test of the da vinci surgical-assisted spinal surgery system. | STUDY DESIGN
The da Vinci Surgical System was used to perform an anterior lumbar interbody fusion in a swine model to identify the technical properties, processes, merits, demerits, and limitations of a video-assisted robotic surgical system.
OBJECTIVE
This study was designed to demonstrate the feasibility of using a robotic surgical system to perform spinal surgery.
SUMMARY OF BACKGROUND DATA
Video-assisted laparoscopic anterior fusion was first reported in 1995 and was in the spotlight for several years afterward. However, this technique has not become popular because of technical difficulties and complications associated with video-assisted procedures on the spine. As such, there is a demand for investigations to improve this technology. The da Vinci Surgical System provides 3-dimensional visualization as well as uniquely dexterous instruments that are remarkably similar to human hands. Video-assisted surgery with the da Vinci Surgical System robot has already provided great value to the fields of urology, cardiology, gynecology, and general surgery over the last decade. Preclinical studies for application of this system in spinal surgery have recently been conducted.
METHODS
A pig underwent anterior lumbar interbody fusion with da Vinci Surgical System assistance, with Tyche expandable cages used for preparation of the endplates and cage placement. The setup time, operation time, amount of bleeding, and the number of complications associated with robotic manipulation were recorded. Before euthanasia, the animal underwent radiologic examination to confirm proper placement of the cages.
RESULTS
The total procedure took 6 hours, with some complications related to frozen arms and robotic arm collisions. Even so, there was neither any significant nerve or vessel injury nor peritoneal organ damage. Furthermore, radiologic assessment confirmed proper position of the cage in the center of the disc space.
CONCLUSION
Use of the da Vinci Surgical System to perform an anterior spinal procedure was shown to be safe and effective in a swine model. The utilization of this advanced technology shows promise to reduce the incidence of complications compared with other approaches. It requires further testing in animal models and cadavers, along with serial comparisons to current procedures. |
Is the smile line a valid parameter for esthetic evaluation? A systematic literature review. | UNLABELLED
The "smile line" is commonly used as a parameter to evaluate and categorize a person's smile. This systematic literature review assessed the existing evidence on the validity and universal applicability of this parameter. The latter was evaluated based on studies on smile perception by orthodontists, general clinicians, and laypeople.
METHODS
A review of the literature published between October 1973 and January 2010 was conducted with the electronic database PubMed and the search terms "smile," "smile line," "smile arc," and "smile design."
RESULTS
The search yielded 309 articles, of which nine studies were included based on the selection criteria. The selected studies typically correlate the smile line with the position of the upper lip during a smile while, on average, 75 to 100% of the maxillary anterior teeth are exposed. A virtual line that connects the incisal edges of the maxillary anterior teeth commonly follows the upper border of the lower lip. Average and parallel smile lines are most common, influenced by the age and gender of a person. Orthodontists, general clinicians, and laypeople have similar preferences and rate average smile lines as most attractive.
CONCLUSIONS
The smile line is a valid tool to assess the esthetic appearance of a smile. It can be applied universally as clinicians and laypersons perceive and judge it similarly. |
Millimetre-wave T-shaped antenna with defected ground structures for 5G wireless networks | This paper presents a T-shaped antenna for millimetre-wave (MMW) frequency ranges that offers a number of advantages, including a simple structure, high operating bandwidth, and high gain. Defected ground structures (DGS) have been symmetrically added to the ground plane in order to produce multiple resonating bands, accompanied by a partial ground plane to achieve a continuous operating bandwidth. The antenna consists of a T-shaped radiating patch with a coplanar waveguide (CPW) feed. The bottom part has a partial ground plane loaded with five symmetrical split-ring slots. Measured results of the antenna prototype show a wide bandwidth of 25.1-37.5 GHz. Moreover, the simulated peak gain of the antenna is 9.86 dBi at 36.8 GHz, and the efficiency is higher than 80% over the complete range of operation. The proposed antenna is thus a potential candidate for 5G wireless networks and applications. |
Intensional semantics of system T of Gödel | This paper is a contribution to the development of a theory of the behaviour of programs. We study an intensional behaviour of system T of Gödel that is devoted to capturing not which function is computed but how it is computed. This intensional behaviour is captured by a denotational semantics in the domain of lazy natural numbers. It is shown that all sequential algorithms definable in system T are intensional behaviours. This leads us to a general representation theorem asserting that we may compute every definable function of system T with the behaviour of a sequential algorithm using a higher-order term. |
Japanese study of tofogliflozin with type 2 diabetes mellitus patients in an observational study of the elderly (J‐STEP/EL): A 12‐week interim analysis | AIMS/INTRODUCTION
Sodium-glucose co-transporter 2 inhibitors are a promising treatment for type 2 diabetes mellitus, but are associated with concerns about specific adverse drug reactions. We carried out a 1-year post-marketing surveillance of tofogliflozin, a novel agent in this class, in Japanese elderly patients with type 2 diabetes mellitus and here report the results of a 12-week interim analysis, focusing on adverse drug reactions of special interest.
MATERIALS AND METHODS
The present prospective observational study included all type 2 diabetes mellitus patients aged ≥65 years who started tofogliflozin during the first 3 months after its launch. Data on demographic and baseline characteristics, clinical course and adverse events were collected.
RESULTS
Of 1,535 patients registered, 1,506 patients whose electronic case report forms were collected and who had at least one follow-up visit were included in the safety analysis at 12 weeks. A total of 178 of 1,506 patients (11.82%) had at least one adverse drug reaction to tofogliflozin. The incidences of the adverse drug reactions of special interest (polyuria/pollakiuria, volume depletion-related events, urinary tract infection, genital infection, skin disorders and hypoglycemia) were 2.19, 2.32, 1.33, 1.13, 1.46 and 0.73%, respectively. No new safety concerns were identified. Among patients evaluable for clinical effectiveness, the mean (standard deviation) glycated hemoglobin decreased by 0.39% (0.94%) from 7.65% (1.35%) at baseline to 7.25% (1.16%) at 12 weeks (P < 0.0001).
CONCLUSIONS
This interim analysis characterized the safety profile of tofogliflozin in Japanese elderly patients with type 2 diabetes mellitus during the early post-marketing period. |
Clustering the solar resource for grid management in island mode | We propose a novel methodology to select candidate locations for solar power plants that takes into account solar variability and geographical smoothing effects. The methodology includes the development of maps created by a clustering technique that determines regions of coherent solar quality as defined by a feature which considers both solar clearness and solar variability. An efficient combination of two well-known clustering algorithms, affinity propagation and k-means, is introduced in order to produce stable partitions of the data over a variety of numbers of clusters in a computationally fast and reliable manner. We use 15 years' worth of 30-min gridded GHI data for the island of Lanai in Hawaii to produce, validate and reproduce clustering maps. A family of appropriate numbers of clusters is obtained by evaluating the performance of three internal validity indices. We apply a correlation analysis to the family of solutions to determine the map segmentation that maximizes a clear interpretation of the distinctions between and within the emerging clusters. Having selected a single clustering, we validated it using a new dataset, demonstrating that the degree of similarity between the two partitions remains high at 90.91%. Finally, we show how the clustering map can be used in solar energy problems. First, we explore the effects of geographical smoothing in terms of the clustering maps by determining the average ramp ratio for two locations within and outside the same cluster, and identify the pair of clusters that shows the highest smoothing potential. Second, we demonstrate how the map can be used to select locations for GHI measurements to improve solar forecasting for a PV plant, showing that additional measurements from within the cluster where the PV plant is located can improve the forecast by 10%. |
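One plausible reading of the "efficient combination" of the two algorithms (our sketch using scikit-learn; the authors' exact coupling may differ): let affinity propagation discover exemplars, then seed k-means with k of them so the final partition is stable and fast to compute.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation, KMeans

def ap_seeded_kmeans(X, k, random_state=0):
    """Cluster rows of X into k groups, seeding k-means with AP exemplars."""
    ap = AffinityPropagation(random_state=random_state).fit(X)
    exemplars = ap.cluster_centers_              # candidate seeds found by AP
    if len(exemplars) < k:
        raise ValueError("AP found fewer exemplars than requested clusters")
    seeds = exemplars[:k]                        # illustrative choice of k seeds
    return KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)

# X: one row per grid location; the columns could be a clearness index and a
# variability measure, matching the solar quality feature described above.
```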
Mining large-scale smartphone data for personality studies | In this paper, we investigate the relationship between automatically extracted behavioral characteristics derived from rich smartphone data and self-reported Big-Five personality traits (extraversion, agreeableness, conscientiousness, emotional stability and openness to experience). Our data stem from the smartphones of 117 Nokia N95 users, collected over a continuous period of 17 months in Switzerland. From the analysis, we show that several aggregated features obtained from smartphone usage data can be indicators of the Big-Five traits. Next, we describe a machine learning method to detect the personality traits of a user based on smartphone usage. Finally, we study the benefits of using gender-specific models for this task. Beyond the psychological viewpoint, this study facilitates further research on the automated classification and use of personality traits for personalizing services on smartphones. |
Stippling by Example | In this work, we focus on stippling as an artistic style and discuss our technique for capturing and reproducing stipple features unique to an individual artist. We employ a texture synthesis algorithm based on the gray-level co-occurrence matrix (GLCM) of a texture field. This algorithm uses a texture similarity metric to generate stipple textures that are perceptually similar to input samples, allowing us to better capture and reproduce stipple distributions. First, we extract example stipple textures representing various tones in order to create an approximate tone map used by the artist. Second, we extract the stipple marks and distributions from the extracted example textures, generating both a lookup table of stipple marks and a texture representing the stipple distribution. Third, we use the distribution of stipples to synthesize similar distributions with slight variations using a numerical measure of the error between the synthesized texture and the example texture as the basis for replication. Finally, we apply the synthesized stipple distribution to a 2D grayscale image and place stipple marks onto the distribution, thereby creating a stippled image that is statistically similar to images created by the example artist. |
Factors associated with falls among older adults living in institutions | BACKGROUND
Falls have an enormous impact on older adults. Yet, there is insufficient evidence regarding the effectiveness of preventive interventions in this setting. The objectives were to measure the frequency of falls and associated factors among older people living in institutions.
METHODS
Data were obtained from a survey on a probabilistic sample of residents aged ≥65 years, drawn in 1998-99 from institutions of Madrid (Spain). Residents, their caregivers, and facility physicians were interviewed. Fall rates were computed based on the number of physician-reported falls in the preceding 30 days. Adjusted rate ratios were computed using negative binomial regression models, including age, sex, cognitive status, functional dependence, number of diseases, and polypharmacy.
RESULTS
The final sample comprised 733 residents. The fall rate was 2.4 falls per person-year (95% confidence interval [CI], 2.04-2.82). The strongest risk factor was number of diseases, with an adjusted rate ratio (RR) of 1.32 (95% CI, 1.17-1.50) for each additional diagnosis. Other variables associated with falls were: urinary incontinence (RR = 2.56 [95% CI, 1.32-4.94]); antidepressant use (RR = 2.32 [95% CI, 1.22-4.40]); arrhythmias (RR = 2.00 [95% CI, 1.05-3.81]); and polypharmacy (RR = 1.07 [95% CI, 0.95-1.21], for each additional medication). The attributable fraction for number of diseases (with reference to those with ≤ 1 condition) was 84% (95% CI, 45-95%).
CONCLUSIONS
Number of diseases was the main risk factor for falls in this population of institutionalized older adults. Other variables associated with falls, probably more amenable to preventive action, were urinary incontinence, antidepressants, arrhythmias, and polypharmacy. |
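For readers unfamiliar with the modelling step behind these adjusted rate ratios, a hedged sketch in Python (statsmodels; the variable names are invented, not from the study):

```python
import numpy as np
import statsmodels.api as sm

def fall_rate_ratios(falls, covariates):
    """Negative binomial regression of fall counts on resident covariates.

    falls: number of physician-reported falls per resident (30-day window)
    covariates: columns such as age, sex, cognitive status, functional
    dependence, number of diseases and number of medications.
    """
    model = sm.GLM(falls, sm.add_constant(covariates),
                   family=sm.families.NegativeBinomial())
    fit = model.fit()
    # exponentiated coefficients are the adjusted rate ratios
    return np.exp(fit.params)
```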
Vertical migration of dinoflagellates: model analysis of strategies, growth, and vertical distribution patterns | Dinoflagellates demonstrate a variety of vertical migration patterns that presumably give them a competitive advantage when nutrients are depleted in the surface layer of stratified waters. In this study, a simple quota-based model was used to examine the relationships between the vertical migration pattern and internal nutritional status, and to assess how external environmental conditions, such as mixing layer depth (MLD) and internal waves, can influence these relationships. Dinoflagellates may form subsurface aggregations or conduct vertical migration (diel or non-diel) in response to their internal nutrient quota, but within a limited physiological parameter space. The model was implemented in a 1D (vertical) domain using an individual-based modeling approach, tracking the change in nutrient quota and the trajectory of many individual cells in a water column. The model shows that dinoflagellate cells might change from one vertical migration pattern to another when the external environmental conditions change. Using the average net growth rate as an index of fitness, 2 migration strategies, photo-/geotaxis vs. quota-based migration, were assessed with regard to MLD and internal wave regime. It was found that dinoflagellates might choose different migration strategies under different mixing/stratification regimes. In addition, under the same environmental conditions, different species might display unique vertical migration patterns due to inherent physiological differences. This study reveals the sensitivity of dinoflagellate vertical migration to biological and physical factors and offers possible explanations for the various vertical distributions and migration patterns observed in the field. |
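A minimal sketch of a quota-based rule of the kind described (our illustration with Droop-type kinetics; all parameter values are assumptions, not the paper's):

```python
def growth_rate(quota, mu_max=1.0, q_min=0.05):
    """Droop-type growth: the rate rises as the internal quota exceeds q_min."""
    return mu_max * (1.0 - q_min / quota)

def swim_direction(quota, q_low=0.07, q_high=0.12):
    """Quota-based migration: descend toward deep nutrients when depleted,
    ascend toward light when replete, otherwise hold depth."""
    if quota < q_low:
        return -1   # swim down
    if quota > q_high:
        return +1   # swim up
    return 0        # aggregate at the current depth
```

In an individual-based run, each simulated cell updates its quota from local nutrient uptake and applies such a rule every time step; different parameter choices can then yield diel migration, non-diel migration, or subsurface aggregation.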
CD SLAM - continuous localization and mapping in a dynamic world | When performing large-scale perpetual localization and mapping, one faces problems like memory consumption and repetitive or dynamic scene elements that require robust data association. We propose a visual SLAM method which handles short- and long-term scene dynamics in large environments using a single camera only. Through visibility-dependent map filtering and efficient keyframe organization, we achieve a considerable performance gain with only a slightly more complex map representation. Experiments on a large, mixed indoor/outdoor dataset spanning a period of two weeks demonstrate the scalability and robustness of our approach. |
Clinical implications of genome-wide DNA methylation studies in acute myeloid leukemia | Acute myeloid leukemia (AML) is the most common type of acute leukemia in adults. AML is a heterogeneous malignancy characterized by distinct genetic and epigenetic abnormalities. Recent genome-wide DNA methylation studies have highlighted an important role of dysregulated methylation signature in AML from biological and clinical standpoint. In this review, we will outline the recent advances in the methylome study of AML and overview the impacts of DNA methylation on AML diagnosis, treatment, and prognosis. |
Host Defense against Viral Infection Involves Interferon Mediated Down-Regulation of Sterol Biosynthesis | Little is known about the protective role of inflammatory processes in modulating lipid metabolism in infection. Here we report an intimate link between the innate immune response to infection and regulation of the sterol metabolic network characterized by down-regulation of sterol biosynthesis by an interferon regulatory loop mechanism. In time-series experiments profiling genome-wide lipid-associated gene expression of macrophages, we show a selective and coordinated negative regulation of the complete sterol pathway upon viral infection or cytokine treatment with IFNγ or β but not TNF, IL1β, or IL6. Quantitative analysis at the protein level of selected sterol metabolic enzymes upon infection shows a similar level of suppression. Experimental testing of sterol metabolite levels using lipidomic-based measurements shows a reduction in metabolic output. On the basis of pharmacologic and RNAi inhibition of the sterol pathway we show augmented protection against viral infection, and in combination with metabolite rescue experiments, we identify the requirement of the mevalonate-isoprenoid branch of the sterol metabolic network in the protective response upon statin or IFNβ treatment. Conditioned media experiments from infected cells support an involvement of secreted type 1 interferon(s) to be sufficient for reducing the sterol pathway upon infection. Moreover, we show that infection of primary macrophages containing a genetic knockout of the major type I interferon, IFNβ, leads to only a partial suppression of the sterol pathway, while genetic knockout of the receptor for all type I interferon family members, ifnar1, or associated signaling component, tyk2, completely abolishes the reduction of the sterol biosynthetic activity upon infection. Levels of the proteolytically cleaved nuclear forms of SREBP2, a key transcriptional regulator of sterol biosynthesis, are reduced upon infection and IFNβ treatment at both the protein and de novo transcription level. The reduction in srebf2 gene transcription upon infection and IFN treatment is also found to be strictly dependent on ifnar1. Altogether these results show that type 1 IFN signaling is both necessary and sufficient for reducing the sterol metabolic network activity upon infection, thereby linking the regulation of the sterol pathway with interferon anti-viral defense responses. These findings bring a new link between sterol metabolism and interferon antiviral response and support the idea of using host metabolic modifiers of innate immunity as a potential antiviral strategy. |
Topiramate add-on treatment in schizophrenia: a randomised, double-blind, placebo-controlled clinical trial. | Glutamate antagonists such as topiramate have been proposed for the treatment of schizophrenia based on the glutamate hypothesis of the disorder, and their properties encourage exploration and possible development as medications for its treatment. A randomised, double-blind, placebo-controlled clinical trial was performed on 18- to 45-year-old patients with schizophrenia. Baseline information including vital signs, height, weight, smoking status, demographic characteristics, (past) psychiatric history, medication history and medication-related adverse effects was collected. Patients were randomly assigned to a topiramate or placebo group. Efficacy of medication was measured by administering the Positive and Negative Syndrome Scale (PANSS), and tolerability of treatment was recorded on day 0 (baseline), day 28 and day 56. PANSS scores (95% confidence interval) at baseline, day 28 and day 56 in the topiramate group were 96.87 (85.37-108.37), 85.68 (74.67-96.70) and 76.87 (66.06-87.69), respectively, compared with 101.87 (90.37-113.37), 100.31 (89.29-111.32) and 100.56 (89.74-111.37) in the placebo group. A general linear model for repeated-measures analysis showed that topiramate lowered PANSS scores significantly compared with the placebo group. Similar significant patterns of decline were found in all three subscales (negative, positive and general psychopathology). Clinical response (more than 20% reduction in PANSS) was significantly more frequent in topiramate-treated subjects than in controls (50% vs 12.5%). Topiramate can be an effective medication for controlling schizophrenic symptoms, considering its effect on negative symptoms and on controlling antipsychotic-associated weight gain. |
Extraction, isolation and characterization of bioactive compounds from plants' extracts. | Natural products from medicinal plants, either as pure compounds or as standardized extracts, provide unlimited opportunities for new drug leads because of their unmatched chemical diversity. Due to an increasing demand for chemical diversity in screening programs seeking therapeutic drugs from natural products, interest particularly in edible plants has grown throughout the world. Botanicals and herbal preparations for medicinal usage contain various types of bioactive compounds. The focus of this paper is on the analytical methodologies for the extraction, isolation and characterization of active ingredients in botanicals and herbal preparations. The common problems and key challenges in the extraction, isolation and characterization of these active ingredients are discussed. As extraction is the most important step in the analysis of the constituents present in botanicals and herbal preparations, the strengths and weaknesses of different extraction techniques are discussed. The analysis of bioactive compounds in plant extracts by common phytochemical screening assays, chromatographic techniques such as HPLC and TLC, and non-chromatographic techniques such as immunoassay and Fourier Transform Infrared (FTIR) spectroscopy is also discussed. |
ROS Regulation During Abiotic Stress Responses in Crop Plants | Abiotic stresses such as drought, cold, salt and heat reduce plant growth and cause loss of crop yield worldwide. Reactive oxygen species (ROS), including hydrogen peroxide (H2O2), superoxide anions (O2(•-)), hydroxyl radicals (OH•) and singlet oxygen ((1)O2), are by-products of physiological metabolism and are precisely controlled by enzymatic and non-enzymatic antioxidant defense systems. ROS accumulate significantly under abiotic stress conditions, causing oxidative damage and eventually cell death. Recently, ROS have also been recognized as key players in the complex signaling network of plant stress responses. The involvement of ROS in signal transduction implies coordinated regulatory networks that maintain ROS at non-toxic levels in a delicate balancing act between ROS production, involving ROS-generating enzymes and the unavoidable production of ROS during basic cellular metabolism, and ROS-scavenging pathways. Increasing evidence shows that ROS play crucial roles in the abiotic stress responses of crop plants by activating stress-response and defense pathways. More importantly, manipulating ROS levels provides an opportunity to enhance the stress tolerance of crop plants under a variety of unfavorable environmental conditions. This review presents an overview of current knowledge about the homeostatic regulation of ROS in crop plants. In particular, we summarize the essential proteins that are involved in the abiotic stress tolerance of crop plants through ROS regulation. Finally, the challenges toward improving abiotic stress tolerance through ROS regulation in crops are discussed. |
Schinzel-Giedion syndrome: report of splenopancreatic fusion and proposed diagnostic criteria. | We report on the 46th patient with Schinzel-Giedion syndrome (SGS) and the first observation of splenopancreatic fusion in this syndrome. In the antenatal period, a male fetus was found to have bilateral hydronephrosis. Postnatally, in keeping with a diagnosis of SGS, there were large fontanelles, ocular hypertelorism, a wide, broad forehead, midface retraction, a short, upturned nose, macroglossia, and a short neck. Other anomalies included cardiac defects, widened and dense long bone cortices, cerebral ventriculomegaly, and abnormal fundi. Splenopancreatic fusion, usually encountered in trisomy 13, was found on autopsy. Schinzel-Giedion syndrome is likely a monogenic condition for which neither the heritability pattern nor pathogenesis has yet been determined. A clinical diagnosis may be made by identifying the facial phenotype, including prominent forehead, midface retraction, and short, upturned nose, plus one of either of the two other major distinguishing features: typical skeletal abnormalities or hydronephrosis. Typical skeletal anomalies include a sclerotic skull base, wide supraoccipital-exoccipital synchondrosis, increased cortical density or thickness, and broad ribs. Other highly supportive features include neuroepithelial tumors (found in 17%), hypertrichosis, and brain abnormalities. Severe developmental delay and poor survival are constant features in reported patients. |
Discovering Basic Emotion Sets via Semantic Clustering on a Twitter Corpus | A plethora of words are used to describe the spectrum of human emotions, but how many emotions are there really, and how do they interact? Over the past few decades, several theories of emotion have been proposed, each based around the existence of a set of basic emotions, and each supported by an extensive variety of research including studies in facial expression, ethology, neurology and physiology. Here we present research based on a theory that people transmit their understanding of emotions through the language they use surrounding emotion keywords. Using a labelled corpus of over 21,000 tweets, six of the basic emotion sets proposed in the existing literature were analysed using Latent Semantic Clustering (LSC), evaluating the distinctiveness of the semantic meaning attached to each emotional label. We hypothesise that the more distinct the language used to express a certain emotion, the more distinct the perception (including proprioception) of that emotion, and thus the more basic the emotion. This allows us to select the dimensions best representing the entire spectrum of emotion. We find that Ekman's set, arguably the most frequently used for classifying emotions, is in fact the most semantically distinct overall. Next, taking all analysed (that is, previously proposed) emotion terms into account, we determine the optimal semantically irreducible basic emotion set using an iterative LSC algorithm. Our newly-derived set (Accepting, Ashamed, Contempt, Interested, Joyful, Pleased, Sleepy, Stressed) generates a 6.1% increase in distinctiveness over Ekman's set (Angry, Disgusted, Joyful, Sad, Scared). We also demonstrate how using LSC data can help visualise emotions. We introduce the concept of an Emotion Profile and briefly analyse compound emotions both visually and mathematically. |
A 20/20 Vision of the VLDB-2020? | A 20/20 vision in ophthalmology implies a perfect view of things that are in front of you. The term is also used to mean a perfect sight of the things to come. Here we focus on a speculative vision of the VLDB in the year 2020. This panel is the follow-up of the one I organised (with S. Navathe) at the Kyoto VLDB in 1986, with the title: "Anyone for a VLDB in the Year 2000?". In that panel, the members discussed the major advances made in the database area and conjectured on its future, following a concern of many researchers that the database area was running out of interesting research topics and therefore it might disappear into other research topics, such as software engineering, operating systems and distributed systems. That did not happen. |
Investigating the Relationship between Internet Privacy Concerns and Online Purchase Behavior | Many organizations now emphasize the use of technology that can help them get closer to consumers and build ongoing relationships with them. The ability to compile consumer data profiles has been made even easier with Internet technology. However, it is often assumed that consumers like to believe they can trust a company with their personal details. Lack of trust may cause consumers to have privacy concerns. Addressing such privacy concerns may therefore be crucial to creating stable and ultimately profitable customer relationships. Three specific privacy concerns that have been frequently identified as being of importance to consumers include unauthorized secondary use of data, invasion of privacy, and errors. Results of a survey study indicate that both errors and invasion of privacy have a significant inverse relationship with online purchase behavior. Unauthorized use of secondary data appears to have little impact. Managerial implications include the careful selection of communication channels for maximum impact, the maintenance of discrete “permission-based” contact with consumers, and accurate recording and handling of data. |
Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation | We consider the problem of learning deep neural networks (DNNs) for object category segmentation, where the goal is to label each pixel in an image as being part of a given object (foreground) or not (background). Deep neural networks are usually trained with simple loss functions (e.g., softmax loss). These loss functions are appropriate for standard classification problems where the performance is measured by the overall classification accuracy. For object category segmentation, the two classes (foreground and background) are very imbalanced. The intersection-over-union (IoU) is usually used to measure the performance of any object category segmentation method. In this paper, we propose an approach for directly optimizing this IoU measure in deep neural networks. Our experimental results on two object category segmentation datasets demonstrate that our approach outperforms DNNs trained with standard softmax loss. |
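A common way to make IoU differentiable, in the spirit of the approach described (our sketch; the paper's exact formulation may differ), is to replace hard counts with sums over predicted foreground probabilities:

```python
import numpy as np

def soft_iou_loss(p, y, eps=1e-7):
    """p: predicted foreground probabilities in [0, 1]; y: binary ground truth.
    Soft intersection and union make the IoU objective differentiable."""
    intersection = (p * y).sum()
    union = (p + y - p * y).sum()
    return 1.0 - intersection / (union + eps)

p = np.array([0.9, 0.8, 0.2, 0.1])
y = np.array([1.0, 1.0, 0.0, 0.0])
print(soft_iou_loss(p, y))  # ~0.26: the prediction matches the mask fairly well
```

Because both terms sum over the foreground only, the loss is largely insensitive to the dominant background class, unlike per-pixel softmax loss.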
PERSONAL GOALS, LIFE MEANING, AND VIRTUE: WELLSPRINGS OF A POSITIVE LIFE | As far as we know, humans are the only meaning-seeking species on the planet. Meaning-making is an activity that is distinctly human, a function of how the human brain is organized. The many ways in which humans conceptualize, create, and search for meaning have become a recent focus of behavioral science research on quality of life and subjective well-being. This chapter reviews the recent literature on meaning-making in the context of personal goals and life purpose. My intention is to document how meaningful living, expressed as the pursuit of personally significant goals, contributes to positive experience and to a positive life. |
CacheKit: Evading Memory Introspection Using Cache Incoherence | With the growing importance of networked embedded devices in the upcoming Internet of Things, new attacks targeting embedded OSes are emerging. ARM processors, which power over 60% of embedded devices, introduce a hardware security extension called TrustZone to protect secure applications in an isolated secure world that cannot be manipulated by a compromised OS in the normal world. Leveraging TrustZone technology, a number of memory integrity checking schemes have been proposed in the secure world to introspect malicious memory modification of the normal world. In this paper, we first discover and verify an ARM TrustZone cache incoherence behavior, which results in the cache contents of the two worlds, secure and non-secure, potentially being different even when they are mapped to the same physical address. Furthermore, code in one TrustZone world cannot access the cache content in the other world. Based on this observation, we develop a new rootkit called CacheKit that hides in the cache of the normal world and is able to evade memory introspection from the secure world. We implement a CacheKit prototype on Cortex-A8 processors after solving a number of challenges. First, we employ the Cache-as-RAM technique to ensure that the malicious code is only loaded into the CPU cache and not RAM. Thus, the secure world cannot detect the existence of the malicious code by examining the RAM. Second, we use the ARM processor's hardware support on cache settings to keep the malicious code persistent in the cache. Third, to evade introspection that flushes cache content back into RAM, we utilize physical addresses from the I/O address range that is not backed by any real I/O devices or RAM. The experimental results show that CacheKit can successfully evade memory introspection from the secure world and has small performance impacts on the rich OS. We discuss potential countermeasures to detect this type of rootkit attack. |
The Berlin brain-computer interface: EEG-based communication without subject training | The Berlin Brain-Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are 1) the use of well-established motor competences as control paradigms, 2) high-dimensional features from 128-channel electroencephalogram (EEG), and 3) advanced machine learning techniques. As reported earlier, our experiments demonstrate that very high information transfer rates can be achieved using the readiness potential (RP) when predicting the laterality of upcoming left- versus right-hand movements in healthy subjects. A more recent study showed that the RP similarly accompanies phantom movements in arm amputees, but the signal strength decreases with longer loss of the limb. In a complementary approach, oscillatory features are used to discriminate imagined movements (left hand versus right hand versus foot). In a recent feedback study with six healthy subjects with no or very little experience with BCI control, three subjects achieved an information transfer rate above 35 bits per minute (bpm), two further subjects achieved rates above 24 and 15 bpm, respectively, and one subject could not achieve any BCI control. These results are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials, even when compared to results of very well-trained subjects operating other BCI systems.
A risk management framework for software engineering practice | Formal risk analysis and management in software engineering is still an emerging part of project management. We provide a brief introduction to the concepts of risk management for software development projects, and then an overview of a new risk management framework. Risk management for software projects is intended to minimize the chances of unexpected events, or more specifically to keep all possible outcomes under tight management control. Risk management is also concerned with making judgments about how risk events are to be treated, valued, compared and combined. The ProRisk management framework is intended to account for a number of the key risk management principles required for managing the process of software development. It also provides a support environment to operationalize these management tasks. |
Nocturnal enuresis in Turkey: prevalence and accompanying factors in different socioeconomic environments. | AIM
To study the prevalence of nocturnal enuresis and the impact of associated familial factors in Turkish children with a different socioeconomic status.
METHODS
A specific questionnaire was distributed to 3,000 parents of primary school children (6-12 years old). Of these children, 1,500 attended primary schools in Umraniye, a suburban region of Istanbul (group 1), and the other 1,500 children visited schools in Suadiye, a well-developed part of Istanbul (group 2). The first part of the survey investigated the familial conditions of the children (financial status, family history of enuresis, and family size). The second part of the questionnaire surveyed the demographic and physical characteristics of the children. The last part was designed to investigate the opinions and beliefs of the parents about nocturnal enuresis and treatment modalities. The prevalence rates of nocturnal enuresis and associated familial factors of these children from two different regions of Istanbul were compared.
RESULTS
Of the 3,000 questionnaires distributed, 2,589 (86.3%) were returned and included in the final analysis. The mean age of group 1 and group 2 children was 8.88 +/- 1.4 and 8.9 +/- 1.5 years, respectively (p > 0.05). The gender of the subjects was equally distributed (48.6% males and 51.4% females). Enuresis was present in 334 children (25.5%) of group 1 and in 205 children (16%) of group 2; it was significantly more common in group 1 (p < 0.01). The families consisted of 4.69 +/- 1.4 and 4.1 +/- 1.1 persons, respectively (p < 0.01). A yearly income of USD 7,000 was reached by 54% of the families in group 2 but by only 0.7% in group 1 (p < 0.01). Only 26 enuretic children of group 1 (7.8%) and 22 of group 2 (10.8%) were noted to receive medical enuresis treatment, with no statistically significant difference between the groups (p > 0.01). The parents of the enuretic children from the suburban region of Istanbul were found to consider the condition a normal developmental entity; they believed that enuresis would resolve spontaneously and that no treatment was necessary. In contrast, the parents of the enuretic children in the well-developed region of the city believed that enuresis is a psychological problem and that intensive psychological assistance is essential for its management.
CONCLUSIONS
Our study indicates that the prevalence of nocturnal enuresis in Turkey is comparable to that reported in the literature. The parents consider that enuresis nocturna is not a fatal disorder, that the drugs used in the treatment may be harmful, and that no medical assistance is required. Trained health personnel and physicians should inform the parents about enuresis in order to prevent possible behavioral and self-esteem problems. |
The Effect of Heterogeneity on Coalition Formation in Iterated Request for Proposal Scenarios | This paper explores a general model of economic exchange between heterogeneous agents representing firms, traders, or other socioeconomic entities that self-organise into coalitions to face specific tasks. In particular, the work addresses coalition formation problems in which many tasks are addressed to the same population over time in an iterative fashion, giving the agents the possibility to organise themselves for specific tasks. The purpose of the paper is to describe the impact of task and skill heterogeneity in the population on the performance of the agents. Experiments are carried out for two common strategies from the economic world, namely competitive and conservative strategies. The results show that the competitive population outperforms the conservative population, and that the degree of heterogeneity has a direct effect on the advantage of the first strategy over the second. We analyse the experimental results by using a novel data mining technique called collaboration graphs.
Speech enhancement based on deep denoising autoencoder | We previously applied a deep autoencoder (DAE) to noise reduction and speech enhancement. However, that DAE was trained using only clean speech. In this study, by using noisy-clean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt the greedy layer-wise pretraining plus fine-tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or noisy-clean speech pairs transformed by the preceding AEs). Fine-tuning was done by stacking all AEs with the pretrained parameters as initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria were used in the performance evaluations. Experimental results show that increasing the depth of the DAE consistently improves performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on all three objective evaluations.
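A minimal sketch of one noisy-to-clean autoencoder layer in PyTorch is given below; it captures the core training idea (map noisy features to their clean counterparts) while omitting the layer-wise stacking details, and the dimensions and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

# One AE layer trained on noisy -> clean pairs; stacking several such
# pretrained layers and then fine-tuning the whole stack mirrors the
# strategy described in the abstract.
class DenoisingAE(nn.Module):
    def __init__(self, dim_in, dim_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(dim_hidden, dim_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE(dim_in=257, dim_hidden=512)   # e.g., spectral frames
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(noisy, clean):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)           # regress clean features from noisy input
    loss.backward()
    opt.step()
    return loss.item()
```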
A real-time extension to the Android platform | Android belongs to the leading operating systems for mobile devices, e.g., smartphones or tablets. The availability of Android's source code under the General Public License allows interesting developments and useful modifications of the platform by third parties, such as the integration of real-time support. This paper presents an extension of Android improving its real-time capabilities without loss of original Android functionality or compatibility with existing applications. In our approach we apply the RT_PREEMPT patch to the Linux kernel, modify essential Android components such as the Dalvik virtual machine, and introduce a new real-time interface for Android developers. The resulting Android system supports applications with real-time requirements, which can be implemented in the same way as non-real-time applications.
Patterns of third-molar agenesis and associated dental anomalies in an orthodontic population. | INTRODUCTION
The aim of this study was to investigate the frequency of dental anomalies in orthodontic patients with different patterns of third-molar agenesis, comparing them with patients without third-molar agenesis.
METHODS
A sample of 374 patients with agenesis of at least 1 third molar was divided into 4 groups according to the third-molar agenesis pattern, and a control group of 98 patients without third-molar agenesis was randomly selected from the patient archives. Panoramic radiographs and cast models were used to determine the associated dental anomalies, such as hypodontia, hyperdontia, impaction, dilaceration, microdontia, ectopic eruption, transposition, and transmigration. The Pearson chi-square and Fisher exact tests were used to determine the differences in the distribution of the associated dental anomalies among the groups.
RESULTS
The prevalence of agenesis of other teeth (11.2%, n = 42) was significantly greater in our study sample (groups 1-4) than in the control group (group 5) (4.1%, n = 4; P <0.05). When we compared the groups according to the various third-molar agenesis patterns, we found that agenesis of other teeth was more common in patients with agenesis of 3 and 4 third molars. In addition, the patients with agenesis of 4 third molars exhibited maxillary lateral-incisor microdontia more frequently. Another important finding was a higher prevalence of total dental anomalies in patients with agenesis of 3 and 4 third molars compared with the control group.
CONCLUSIONS
Permanent tooth agenesis, microdontia of maxillary lateral incisors, and total dental anomalies are more frequently associated with agenesis of 4 third molars than with the presence of third molars. |
SCANNING DYNAMIC COMPETITIVE LANDSCAPES : A MARKET-BASED AND RESOURCE-BASED FRAMEWORK | Heterogeneity among rivals implies that each firm faces a unique competitive set, despite overlapping market domains. This suggests the utility of a firm-level approach to competitor identification and analysis, particularly under dynamic environmental conditions. We take such an approach in developing a market-based and resource-based framework for scanning complex competitive fields. By facilitating a search for functional similarities among products and resources, the framework reveals relevant commonalities in an otherwise heterogeneous competitive set. Beyond its practical contribution, the paper also advances resource-based theory as a theory of competitive advantage. Most notably, we show that resource substitution conditions not only the sustainability of a competitive advantage, but the attainment of competitive advantage as well. With equifinality among resources of different types, the rareness condition for even temporary competitive advantage must include resource substitutes. It is not rareness in terms of resource type that matters, but rareness in terms of resource functionality.
Security analysis of an image encryption algorithm based on a DNA addition combining with chaotic maps | In this paper, we cryptanalyse an encryption algorithm that combines a DNA addition and a chaotic map to encrypt a gray-scale image. Our contribution consists, first, in demonstrating that the algorithm, as it is described, is non-invertible, which means that the receiver cannot decrypt the ciphered image even if he possesses the secret key. Then, a chosen-plaintext attack on the invertible encryption block is described, whereby the attacker can illegally decrypt the ciphered image given temporary access to the encryption machinery.
Learning Map-Independent Evaluation Functions for Real-Time Strategy Games | Real-time strategy (RTS) games have drawn great attention in the AI research community, for they offer a challenging and rich testbed for both machine learning and AI techniques. Due to their enormous state spaces and possible map configurations, learning good and generalizable representations is crucial to build agents that can perform well in complex RTS games. In this paper we present a convolutional neural network approach to learn an evaluation function that focuses on learning general features that are independent of the map configuration or size. We first train and evaluate the network on a winner prediction task on a dataset collected with a small set of fixed-size maps. Then we evaluate the network's generalizability to three sets of larger maps by using it as an evaluation function in the context of Monte Carlo Tree Search. Our results show that the presented architecture can successfully capture general and map-independent features applicable to more complex RTS situations.
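One standard way to obtain such map-size independence, and a plausible reading of the abstract (the paper's exact architecture may differ), is a fully convolutional network whose spatial output is reduced by global average pooling, so the same weights can evaluate states on maps of any size:

```python
import torch
import torch.nn as nn

# Fully convolutional evaluator with global pooling; the final scalar
# estimates the probability that player 1 wins from the given state.
class MapEvaluator(nn.Module):
    def __init__(self, in_channels=8):             # e.g., unit/terrain feature planes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, state):                       # state: (N, C, H, W), any H and W
        h = self.features(state)
        h = h.mean(dim=(2, 3))                      # global average pooling
        return torch.sigmoid(self.head(h))          # win probability in [0, 1]
```

Used inside Monte Carlo Tree Search, the scalar output can score leaf nodes in place of random playouts.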
A Protocol for Interledger Payments | We present a protocol for payments across payment systems. It enables secure transfers between ledgers and allows anyone with accounts on two ledgers to create a connection between them. Ledger-provided escrow removes the need to trust these connectors. Connections can be composed to enable payments between any ledgers, creating a global graph of liquidity or Interledger. Unlike previous approaches, this protocol requires no global coordinating system or blockchain. Transfers are escrowed in series from the sender to the recipient and executed using one of two modes. In the Atomic mode, transfers are coordinated using an ad-hoc group of notaries selected by the participants. In the Universal mode, there is no external coordination. Instead, bounded execution windows, participant incentives and a “reverse” execution order enable secure payments between parties without shared trust in any system or institution. |
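The escrowed-transfer chain can be made concrete with a toy, trust-free flow. The Python sketch below is a deliberately simplified model of the mechanics (no notaries, execution windows, or failure handling), and all class and method names are invented for illustration:

```python
# Toy model of escrowed transfers along a chain of connectors: funds are
# locked in escrow hop by hop from sender to recipient, then released in
# reverse order so each connector is paid only after its outgoing
# transfer has been fulfilled.
class Ledger:
    def __init__(self):
        self.escrow = {}                        # transfer_id -> (src, dst, amount)

    def prepare(self, tid, src, dst, amount):
        self.escrow[tid] = (src, dst, amount)   # funds locked, not yet delivered

    def execute(self, tid):
        return self.escrow.pop(tid)             # release escrowed funds to dst

def interledger_payment(ledgers, parties, amount):
    # Prepare escrows from the sender toward the recipient...
    for i, ledger in enumerate(ledgers):
        ledger.prepare(tid=i, src=parties[i], dst=parties[i + 1], amount=amount)
    # ...then execute in reverse, starting with the transfer to the recipient.
    for i, ledger in reversed(list(enumerate(ledgers))):
        ledger.execute(tid=i)
```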
Spatiotemporal Multi-Task Network for Human Activity Understanding | Recently, remarkable progress has been achieved in human action recognition and detection by using deep learning techniques. However, for action detection in real-world untrimmed videos, the accuracies of most existing approaches are still far from satisfactory, due to the difficulties in temporal action localization. On the other hand, spatiotemporal features are not well utilized in recent work on video analysis. To tackle these problems, we propose a spatiotemporal, multi-task, 3D deep convolutional neural network to detect (i.e., temporally localize and recognize) actions in untrimmed videos. First, we introduce a fusion framework which aims to extract video-level spatiotemporal features in the training phase, and we demonstrate the effectiveness of video-level features by evaluating our model on the human action recognition task. Then, under the fusion framework, we propose a spatiotemporal multi-task network, which has two sibling output layers for action classification and temporal localization, respectively. To obtain precise temporal locations, we present a novel temporal regression method to revise the proposal window which contains an action. Meanwhile, in order to better utilize the rich motion information in videos, we introduce a novel video representation, interlaced images, as an additional network input stream. As a result, our model outperforms state-of-the-art methods for both action recognition and detection on standard benchmarks.
Second-order complex random vectors and normal distributions | We formulate the causal/noncausal non-Gaussian multichannel autoregressive (AR) parameter estimation problem as a deconvolution problem. The super-exponential algorithm presented in a recent paper by Shalvi and Weinstein is generalized to the vector case. We present an adaptive implementation that is very attractive since it is higher-order statistics (HOS) based but does not present the high computational complexity of methods proposed up to now.
Intelligent Detection System for e-banking Phishing websites using Fuzzy Data Mining | Detecting and identifying e-banking phishing websites is a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, fuzzy data mining techniques can be an effective tool in assessing and identifying e-banking phishing websites, since they offer a more natural way of dealing with quality factors than exact values. In this paper, we present a novel approach to overcome the 'fuzziness' in e-banking phishing website assessment and propose an intelligent, resilient and effective model for detecting e-banking phishing websites. The proposed model is based on fuzzy logic combined with data mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying the phishing types and defining six e-banking phishing website attack criteria with a layer structure. A case study was applied to illustrate and simulate the phishing process. Our experimental results showed the significance and importance of the e-banking phishing website criteria (URL & Domain Identity) represented by layer one, and the varying influence of the phishing characteristic layers on the final e-banking phishing website rate. Keywords: Phishing, Fuzzy Logic, Data Mining, Classification, Association, Apriori, e-banking risk assessment
A Smart Educational Game to Model Personality Using Learning Analytics | Various research works have highlighted the importance of modeling a learner's personality to provide personalized computer-based learning. In particular, the questionnaire is the most commonly used method to model personality, but it can be long and unmotivating, which makes learners unwilling to take it. Therefore, this paper turns to Learning Analytics (LA) to implicitly model learners' personalities based on the traces they generate during the learning-playing process. In this context, an LA system and an educational game were developed. Forty-five participants (34 learners and 11 teachers) took part in an experiment to evaluate the accuracy of the learners' modeling results and the teachers' degree of satisfaction with this LA system. The obtained results highlighted that the LA system has a high level of accuracy and a "good" agreement degree compared to the paper questionnaire. Besides, the teachers found the LA system easy to use and useful, and they were willing to use it in the future.
The impact of DDoS and other security shocks on Bitcoin currency exchanges: evidence from Mt. Gox | We investigate how distributed denial-of-service (DDoS) attacks and other disruptions affect the Bitcoin ecosystem. In particular, we investigate the impact of shocks on trading activity at the leading Mt. Gox exchange between April 2011 and November 2013. We find that following DDoS attacks on Mt. Gox, the number of large trades on the exchange fell sharply. In particular, the distribution of the daily trading volume became less skewed (fewer big trades) and had smaller kurtosis on days following DDoS attacks. The results are robust to alternative specifications, as well as to restricting the data to activity prior to March 2013, i.e., the period before the first large appreciation in the price of and attention paid to Bitcoin.
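The distribution-shape comparison rests on standard moment estimators and can be reproduced on any daily volume series; the sketch below uses synthetic data purely for illustration (the real analysis used Mt. Gox trade records):

```python
import numpy as np
from scipy import stats

# Synthetic daily trading volumes for ordinary days vs. days following a
# DDoS attack; the paper's finding corresponds to lower skewness and
# kurtosis (fewer very large trades) in the post-attack distribution.
volumes_normal = np.random.lognormal(mean=3.0, sigma=1.2, size=500)
volumes_post_ddos = np.random.lognormal(mean=3.0, sigma=0.8, size=500)

for label, v in [("normal", volumes_normal), ("post-DDoS", volumes_post_ddos)]:
    print(label, "skew=%.2f" % stats.skew(v), "kurtosis=%.2f" % stats.kurtosis(v))
```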
AFrame: isolating advertisements from mobile applications in Android | Android uses a permission-based security model to restrict applications from accessing private data and privileged resources. However, the permissions are assigned at the application level, so even untrusted third-party libraries, such as advertisement, once incorporated, can share the same privileges as the entire application, leading to over-privileged problems.
We present AFrame, a developer-friendly method to isolate untrusted third-party code from host applications. The isolation achieved by AFrame covers not only process/permission isolation, but also display and input isolation. Our AFrame framework is implemented through a minimal change to the existing Android code base; our evaluation results demonstrate that it is effective in isolating the privileges of untrusted third-party code from applications with reasonable performance overhead.
Pedagogical utilization and assessment of the statistic online computational resource in introductory probability and statistics courses | Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results on the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction, and choice of technology to complete assignments. A learning styles assessment was completed at baseline. We used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment for individual classes; however, pooling the results across all courses and sections, the SOCR effects on the treatment groups were exceptionally robust and significant. Coupling these findings with a clear decrease in the variance of the quantitative examination measures in the treatment groups indicates that employing technology like SOCR in a sound pedagogical and scientific manner enhances the students' overall understanding and suggests better long-term knowledge retention.
AN OVERVIEW OF PREPROCESSING OF WEB LOG FILES FOR WEB USAGE MINING | With Internet usage gaining popularity and the steady growth of users, the World Wide Web has become a huge repository of data and serves as an important platform for the dissemination of information. Users' accesses to Web sites are stored in Web server logs. However, the data stored in the log files do not present an accurate picture of the users' accesses to the Web site. Hence, preprocessing of the Web log data is an essential and prerequisite phase before it can be used for knowledge-discovery or mining tasks. The preprocessed Web data can then be suitable for the discovery and analysis of useful information, referred to as Web mining. Web usage mining, a category of Web mining, is the application of data mining techniques to discover usage patterns from clickstream and associated data stored in one or more Web servers. This paper presents an overview of the various steps involved in the preprocessing stage.
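As a concrete, simplified illustration of the earliest preprocessing steps, parsing and data cleaning, the Python sketch below handles Common Log Format lines; the regular expression and filtering rules are typical choices, not prescriptions from the paper:

```python
import re

# Parse a Common Log Format line into its fields, then drop records that
# are irrelevant for usage mining (failed requests, embedded resources).
LOG_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) \S+" (\d{3}) \S+')

def parse_line(line):
    m = LOG_RE.match(line)
    if not m:
        return None
    ip, ts, method, url, status = m.groups()
    return {"ip": ip, "time": ts, "url": url, "status": int(status)}

def clean(records):
    for r in records:
        if r is None or r["status"] != 200:
            continue                      # drop failed or malformed requests
        if r["url"].endswith((".gif", ".jpg", ".png", ".css", ".js")):
            continue                      # drop embedded-resource hits
        yield r
```

Later stages such as user identification and sessionization (e.g., grouping one IP's hits with a 30-minute timeout) would then operate on the cleaned records.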
An Optimal Graph Theoretic Approach to Data Clustering: Theory and Its Application to Image Segmentation | Data collected from devices and the human condition may be used to forewarn of critical events such as machine/structural failure or, from brain/heart wave data, stroke. By monitoring the data and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (unstructured data) into discrete-phase-space states, and hence into a graph (structured data) for extraction of condition change.
A Birth-cohort testing intervention identified hepatitis c virus infection among patients with few identified risks: a cross-sectional study | BACKGROUND
International guidelines and U.S. guidelines prior to 2012 only recommended testing for hepatitis C virus (HCV) infection among patients at risk, but adherence to guidelines is poor, and the majority of those infected remain undiagnosed. A strategy to perform one-time testing of all patients born during 1945-1965, birth cohort testing, may diagnose HCV infection among patients whose risk remains unknown. We sought to determine if a birth-cohort testing intervention for HCV antibody positivity helped identify patients with fewer documented risk factors or medical indications than a pre-intervention, risk-based testing strategy.
METHODS
We used a cross-sectional design with retrospective electronic medical record review to examine patients identified as HCV antibody positive (Ab+) during a pre-intervention (risk-based) phase, the standard of care at the time, vs. a birth-cohort testing intervention phase. We compared demographic and clinical characteristics and HCV risk-associated factors among patients whose HCV Ab+ status was identified during the pre-intervention (risk-based testing) vs. post-intervention (birth-cohort testing) phases. Study subjects were patients identified as HCV Ab+ in the baseline (risk-based) and birth-cohort testing phases of the Hepatitis C Assessment and Testing (HepCAT) Project.
RESULTS
Compared to the risk-based phase, patients newly diagnosed as HCV Ab+ after the birth-cohort intervention were significantly less likely to have a history of any substance abuse (30.5% vs. 49.5%, p = 0.02), elevated alanine transaminase levels of > 40 U/L (22.0% vs. 46.7%, p = 0.002), or the composite of any risk-associated factor (55.9% vs. 79.0%, p = 0.002).
CONCLUSIONS
Birth-cohort testing is a useful strategy for identifying previously undiagnosed HCV Ab+ patients because it does not require providers to ask risk-based questions or patients to disclose risk behaviors, and it appears to identify HCV Ab+ status in patients who would not have been identified using a risk-based testing strategy.
Disorders of higher mental function due to single infarctions in the thalamus and in the area of the thalamofrontal tracts | Nine patients (five female and four male, mean age 58 years) had small infarcts in the thalamus (TH) or in the region of the thalamofrontal tracts producing acute mental disturbances, which in the acute phase of the insult consisted of dementia in seven cases and mild cognitive disturbances in two cases. The complex of mental changes was similar to that seen in “frontal syndrome” and was characterized largely by lack of spontaneity, adynamia, disorientation, loss of attention and memory, slowing of all mental processes, and lack of criticality and adequacy. Accompanying focal neurological symptoms were mild in seven patients and moderate or pronounced in two. In five patients, the severity of mental disturbances decreased with time. Computed tomography demonstrated small infarcts in the anterior or medial parts of the TH in seven patients and in the posteromedial parts of the anterior limb of the internal capsule, i.e., the thalamofrontal tracts, in two cases. In five cases, infarcts were located in the dominant hemisphere, with lesions in the non-dominant hemisphere in three and in both hemispheres in one. The positions of all foci corresponded to structures traversed by pathways connecting the TH and the lower part of the reticular formation to the frontal lobes. It is suggested that disconnection of these pathways leads to cognitive lesions or dementia because of functional inactivation of the frontal cortex.
Depression, anxiety and stress in dental students | Objectives
To measure the occurrence and levels of depression, anxiety and stress in undergraduate dental students using the Depression, Anxiety and Stress Scale (DASS-21).
Methods
This cross-sectional study was conducted in November and December of 2014. A total of 289 dental students were invited to participate, and 277 responded, resulting in a response rate of 96%. The final sample included 247 participants. Eligible participants were surveyed via a self-reported questionnaire that included the validated DASS-21 scale as the assessment tool and questions about demographic characteristics and methods for managing stress.
Results
Abnormal levels of depression, anxiety and stress were identified in 55.9%, 66.8% and 54.7% of the study participants, respectively. A multiple linear regression analysis revealed multiple predictors: gender (for anxiety b=-3.589, p=.016 and stress b=-4.099, p=.008), satisfaction with faculty relationships (for depression b=-2.318, p=.007; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), satisfaction with peer relationships (for depression b=-3.527, p<.001; anxiety b=-2.213, p=.004; and stress b=-2.854, p<.001), and dentistry as the first choice for field of study (for stress b=-2.648, p=.045). The standardized coefficients demonstrated the relationship and strength of the predictors for each subscale. To cope with stress, students engaged in various activities such as reading, watching television and seeking emotional support from others.
Conclusions
The high occurrence of depression, anxiety and stress among dental students highlights the importance of providing support programs and implementing preventive measures to help students, particularly those who are most susceptible to higher levels of these psychological conditions. |
A Framework for Multiple-Instance Learning | Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem. |
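The Diverse Density idea can be made concrete with its standard noisy-or formulation: a candidate concept point scores highly when every positive bag has at least one instance near it and all negative instances are far away. The NumPy sketch below scores one candidate point under a Gaussian-like instance model (scaling and optimization details are illustrative); in practice the objective is minimized by gradient descent started from instances of positive bags.

```python
import numpy as np

def neg_log_dd(t, pos_bags, neg_bags, scale=1.0):
    """Negative log Diverse Density of candidate concept point t.

    pos_bags, neg_bags: lists of (n_i, d) arrays of instances.
    """
    nll = 0.0
    for bag in pos_bags:
        d2 = np.sum((bag - t) ** 2, axis=1) * scale
        p_bag = 1.0 - np.prod(1.0 - np.exp(-d2))   # noisy-or: some instance is near t
        nll -= np.log(p_bag + 1e-12)
    for bag in neg_bags:
        d2 = np.sum((bag - t) ** 2, axis=1) * scale
        p_bag = np.prod(1.0 - np.exp(-d2))         # no negative instance is near t
        nll -= np.log(p_bag + 1e-12)
    return nll
```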
A Survey on Access Control Deployment | Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application. |
Smoothing spline estimation of variance functions | This article considers spline smoothing of variance functions. We focus on selection of the smoothing parameters and develop three direct data-driven methods: unbiased risk (UBR), generalized approximate cross-validation (GACV), and generalized maximum likelihood (GML). In addition to guaranteed convergence, simulations show that these direct methods perform better than existing indirect UBR, generalized cross-validation (GCV), and GML methods. The direct UBR and GML methods perform better than the GACV method. An application to array-based comparative genomic hybridization data illustrates the usefulness of the proposed methods. |
The information bottleneck method | We define the relevant information in a signal x ∈ X as being the information that this signal provides about another signal y ∈ Y. Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal x requires more than just predicting y; it also requires specifying which features of X play a role in the prediction. We formalize this problem as that of finding a short code for X that preserves the maximum information about Y. That is, we squeeze the information that X provides about Y through a ‘bottleneck’ formed by a limited set of codewords X̃. This constrained optimization problem can be seen as a generalization of rate distortion theory in which the distortion measure d(x, x̃) emerges from the joint statistics of X and Y. This approach yields an exact set of self-consistent equations for the coding rules X → X̃ and X̃ → Y. Solutions to these equations can be found by a convergent re-estimation method that generalizes the Blahut–Arimoto algorithm. Our variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere.
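For reference, the variational problem and its self-consistent solution take the following standard form (β sets the trade-off between compression and preserved relevant information, and Z(x, β) is a normalizer):

```latex
\min_{p(\tilde{x}\mid x)} \; \mathcal{L} = I(X;\tilde{X}) - \beta\, I(\tilde{X};Y)

% Self-consistent equations, iterated in Blahut--Arimoto fashion:
p(\tilde{x}\mid x) = \frac{p(\tilde{x})}{Z(x,\beta)}
  \exp\!\left( -\beta\, D_{\mathrm{KL}}\!\left[ p(y\mid x) \,\middle\|\, p(y\mid\tilde{x}) \right] \right)

p(\tilde{x}) = \sum_{x} p(x)\, p(\tilde{x}\mid x)

p(y\mid\tilde{x}) = \frac{1}{p(\tilde{x})} \sum_{x} p(y\mid x)\, p(\tilde{x}\mid x)\, p(x)
```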
Educational Software Applied in Teaching Electrocardiogram: A Systematic Review | Background
The electrocardiogram (ECG) is the most used diagnostic tool in medicine; in this sense, it is essential that medical undergraduates learn how to interpret it correctly while they are still on training. Naturally, they go through classic learning (e.g., lectures and speeches). However, they are not often efficiently trained in analyzing ECG results. In this regard, methodologies such as other educational support tools in medical practice, such as educational software, should be considered a valuable approach for medical training purposes.
Methods
We performed a literature review in six electronic databases, considering studies published before April 2017. The resulting set comprised 2,467 studies, from which 12 were initially selected. We then carried out a snowballing process to identify other relevant studies through the reference lists of these 12 studies, which yielded five more relevant studies, for a total of 17 articles that passed all stages and criteria.
Results
The results show that 52.9% of the software types were tutorials and 58.8% were designed to run locally on a computer. The subjects most frequently addressed were the teaching of electrophysiology and/or cardiac physiology and the identification of ECG patterns and/or arrhythmias.
Conclusions
We found positive results with the introduction of educational software for ECG teaching. However, there is a clear need for using higher quality research methodologies and the inclusion of appropriate controls, in order to obtain more precise conclusions about how beneficial the inclusion of such tools can be for the practices of ECG interpretation. |
How mangrove forests adjust to rising sea level. | Mangroves are among the most well described and widely studied wetland communities in the world. The greatest threats to mangrove persistence are deforestation and other anthropogenic disturbances that can compromise habitat stability and resilience to sea-level rise. To persist, mangrove ecosystems must adjust to rising sea level by building vertically or become submerged. Mangroves may directly or indirectly influence soil accretion processes through the production and accumulation of organic matter, as well as the trapping and retention of mineral sediment. In this review, we provide a general overview of research on mangrove elevation dynamics, emphasizing the role of the vegetation in maintaining soil surface elevations (i.e. position of the soil surface in the vertical plane). We summarize the primary ways in which mangroves may influence sediment accretion and vertical land development, for example, through root contributions to soil volume and upward expansion of the soil surface. We also examine how hydrological, geomorphological and climatic processes may interact with plant processes to influence mangrove capacity to keep pace with rising sea level. We draw on a variety of studies to describe the important, and often under-appreciated, role that plants play in shaping the trajectory of an ecosystem undergoing change. |
Safety of the Up-titration of Nifedipine GITS and Valsartan or Low-dose Combination in Uncontrolled Hypertension: the FOCUS Study. | PURPOSE
Doubling the dose of antihypertensive drugs is necessary to manage hypertension in patients whose disease is uncontrolled. However, this strategy can result in safety issues. This study compared the safety and efficacy of up-titration of the nifedipine gastrointestinal therapeutic system (GITS) with up-titration of valsartan monotherapy; these were also compared with low-dose combinations of the two therapies.
METHODS
This prospective, open-label, randomized, active-controlled, multicenter study lasted 8 weeks. If patients did not meet the target blood pressure (BP) after 4 weeks of treatment with low-dose monotherapy, they were randomized to up-titration of the nifedipine GITS dose from 30 mg (N30) to 60 mg or valsartan from 80 mg to 160 mg or they were randomized to receive a low-dose combination of N30 and valsartan 80 mg for another 4 weeks. BP variability was assessed by using the SD or the %CV of the short-term BP measured at clinic.
FINDINGS
Of the 391 patients (20 to 70 years of age with stage II or higher hypertension) screened for study inclusion, 362 patients who had 3 BP measurements were enrolled. The reduction in the mean systolic/diastolic BP from baseline to week 4 was similar in both low-dose monotherapy groups, with either N30 or valsartan 80 mg. BP variability (SD) was unchanged with either therapy, but the %CV was slightly increased in the N30 group. There was no significant difference in BP variability, in either SD or %CV, between responders and nonresponders to each monotherapy, despite the significant difference in the mean BP changes. Up-titration of nifedipine GITS from 30 to 60 mg produced an additional BP reduction, but this effect was not seen with up-titration of valsartan from 80 to 160 mg. Although the difference in BP was obvious between high-dose nifedipine GITS and valsartan, BP variability was unchanged between the 2 drugs and was similar to that of the low-dose combinations. There was a low rate of adverse events in all treatment groups. In addition, escalating the dose of either nifedipine GITS or valsartan revealed an occurrence of adverse effects similar to that of low-dose monotherapy or the low-dose combination.
IMPLICATIONS
Compared with up-titration of the angiotensin receptor blocker valsartan, up-titration of the calcium channel blocker nifedipine GITS raised no additional safety concerns and produced better mean reductions in BP without affecting short-term BP variability. ClinicalTrials.gov identifier: NCT01071122.
Rule Learning from Knowledge Graphs Guided by Embedding Models | Rules over a Knowledge Graph (KG) capture interpretable patterns in data, and various methods for rule learning have been proposed. Since KGs are inherently incomplete, rules can be used to deduce missing facts. Statistical measures for learned rules, such as confidence, reflect rule quality well when the KG is reasonably complete; however, these measures might be misleading otherwise. It is therefore difficult to learn high-quality rules from the KG alone, and scalability dictates that only a small set of candidate rules be generated, so the ranking and pruning of candidate rules is a major problem. To address this issue, we propose a rule learning method that utilizes probabilistic representations of missing facts. In particular, we iteratively extend rules induced from a KG by relying on feedback from a precomputed embedding model over the KG and from external information sources, including text corpora. Experiments on real-world KGs demonstrate the effectiveness of our novel approach both with respect to the quality of the learned rules and the fact predictions that they produce.
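One plausible rendering of this feedback loop, sketched below with invented interfaces (rule.apply and embedding_score are assumptions, not the authors' API), blends a rule's classical KG-based confidence with the average embedding plausibility of the novel facts it predicts:

```python
# Hybrid rule quality: candidate rules mined from the KG are re-ranked by
# how plausible a precomputed embedding model finds their new predictions.
def hybrid_confidence(rule, kg, embedding_score, weight=0.5):
    predictions = rule.apply(kg)                      # facts the rule derives
    known = [p for p in predictions if p in kg]
    novel = [p for p in predictions if p not in kg]
    std_conf = len(known) / max(len(predictions), 1)  # classical confidence
    if not novel:
        return std_conf
    emb_conf = sum(embedding_score(p) for p in novel) / len(novel)
    return (1 - weight) * std_conf + weight * emb_conf
```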
Missing data techniques in analogy-based software development effort estimation | Missing Data (MD) is a widespread problem that can affect the ability to use data to construct effective software development effort prediction systems. This paper investigates the use of missing data (MD) techniques with two analogy-based software development effort estimation techniques: Classical Analogy and Fuzzy Analogy. More specifically, we analyze the predictive performance of these two analogy-based techniques when using toleration, deletion or k-nearest neighbors (KNN) imputation techniques. A total of 1,512 experiments were conducted involving seven data sets, three MD techniques (toleration, deletion and KNN imputation), three missingness mechanisms (MCAR: missing completely at random, MAR: missing at random, NIM: non-ignorable missing), and MD percentages from 10 percent to 90 percent. The results suggest that Fuzzy Analogy generates more accurate estimates in terms of the Standardized Accuracy measure (SA) than Classical Analogy regardless of the MD technique, the data set used, the missingness mechanism or the MD percentage. Moreover, this study found that the use of KNN imputation, rather than toleration or deletion, may improve the prediction accuracy of both analogy-based techniques. However, toleration, deletion and KNN imputation are affected by the missingness mechanism and the MD percentage, both of which have a strong negative impact upon effort prediction accuracy. |
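For the imputation step, the KNN technique in the study corresponds to standard k-nearest-neighbor imputation. A minimal, self-contained illustration with scikit-learn on synthetic data (the MCAR simulation and all parameters are illustrative) might look like this:

```python
import numpy as np
from sklearn.impute import KNNImputer

# Simulate 30% MCAR missingness on a synthetic effort data set, then fill
# each hole with the mean value of the 5 most similar complete projects.
X = np.random.rand(100, 6)                    # 100 projects, 6 cost drivers
mask = np.random.rand(*X.shape) < 0.3         # missing completely at random
X_missing = X.copy()
X_missing[mask] = np.nan

imputer = KNNImputer(n_neighbors=5)
X_imputed = imputer.fit_transform(X_missing)
```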
Prebiopsy Multiparametric Magnetic Resonance Imaging for Prostate Cancer Diagnosis in Biopsy-naive Men with Suspected Prostate Cancer Based on Elevated Prostate-specific Antigen Values: Results from a Randomized Prospective Blinded Controlled Trial. | BACKGROUND
Multiparametric magnetic resonance imaging (MP-MRI) may improve the detection of clinically significant prostate cancer (PCa).
OBJECTIVE
To compare MP-MRI transrectal ultrasound (TRUS)-fusion targeted biopsy with routine TRUS-guided random biopsy for overall and clinically significant PCa detection among patients with suspected PCa based on prostate-specific antigen (PSA) values.
DESIGN, SETTING, AND PARTICIPANTS
This institutional review board-approved, single-center, prospective, randomized controlled trial (April 2011 to December 2014) included 130 biopsy-naive patients referred for prostate biopsy based on PSA values (PSA <20 ng/ml or free-to-total PSA ratio ≤0.15 and PSA <10 ng/ml). Patients were randomized 1:1 to the MP-MRI or control group. Patients in the MP-MRI group underwent prebiopsy MP-MRI followed by 10- to 12-core TRUS-guided random biopsy and cognitive MRI/TRUS fusion targeted biopsy. The control group underwent TRUS-guided random biopsy alone.
INTERVENTION
MP-MRI 3-T phased-array surface coil.
OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS
The primary outcome was the number of patients with biopsy-proven PCa in the MP-MRI and control groups. Secondary outcome measures included the number of positive prostate biopsies and the proportion of clinically significant PCa in the MP-MRI and control groups. Between-group analyses were performed.
RESULTS AND LIMITATIONS
Overall, 53 and 60 patients were evaluable in the MP-MRI and control groups, respectively. The overall PCa detection rate and the clinically significant cancer detection rate were similar between the MP-MRI and control groups, respectively (64% [34 of 53] vs 57% [34 of 60]; 7.5% difference [95% confidence interval (CI), -10 to 25], p=0.5, and 55% [29 of 53] vs 45% [27 of 60]; 9.7% difference [95% CI, -8.5 to 27], p=0.8). The PCa detection rate was higher than assumed during the planning of this single-center trial.
CONCLUSIONS
MP-MRI/TRUS-fusion targeted biopsy did not improve PCa detection rate compared with TRUS-guided biopsy alone in patients with suspected PCa based on PSA values.
PATIENT SUMMARY
In this randomized clinical trial, additional prostate magnetic resonance imaging (MRI) before prostate biopsy appeared to offer similar diagnostic accuracy compared with routine transrectal ultrasound-guided random biopsy in the diagnosis of prostate cancer. Similar numbers of cancers were detected with and without MRI.
TRIAL REGISTRATION
ClinicalTrials.gov identifier: NCT01357512. |
IEEE standard 1149.6 implementation for a XAUI-to-serial 10-Gbps transceiver | The design, implementation and verification of IEEE standard 1149.6 IP for a transceiver manufactured with 90 nm technology and using current-mode logic (CML) are challenging because (i) CML has a high operating frequency, (ii) CML has a very low operating voltage range, and (iii) CML is inherently a differential type of circuitry. This work describes how the major building blocks of IEEE standard 1149.6 IP, such as the input test receiver, the boundary scan register containing new AC boundary scan cells, the output test signal generation circuitry, and the modified TAP controller, were implemented and verified. Third-party CAD tools typically used for IEEE standard 1149.1 IP generation were used for this implementation.
FitLoc: Fine-Grained and Low-Cost Device-Free Localization for Multiple Targets Over Various Areas | Many emerging applications have driven the fast development of the device-free localization (DfL) technique, which does not require the target to carry any wireless devices. Most current DfL approaches have two main drawbacks in practical applications. First, since the pre-calibrated received signal strength (RSS) in each location (i.e., the radio-map) of a specific area cannot be directly applied to new areas, manual calibration for different areas leads to a high human effort cost. Second, a large number of RSS measurements are needed to accurately localize the targets, which causes a high communication cost, and the variety of areas further exacerbates this problem. This paper proposes FitLoc, a fine-grained and low-cost DfL approach that can localize multiple targets over various areas, especially in outdoor environments and similarly furnished indoor environments. FitLoc unifies the radio-map over various areas through a rigorously designed transfer scheme, thus greatly reducing the human effort cost. Furthermore, benefiting from compressive sensing theory, FitLoc collects only a few RSS measurements and performs fine-grained localization, thus reducing the communication cost. Theoretical analyses validate the effectiveness of the problem formulation, and a bound on the localization error is provided. Extensive experimental results illustrate the effectiveness and robustness of FitLoc.
How mindfulness changed my sleep: focus groups with chronic insomnia patients | BACKGROUND
Chronic insomnia is a major public health problem affecting approximately 10% of adults. Use of meditation and yoga to develop mindful awareness ('mindfulness training') may be an effective approach to treat chronic insomnia, with sleep outcomes comparable to nightly use of prescription sedatives, but more durable and with minimal or no side effects. The purpose of this study was to understand mindfulness training as experienced by patients with chronic insomnia, and suggest procedures that may be useful in optimizing sleep benefits.
METHODS
Adults (N = 18) who completed an 8-week mindfulness-based stress reduction (MBSR) program as part of a randomized, controlled clinical trial to evaluate MBSR as a treatment for chronic insomnia were invited to participate in post-trial focus groups. Two groups were held. Participants (n = 9) described how their sleep routine, thoughts and emotions were affected by MBSR and about utility (or not) of various mindfulness techniques. Groups were audio-recorded, transcribed and analyzed using content analysis.
RESULTS
Four themes were identified: the impact of mindfulness on sleep and motivation to adopt a healthy sleep lifestyle; benefits of mindfulness on aspects of life beyond sleep; challenges and successes in adopting mindfulness-based practices; and the importance of group sharing and support. Participants said they were not sleeping more, but sleeping better, waking more refreshed, feeling less distressed about insomnia, and better able to cope when it occurred. Some participants experienced the course as a call to action, and for them, practicing meditation and following sleep hygiene guidelines became priorities. Motivation to sustain behavioral changes was reinforced by feeling physically better and more emotionally stable, and seeing others in the MBSR class improve. The body scan was identified as an effective tool to enable falling asleep faster. Participants described needing to continue practicing mindfulness to maintain benefits.
CONCLUSIONS
First-person accounts are consistent with published trial results of positive impacts of MBSR on sleep measured by sleep diary, actigraphy, and self-report sleep scales. Findings indicate that mindfulness training in a group format, combined with sleep hygiene education, is important for effective application of MBSR as a treatment for chronic insomnia. |
Recommendation Systems for Software Engineering | Software development can be challenging because of the large information spaces that developers must navigate. Without assistance, developers can become bogged down and spend a disproportionate amount of their time seeking information at the expense of other value-producing tasks. Recommendation systems for software engineering (RSSEs) are software tools that can assist developers with a wide range of activities, from reusing code to writing effective bug reports. The authors provide an overview of recommendation systems for software engineering: what they are, what they can do for developers, and what they might do in the future. |
Spectral reflectance of coral reef bottom-types worldwide and implications for coral reef remote sensing | Coral reef benthic communities are mosaics of individual bottom-types that are distinguished by their taxonomic composition and functional roles in the ecosystem. Knowledge of community structure is essential to understanding many reef processes. To develop techniques for identification and mapping of reef bottom-types using remote sensing, we measured 13,100 in situ optical reflectance spectra (400–700 nm, 1-nm intervals) of 12 basic reef bottom-types in the Atlantic, Pacific, and Indian Oceans: fleshy (1) brown, (2) green, and (3) red algae; non-fleshy (4) encrusting calcareous and (5) turf algae; (6) bleached, (7) blue, and (8) brown hermatypic coral; (9) soft/gorgonian coral; (10) seagrass; (11) terrigenous mud; and (12) carbonate sand. Each bottom-type exhibits characteristic spectral reflectance features that are conservative across biogeographic regions. Most notable are the brightness of carbonate sand and local extrema near 570 nm in blue (minimum) and brown (maximum) corals. Classification function analyses for the 12 bottom-types achieve mean accuracies of 83%, 76%, and 71% for full-spectrum data (301-wavelength), 52-wavelength, and 14-wavelength subsets, respectively. The distinguishing spectral features for the 12 bottom-types exist in well-defined, narrow (10–20 nm) wavelength ranges and are ubiquitous throughout the world. We reason that spectral reflectance features arise primarily as a result of spectral absorption processes. Radiative transfer modeling shows that in typically clear coral reef waters, dark substrates such as corals have a depth-of-detection limit on the order of 10–20 m. Our results provide the foundation for design of a sensor with the purpose of assessing the global status of coral reefs.
Eigenvector method for maximum-likelihood estimation of phase errors in synthetic-aperture-radar imagery | We develop a maximum-likelihood (ML) algorithm for estimation and correction (autofocus) of phase errors induced in synthetic-aperture-radar (SAR) imagery. Here, M pulse vectors in the range-compressed domain are used as input for simultaneously estimating M − 1 phase values across the aperture. The solution involves an eigenvector of the sample covariance matrix of the range-compressed data. The estimator is then used within the basic structure of the phase gradient autofocus (PGA) algorithm, replacing the original phase-estimation kernel. We show that, in practice, the new algorithm provides excellent restorations of defocused SAR imagery, typically in only one or two iterations. The performance of the new phase estimator is demonstrated essentially to achieve the Cramér-Rao lower bound on estimation-error variance for all but small values of target-to-clutter ratio. We also show that for the case in which M is equal to 2, the ML estimator is similar to that of the original PGA method but achieves better results in practice, owing to a bias inherent in the original PGA phase-estimation kernel. Finally, we discuss the relationship of these algorithms to the shear-averaging and spatial-correlation methods, two other phase-correction techniques that utilize the same phase-estimation kernel but that produce substantially poorer performance because they do not employ several fundamental signal-processing steps that are critical to the algorithms of the PGA class.
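The core computation is compact enough to sketch in NumPy: form the sample covariance of the M range-compressed pulse vectors and read the phase-error estimate off its principal eigenvector. This is a simplified rendering of the estimator; the windowing, circular shifting, and iteration loop of the full PGA structure are omitted.

```python
import numpy as np

def estimate_phase(G):
    """Phase-error estimate from range-compressed SAR data.

    G: complex array of shape (M, N) -- M aperture positions, N range bins.
    Returns M relative phase values (the first is used as the reference).
    """
    C = G @ G.conj().T / G.shape[1]        # M x M sample covariance matrix
    w, V = np.linalg.eigh(C)               # Hermitian eigendecomposition
    v = V[:, -1]                           # eigenvector of the largest eigenvalue
    phi = np.angle(v)
    return phi - phi[0]                    # phases relative to the first pulse
```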
Automatic Seizure Detection Based on Time-Frequency Analysis and Artificial Neural Networks | The recording of seizures is of primary interest in the evaluation of epileptic patients. A seizure is a phenomenon of rhythmic discharge from either a local area or the whole brain, and an individual episode usually lasts from seconds to minutes. Since seizures, in general, occur infrequently and unpredictably, automatic detection of seizures during long-term electroencephalogram (EEG) recordings is highly recommended. As EEG signals are nonstationary, the conventional methods of frequency analysis are not successful for diagnostic purposes. This paper presents a method of analysis of EEG signals which is based on time-frequency analysis. Initially, selected segments of the EEG signals are analyzed using time-frequency methods, and several features are extracted for each segment, representing the energy distribution in the time-frequency plane. Then, those features are used as input to an artificial neural network (ANN), which provides the final classification of the EEG segments concerning the existence of seizures or not. We used a publicly available dataset in order to evaluate our method, and the evaluation results are very promising, indicating overall accuracy from 97.72% to 100%.
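A reduced version of the feature-extraction step might look like the following; the sampling rate, grid size, and spectrogram settings are illustrative assumptions, and the resulting energy features would feed a small feed-forward ANN classifier:

```python
import numpy as np
from scipy import signal

def tf_features(segment, fs=173.61, t_bins=4, f_bins=4):
    """Summarize an EEG segment by energies in a time-frequency grid."""
    f, t, S = signal.spectrogram(segment, fs=fs)
    power = np.abs(S)
    f_idx = np.array_split(np.arange(len(f)), f_bins)
    t_idx = np.array_split(np.arange(len(t)), t_bins)
    # Total energy in each (frequency band, time window) cell of the grid.
    return np.array([power[np.ix_(fi, ti)].sum()
                     for fi in f_idx for ti in t_idx])
```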
Authorship Analysis of the Zeus Botnet Source Code | Authorship analysis has been used successfully to analyse the provenance of source code files in previous studies. The source code for Zeus, one of the most damaging and effective botnets to date, was leaked in 2011. In this research, we analyse the source code through the lens of authorship clustering, aiming to estimate how many people wrote this malware and what their roles were. The research provides insight into the structure that went into creating Zeus and its evolution over time. The work has the potential to be used to link the malware with other malware written by the same authors, helping investigations, classification, deterrence and detection.
An evaluation of reading comprehension of expository text in adults with traumatic brain injury. | PURPOSE
This project was conducted to obtain information about reading problems of adults with traumatic brain injury (TBI) with mild-to-moderate cognitive impairments and to investigate how these readers respond to reading comprehension strategy prompts integrated into digital versions of text.
METHOD
Participants from 2 groups, adults with TBI (n = 15) and matched controls (n = 15), read 4 different 500-word expository science passages linked to either a strategy prompt condition or a no-strategy prompt condition. The participants' reading comprehension was evaluated using sentence verification and free recall tasks.
RESULTS
The TBI and control groups exhibited significant differences on 2 of the 5 reading comprehension measures: paraphrase statements on a sentence verification task and communication units on a free recall task. Unexpected group differences were noted on the participants' prerequisite reading skills. For the within-group comparison, participants showed significantly higher reading comprehension scores on 2 free recall measures: words per communication unit and type-token ratio. There were no significant interactions.
CONCLUSION
The results help to elucidate the nature of reading comprehension in adults with TBI with mild-to-moderate cognitive impairments and endorse further evaluation of reading comprehension strategies as a potential intervention option for these individuals. Future research is needed to better understand how individual differences influence a person's reading and response to intervention. |
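Two of the free recall measures named above are simple to compute once a transcript has been segmented into communication units (C-units). A minimal sketch, assuming whitespace tokenization, which simplifies the study's actual scoring procedure:

```python
def free_recall_measures(c_units):
    """Compute words per communication unit and type-token ratio from a
    recall transcript already segmented into C-units."""
    tokens = [w.lower().strip(".,;!?") for cu in c_units for w in cu.split()]
    words_per_cu = len(tokens) / len(c_units)
    type_token_ratio = len(set(tokens)) / len(tokens)
    return words_per_cu, type_token_ratio

print(free_recall_measures(["The cell membrane controls transport",
                            "Proteins in the membrane move molecules"]))
```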
A re-examination of relevance: toward a dynamic, situational definition | Although relevance judgments are fundamental to the design and evaluation of all information retrieval systems, information scientists have not reached a consensus in defining the central concept of relevance. In this paper we ask two questions: What is the meaning of relevance? and What role does relevance play in information behavior? We attempt to address these questions by reviewing literature over the last 30 years that presents various views of relevance as topical, user-oriented, multidimensional, cognitive, and dynamic. We then discuss traditional assumptions on which most research in the field has been based and begin building a case for an approach to the problem of definition based on alternative assumptions. The dynamic, situational approach we suggest views the user-regardless of system-as the central and active determinant of the dimensions of relevance. We believe that relevance is a multidimensional concept; that it is dependent on both internal (cognitive) and external (situational) factors; that it is based on a dynamic human judgment process; and that it is a complex but systematic and measurable phenomenon. |
Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks | We can better understand deep neural networks by identifying which features each of their neurons have learned to detect. To do so, researchers have created Deep Visualization techniques including activation maximization, which synthetically generates inputs (e.g. images) that maximally activate each neuron. A limitation of current techniques is that they assume each neuron detects only one type of feature, but we know that neurons can be multifaceted, in that they fire in response to many different types of features: for example, a grocery store class neuron must activate either for rows of produce or for a storefront. Previous activation maximization techniques constructed images without regard for the multiple different facets of a neuron, creating inappropriate mixes of colors, parts of objects, scales, orientations, etc. Here we introduce an algorithm that explicitly uncovers the multiple facets of each neuron by producing a synthetic visualization of each of the types of images that activate a neuron. We also introduce regularization methods that produce state-of-the-art results in terms of the interpretability of images obtained by activation maximization. By separately synthesizing each type of image a neuron fires in response to, the visualizations have more appropriate colors and coherent global structure. Multifaceted feature visualization thus provides a clearer and more comprehensive description of the role of each neuron. (Figure 1 shows synthetic visualizations of eight feature facets of the same "grocery store" class neuron, alongside training set images that resemble each facet.) |
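The basic operation underlying this kind of visualization is activation maximization by gradient ascent in input space; multifaceted visualization runs it from different initializations (e.g., means of clusters of training images that activate the neuron). The PyTorch sketch below shows only the ascent step, with a plain L2 regularizer standing in for the stronger priors the paper introduces; `model`, the output indexing, and the image shape are assumptions.

```python
import torch

def activation_maximization(model, neuron_index, init_img, steps=200, lr=0.05,
                            l2_weight=1e-4):
    """Gradient-ascent activation maximization sketch. Assumes `model`
    maps a (1, C, H, W) image to a (1, num_units) activation vector."""
    img = init_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        act = model(img)[0, neuron_index]           # activation of target neuron
        loss = -act + l2_weight * img.pow(2).sum()  # maximize act, keep img small
        loss.backward()
        opt.step()
    return img.detach()

# e.g. init_img = torch.randn(1, 3, 224, 224) for a typical image classifier;
# different init_img choices yield the different facets.
```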
Bleu: a Method for Automatic Evaluation of Machine Translation | Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. |
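For readers who want the mechanics, here is a minimal single-sentence BLEU sketch: clipped (modified) n-gram precisions combined by a geometric mean, multiplied by a brevity penalty. Corpus-level pooling and smoothing, which the full method specifies, are omitted.

```python
import math
from collections import Counter

def bleu(candidate, references, max_n=4):
    """Single-sentence BLEU sketch (unsmoothed)."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_p = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        max_ref = Counter()
        for ref in refs:
            ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
            for g, cnt in ref_ngrams.items():
                max_ref[g] = max(max_ref[g], cnt)
        clipped = sum(min(cnt, max_ref[g]) for g, cnt in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        if clipped == 0 or total == 0:
            return 0.0                    # unsmoothed BLEU vanishes if any p_n = 0
        log_p.append(math.log(clipped / total))
    closest = min(refs, key=lambda r: abs(len(r) - len(cand)))
    bp = 1.0 if len(cand) > len(closest) else math.exp(1 - len(closest) / len(cand))
    return bp * math.exp(sum(log_p) / max_n)

# All n-grams match but the candidate is short, so only the brevity penalty bites:
print(bleu("the cat sat on the mat", ["the cat sat on the mat in the hall"]))  # ~0.607
```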
Visualization and Visual Analysis of Multifaceted Scientific Data: A Survey | Visualization and visual analysis play important roles in exploring, analyzing, and presenting scientific data. In many disciplines, data and model scenarios are becoming multifaceted: data are often spatiotemporal and multivariate; they stem from different data sources (multimodal data), from multiple simulation runs (multirun/ensemble data), or from multiphysics simulations of interacting phenomena (multimodel data resulting from coupled simulation models). Also, data can be of different dimensionality or structured on various types of grids that need to be related or fused in the visualization. This heterogeneity of data characteristics presents new opportunities as well as technical challenges for visualization research. Visualization and interaction techniques are thus often combined with computational analysis. In this survey, we study existing methods for visualization and interactive visual analysis of multifaceted scientific data. Based on a thorough literature review, a categorization of approaches is proposed. We cover a wide range of fields and discuss to which degree the different challenges are matched with existing solutions for visualization and visual analysis. This leads to conclusions with respect to promising research directions, for instance, to pursue new solutions for multirun and multimodel data as well as techniques that support a multitude of facets. |
Practical guide to controlled experiments on the web: listen to your customers not to the hippo | The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments (single factor or factorial designs), A/B tests (and their generalizations), split tests, Control/Treatment tests, and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person's Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments. |
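One of the statistical ingredients stressed above, sample size for adequate power, reduces to a standard formula. The sketch below shows the textbook two-sample calculation; at alpha = 0.05 and 80% power it reproduces the familiar 16 * sigma^2 / delta^2 rule of thumb often quoted for web experiments. The conversion-rate numbers in the demo are invented.

```python
from scipy.stats import norm

def sample_size_per_variant(sigma, delta, alpha=0.05, power=0.8):
    """Users needed per variant to detect a difference `delta` in a metric
    with standard deviation `sigma`:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z**2 * sigma**2 / delta**2

# e.g., detecting a 1% absolute change in a 5% conversion rate
p = 0.05
sigma = (p * (1 - p)) ** 0.5
print(round(sample_size_per_variant(sigma, 0.01)))  # roughly 7,500 users per variant
```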
Creating a bond between caregivers online: effect on caregivers' coping strategies. | Numerous studies have investigated the effect of Interactive Cancer Communication Systems (ICCSs) on system users' improvements in psychosocial status. Research in this area, however, has focused mostly on cancer patients, rather than on caregivers, and on the direct effects of ICCSs on improved outcomes, rather than on the psychological mechanisms of ICCS effects. To understand the underlying mechanisms, this study examines the mediating role of perceived caregiver bonding in the relation between one ICCS (the Comprehensive Health Enhancement Support System [CHESS]) use and caregivers' coping strategies. To test the hypotheses, a secondary analysis of data was conducted on 246 caregivers of lung cancer patients. These caregivers were randomly assigned to (a) the Internet, with links to high-quality lung cancer websites, or (b) access to CHESS, which integrated information, communication, and interactive coaching tools. Findings suggest that perceived bonding has positive effects on caregivers' appraisal and problem-focused coping strategies, and it mediates the effect of ICCS on the coping strategies 6 months after the intervention has begun. |
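The mediation claim here (perceived bonding mediates the effect of CHESS use on coping) corresponds to a standard product-of-coefficients analysis. A minimal sketch, assuming simple OLS and omitting the covariates and significance testing (e.g., bootstrapping) that a full analysis would require:

```python
import numpy as np

def mediation_effect(x, m, y):
    """Product-of-coefficients mediation sketch: the indirect effect of
    x (e.g., CHESS use) on y (coping) through m (perceived bonding) is
    a * b, where a is the slope of m ~ x and b is the slope of m in
    y ~ x + m. Variable roles mirror the study's model; inference omitted."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]     # x -> m
    X2 = np.column_stack([np.ones_like(x), x, m])
    coef = np.linalg.lstsq(X2, y, rcond=None)[0]
    b, direct = coef[2], coef[1]                     # m -> y, and x -> y given m
    return a * b, direct                             # (indirect, direct) effects
```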
Vision: The Case for Symbiosis in the Internet of Things | Smart devices are becoming more powerful with faster processors, larger storage, and different types of communication modalities (e.g., WiFi, Bluetooth, and cellular). In the predominant view of Internet of Things (IoT) architecture, all smart devices are expected to communicate with cloud services and/or user-held mobile devices for processing, storage, and user interaction. This architecture heavily taxes Internet bandwidth by moving large volumes of data from the edge to the cloud, and presumes the availability of low-cost, high-performance cloud services that satisfy all user needs. We envision a new approach where all devices within the same network are 1) logically mesh connected either directly through Bluetooth or indirectly through WiFi, and 2) cooperate in a symbiotic fashion to perform different tasks. We consider instantiating this vision in a system we call SymbIoT. We first present the design goals that need to be satisfied in SymbIoT. We then discuss a strawman system architecture that allows devices to assume different roles based on their capabilities (e.g., processing, storage, and UI). Finally, we show that it is, indeed, feasible to use low-end smart device capabilities in a cooperative manner to meet application requirements. |
Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis | Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis. Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a class are multimodal. An unsupervised dimensionality reduction method called locality-preserving projection (LPP) can work well with multimodal data due to its locality preserving property. However, since LPP does not take the label information into account, it is not necessarily useful in supervised learning scenarios. In this paper, we propose a new linear supervised dimensionality reduction method called local Fisher discriminant analysis (LFDA), which effectively combines the ideas of FDA and LPP. LFDA has an analytic form of the embedding transformation and the solution can be easily computed just by solving a generalized eigenvalue problem. We demonstrate the practical usefulness and high scalability of the LFDA method in data visualization and classification tasks through extensive simulation studies. We also show that LFDA can be extended to non-linear dimensionality reduction scenarios by applying the kernel trick. |
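Because the solution reduces to a generalized eigenvalue problem, LFDA is short to implement. Below is a sketch following the construction the paper describes (local-scaling affinities within each class, localized within/between-class scatter matrices, then Sb v = lambda Sw v); the knn = 7 local-scaling heuristic and the regularization constant are our assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lfda(X, y, dim=2, knn=7):
    """Local Fisher Discriminant Analysis sketch. Returns the embedded
    data and the d x dim transformation matrix."""
    n, d = X.shape
    Ww = np.zeros((n, n))
    Wb = np.full((n, n), 1.0 / n)            # different-class pairs get weight 1/n
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        nc = len(idx)
        D = cdist(X[idx], X[idx])
        sigma = np.sort(D, axis=1)[:, min(knn, nc - 1)]        # local scaling
        A = np.exp(-D**2 / (np.outer(sigma, sigma) + 1e-12))   # affinity matrix
        Ww[np.ix_(idx, idx)] = A / nc
        Wb[np.ix_(idx, idx)] = A * (1.0 / n - 1.0 / nc)
    def scatter(W):
        L = np.diag(W.sum(axis=1)) - W        # graph Laplacian of the weights
        return X.T @ L @ X
    Sw, Sb = scatter(Ww), scatter(Wb)
    vals, vecs = eigh(Sb, Sw + 1e-9 * np.eye(d))   # solve Sb v = lambda Sw v
    T = vecs[:, np.argsort(vals)[::-1][:dim]]      # top-dim eigenvectors
    return X @ T, T
```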
Estimating Consumers’ Knowledge and Attitudes Towards Over-The-Counter Analgesic Medication in Greece in the Years of Financial Crisis: The Case of Paracetamol | INTRODUCTION
Non-prescription over-the-counter (OTC) drugs are widely used by patients to control aches, pain, and fever. One of the most frequently used OTC medications worldwide is paracetamol (acetaminophen). The aim of the present study was to fill the current knowledge gap regarding the beliefs and attitudes of people in Greece associated with the use of paracetamol during the years of financial crisis.
METHODS
The present study employed a sample of individuals visiting community pharmacies in the second largest city of Greece, Thessaloniki. All participants anonymously answered a questionnaire regarding their beliefs and characteristics of paracetamol consumption. Their answers were then statistically analyzed.
RESULTS
The generic paracetamol compound was shown to be better known than the original. A significant percentage of participants, ranging between 9.9% and 33.7%, falsely believed that certain medications [mainly non-steroidal anti-inflammatory drugs (NSAIDs)] contained paracetamol. Participants' age, level of education, and gender were shown to be predictive of this false belief. Additionally, 11.1% of participants believed that the maximum allowed daily dose of paracetamol was higher than the correct one. Better educated individuals were less likely to consume alcohol in parallel with paracetamol (odds ratio 0.230, 95% confidence interval 0.058-0.916, P = 0.037).
CONCLUSION
Paracetamol is commonly used, both in its original and generic forms. However, a significant number of individuals confuse it with NSAIDs. Age, level of education, and gender are important determinants of the characteristics of paracetamol consumption. It seems that patients have preferred to take paracetamol on their own initiative during the financial crisis. |
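The odds-ratio result quoted above can be reproduced mechanically. The sketch below computes an odds ratio with a Wald confidence interval from a 2x2 table; the counts are hypothetical stand-ins, since the study's estimate (OR 0.230, 95% CI 0.058-0.916) comes from its own data.

```python
import numpy as np
from scipy.stats import norm

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Odds ratio with Wald CI from a 2x2 table:
        exposed:   a events, b non-events
        unexposed: c events, d non-events
    """
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # standard error of log(OR)
    z = norm.ppf(1 - alpha / 2)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Hypothetical counts: better-educated vs. not, alcohol taken with paracetamol
print(odds_ratio_ci(4, 96, 16, 84))   # OR ~0.22 with a wide CI
```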
Bullying Among Lesbian, Gay, Bisexual, and Transgender Youth. | Bullying of lesbian, gay, bisexual, and transgender (LGBT) youth is prevalent in the United States, and represents LGBT stigma when tied to sexual orientation and/or gender identity or expression. LGBT youth commonly report verbal, relational, and physical bullying, and damage to property. Bullying undermines the well-being of LGBT youth, with implications for risky health behaviors, poor mental health, and poor physical health that may last into adulthood. Pediatricians can play a vital role in preventing and identifying bullying, providing counseling to youth and their parents, and advocating for programs and policies to address LGBT bullying. |
Ride comfort evaluations on electric vehicle conversion and improvement using Magnetorheological semi active suspension system | This paper presents the development of a 7-degrees-of-freedom ride model of a passenger vehicle to study its ride comfort performance when converted into an electric vehicle. Data from a Malaysian-made passenger vehicle were used in this study, under the assumption that the vehicle is to be converted into an electric vehicle. The simulation results from the developed model are validated against experimental results: ride tests known as pitch-mode tests were conducted to confirm the reliability of the simulation model. The validated model was used to evaluate the vehicle's ride comfort performance when converted into an electric vehicle, and was also integrated with a semi-active suspension system in order to improve the EV conversion's ride comfort. It was found that the EV conversion's ride comfort was not significantly affected by the modifications, and that a semi-active suspension system can be an alternative for further improving the EV conversion's ride comfort. |
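A full 7-DOF ride model is too long to reproduce here, but the control idea behind a semi-active (e.g., magnetorheological) suspension can be shown on a 2-DOF quarter-car with an on-off skyhook law. Everything below, including all parameter values and the road input, is an illustrative assumption rather than the paper's model.

```python
import numpy as np

def quarter_car_skyhook(t_end=5.0, dt=1e-3, ms=300.0, mu=40.0,
                        ks=20000.0, kt=180000.0, cs=1000.0, c_sky=2500.0):
    """2-DOF quarter-car with an on-off skyhook semi-active damper, a
    reduced stand-in for a 7-DOF ride model with MR dampers.
    States: sprung disp/vel (zs, vs), unsprung disp/vel (zu, vu)."""
    n = int(t_end / dt)
    zs = vs = zu = vu = 0.0
    acc = np.zeros(n)
    for i in range(n):
        t = i * dt
        zr = 0.02 * np.sin(2 * np.pi * 1.5 * t)   # assumed sinusoidal road input
        rel_v = vs - vu
        # On-off skyhook: extra damping only when it opposes sprung-mass motion.
        c = cs + (c_sky if vs * rel_v > 0 else 0.0)
        fs = -ks * (zs - zu) - c * rel_v          # suspension force on sprung mass
        a_s = fs / ms
        a_u = (-fs - kt * (zu - zr)) / mu         # reaction force + tire stiffness
        vs += a_s * dt; zs += vs * dt             # semi-implicit Euler integration
        vu += a_u * dt; zu += vu * dt
        acc[i] = a_s
    return np.sqrt(np.mean(acc**2))               # RMS body acceleration (comfort)

print("RMS sprung-mass acceleration:", quarter_car_skyhook())
```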
Face recognition using Laplacianfaces | We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA) which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition. |
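The LPP step at the heart of the Laplacianface approach is compact to state in code: build a locality graph, then solve a generalized eigenvalue problem for the embedding. A sketch, assuming a heat-kernel k-NN graph and omitting the PCA preprocessing usually applied to raw face images first:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, dim=30, knn=5, t=None):
    """Locality Preserving Projections sketch: solve
    X^T L X a = lambda X^T D X a for the smallest eigenvalues."""
    n, d = X.shape
    D2 = cdist(X, X, "sqeuclidean")
    if t is None:
        t = np.median(D2)                        # heat-kernel width heuristic
    W = np.zeros((n, n))
    nn = np.argsort(D2, axis=1)[:, 1:knn + 1]    # k nearest neighbors (skip self)
    for i in range(n):
        W[i, nn[i]] = np.exp(-D2[i, nn[i]] / t)
    W = np.maximum(W, W.T)                       # symmetrize the adjacency graph
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W                                   # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ Dg @ X + 1e-6 * np.eye(d)          # regularize for numerical stability
    vals, vecs = eigh(A, B)
    T = vecs[:, :dim]                            # eigenvectors of smallest eigenvalues
    return X @ T, T
```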
Social snapshots: digital forensics for online social networks | Recently, academia and law enforcement alike have shown a strong demand for data that is collected from online social networks. In this work, we present a novel method for harvesting such data from social networking websites. Our approach uses a hybrid system that is based on a custom add-on for social networks in combination with a web crawling component. The datasets that our tool collects contain profile information (user data, private messages, photos, etc.) and associated meta-data (internal timestamps and unique identifiers). These social snapshots are significant for security research and in the field of digital forensics. We implemented a prototype for Facebook and evaluated our system on a number of human volunteers. We show the feasibility and efficiency of our approach and its advantages in contrast to traditional techniques that rely on application-specific web crawling and parsing. Furthermore, we investigate different use-cases of our tool that include consensual application and the use of sniffed authentication cookies. Finally, we contribute to the research community by publishing our implementation as an open-source project. |