title | abstract |
---|---|
Flattening curved documents in images | Compared to scanned images, document pictures captured by camera can suffer from distortions due to perspective and page warping. It is necessary to restore a frontal planar view of the page before other OCR techniques can be applied. In this paper we describe a novel approach for flattening a curved document in a single picture captured by an uncalibrated camera. To our knowledge this is the first reported method able to process general curved documents in images without camera calibration. We propose to model the page surface by a developable surface, and exploit the properties (parallelism and equal line spacing) of the printed textual content on the page to recover the surface shape. Experiments show that the output images are much more OCR friendly than the original ones. While our method is designed to work with any general developable surfaces, it can be adapted for typical special cases including planar pages, scans of thick books, and opened books. |
An Effective Face Detection Algorithm Based on Skin Color Information | A face detection approach combining skin color detection and a neural network is presented in this paper. The first motivation for our paper is to decide which color space is best for building an efficient skin color detector that can be embedded in the overall face detection system. The proposed skin detection approach uses a chrominance distribution model of skin-color information in the input image in order to detect skin pixels over the entire image. Next, morphological operations are used to smooth the detected skin regions and, finally, to generate face candidates for face-based applications. Finally, a neural network is used to verify these face candidates. Many experiments using color images gathered from the Internet and from our own database were conducted and give encouraging results. We expect to combine the proposed face detector with a face recognition approach for later embedding in human-computer interaction applications. |
Real-world characterization and differentiation of the Global Initiative for Chronic Obstructive Lung Disease strategy classification | BACKGROUND
This study aimed to characterize and differentiate the Global Initiative for Chronic Obstructive Lung Disease (GOLD) strategy 2011 cut points through the modified Medical Research Council dyspnea scale (mMRC) and chronic obstructive pulmonary disease (COPD) assessment test (CAT).
METHODS
Data from COPD patients in the 2012 Adelphi Respiratory Disease Specific Program, conducted in Europe and the US, were analyzed. Matched data from physicians and patients included CAT and mMRC scores. Receiver operating characteristic curves and kappa analysis determined a cut point for CAT and mMRC alignment and thus defined patient movement ("movers") within GOLD groups A-D, depending on the tool used. Logistic regression analysis, with a number of physician- and patient-reported covariates, characterized those movers.
RESULTS
Comparing GOLD-defined high-symptom patients using the mMRC and CAT cut points (≥2 and ≥10, respectively), there were 890 (53.65%) movers; 887 of them (99.66%) moved from the less symptomatic GOLD groups A and C (using mMRC) to the more symptomatic groups B and D (using CAT). For the receiver operating characteristic-recommended (area under the curve: 0.82, P<0.001) and kappa-recommended (maximized kappa: 0.45) CAT cut points of ≥24 and ≥26, the number of movers was reduced to 429 and 403 patients, respectively. Logistic regression analysis showed that the variables significantly associated with being a mover related to impact on normal life, age, cough, and sleep (all P<0.05). Within movers, direction of movement was significantly associated with the same variables (all P<0.05).
CONCLUSION
Use of current mMRC or CAT cut points leads to inconsistencies for COPD assessment classification. It is recommended that cut points are aligned and both tools administered simultaneously for optimal patient care and to allow for closer management of movers. Our research may suggest an opportunity to investigate a combined score approach to patient management based on the worst result of mMRC and CAT. The reduced number of remaining movers may then identify patients who have greater impact of disease and may require a more personalized treatment plan. |
Attribute based object identification | In recent years, the robotics community has made substantial progress in detection and 3D pose estimation of known and unknown objects. However, the question of how to identify objects based on language descriptions has not been investigated in detail. While the computer vision community recently started to investigate the use of attributes for object recognition, these approaches do not consider the task settings typically observed in robotics, where a combination of appearance attributes and object names might be used in referral language to identify specific objects in a scene. In this paper, we introduce an approach for identifying objects based on natural language containing appearance and name attributes. To learn rich RGB-D features needed for attribute classification, we extend recently introduced sparse coding techniques so as to automatically learn attribute-dependent features. We introduce a large data set of attribute descriptions of objects in the RGB-D object dataset. Experiments on this data set demonstrate the strong performance of our approach to language based object identification. We also show that our attribute-dependent features provide significantly better generalization to previously unseen attribute values, thereby enabling more rapid learning of new attribute values. |
Lens distortion correction using ideal image coordinates | This paper proposes a fast and simple mapping method for lens distortion correction. Typical correction methods use a distortion model defined on distorted coordinates, so they need inverse mapping for distortion correction. Inverse mapping of the distortion equations is not trivial, and approximations must be used for real-time applications. We propose a distortion model defined on ideal undistorted coordinates, so that we can reduce computation time while maintaining high accuracy. We verify the accuracy and efficiency of the proposed method in experiments. |
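The abstract does not give the exact distortion model, but the idea of defining the model on ideal (undistorted) coordinates can be illustrated with a standard radial polynomial: correction then becomes a single backward warp with no need to invert the distortion equations. This is a minimal Python/NumPy sketch under that assumption; the model form, the coefficients k1 and k2, and nearest-neighbour sampling are illustrative, not the paper's implementation.

```python
import numpy as np

def distort(xu, yu, k1, k2, cx, cy):
    """Map ideal (undistorted) coordinates to distorted image coordinates
    using a radial polynomial defined on the undistorted points
    (illustrative model; k1 and k2 are assumed coefficients)."""
    x, y = xu - cx, yu - cy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + x * scale, cy + y * scale

def undistort_image(img, k1, k2):
    """Backward warping: for every ideal output pixel, evaluate the forward
    model to find the distorted source pixel and sample it (nearest neighbour)."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    yu, xu = np.mgrid[0:h, 0:w].astype(np.float64)
    xd, yd = distort(xu, yu, k1, k2, cx, cy)
    xi = np.clip(np.round(xd).astype(int), 0, w - 1)
    yi = np.clip(np.round(yd).astype(int), 0, h - 1)
    return img[yi, xi]
```

Because the model is only ever evaluated in the ideal-to-distorted direction, the correction loop contains no iterative root finding, which is where the speed-up over models defined on distorted coordinates comes from.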
SME e-readiness in Malaysia: Implications for Planning and Implementation | This study addressed two main objectives. The first objective was to assess the level of e-readiness of SMEs in Northern Malaysia. The second objective was to investigate the factors contributing to the e-readiness of SMEs in Northern Malaysia. Questionnaires were distributed using a simple random sampling method to 300 SMEs in Penang, Kedah and Perlis. The findings of this study show that SMEs in Northern Malaysia are ready to adopt e-business, e-commerce and the Internet in general. The findings also showed that, in general, top management commitment and infrastructure and technology have a significant impact on SMEs' e-readiness. However, human capital, resistance to change, and information security do not have a significant impact on or contribution to the e-readiness of SMEs. |
Domain Adaptation with Regularized Optimal Transport | We present a new and original method to solve the domain adaptation problem using optimal transport. By searching for the best transportation plan between the probability distribution functions of a source and a target domain, a non-linear and invertible transformation of the learning samples can be estimated. Any standard machine learning method can then be applied on the transformed set, which makes our method very generic. We propose a new optimal transport algorithm that incorporates label information in the optimization: this is achieved by combining an efficient matrix scaling technique with a majorization of a non-convex regularization term. By using the proposed optimal transport with label regularization, we obtain a significant increase in performance compared to the original transport solution. The proposed algorithm is computationally efficient and effective, as illustrated by its evaluation on a toy example and a challenging real-life vision dataset, against which it achieves competitive results with respect to state-of-the-art methods. |
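The "efficient matrix scaling technique" referred to here can be illustrated with plain Sinkhorn-Knopp iterations for entropy-regularized optimal transport. The Python/NumPy sketch below covers only that scaling step; the paper's label-based non-convex regularization and its majorization are not modelled, and the reg and n_iter values are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=200):
    """Entropy-regularized OT via Sinkhorn matrix scaling.
    a, b: source/target marginals (each sums to 1); C: pairwise cost matrix."""
    K = np.exp(-C / reg)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                  # scale columns to match target marginal
        u = a / (K @ v)                    # scale rows to match source marginal
    return u[:, None] * K * v[None, :]     # transport plan (rows ~ a, cols ~ b)
```

Once the plan T is available, source samples can be mapped toward the target domain by barycentric projection, e.g. Xs_mapped = (T / T.sum(axis=1, keepdims=True)) @ Xt, and any standard classifier can then be trained on the mapped samples.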
A 13-week, multicenter, randomized, double-blind study of lumiracoxib in hip osteoarthritis | The aim of this 13-week, multicenter, randomized, double-blind, double-dummy, placebo- and positive-internal (celecoxib)-controlled, parallel-group study was to demonstrate the efficacy, safety, and tolerability of lumiracoxib in primary hip osteoarthritis (OA) patients. Eligible patients (n = 1,262; ACR criteria) were randomized (1:1:1) to receive lumiracoxib 100 mg once daily (o.d.) (n = 427), celecoxib 200 mg o.d. (n = 419), or matching placebo o.d. (n = 416) administered orally. The primary objective was to compare lumiracoxib 100 mg o.d. and placebo with respect to three co-primary efficacy variables: the pain subscale of the Western Ontario and McMaster Universities Osteoarthritis Index Likert version 3.1 (WOMAC™ LK 3.1) questionnaire, the function subscale of the WOMAC™ LK 3.1 questionnaire, and patient’s global assessment of disease activity (100-mm visual analog scale (VAS)) after 13 weeks of treatment. Of the 1,262 randomized patients, 951 completed the study. All randomized patients were included in the intention-to-treat and safety populations. Lumiracoxib was superior to the placebo (p < 0.001) after 13 weeks for all three co-primary endpoints. By week 13, the patient’s global assessment of disease activity (100-mm VAS) improved by 23.3 mm (±SD, 27.83 mm) with lumiracoxib and 13.3 mm (±26.71 mm) with placebo. The WOMAC™ function score decreased by 10.4 (±13.56) with lumiracoxib and 6.8 (±12.55) with placebo. The WOMAC™ pain scores decreased by 3.4 (±4.16) with lumiracoxib and 2.2 (±3.94) with placebo at week 13. Similar results were observed for secondary endpoints: OA pain intensity and WOMAC™ total score. Lumiracoxib was similar to celecoxib for all three co-primary endpoints. All treatments were well tolerated. In conclusion, lumiracoxib is effective in reducing pain and improving function in hip OA patients. Clinical trial registration information: www.clinicaltrials.gov ; NCT00154219 |
Competitive Data Trading in Wireless-Powered Internet of Things (IoT) Crowdsensing Systems with Blockchain | With the explosive growth of smart IoT devices at the edge of the Internet, embedding sensors on mobile devices for massive data collection and collective environment sensing has been envisioned as a cost-effective solution for IoT applications. However, existing IoT platforms and frameworks rely on dedicated middleware for (semi-)centralized task dispatching, data storage and incentive provision. Consequently, they are usually expensive to deploy, have limited adaptability to diverse requirements, and face a series of data security and privacy issues. In this paper, we employ permissionless blockchains to construct a purely decentralized platform for data storage and trading in a wireless-powered IoT crowdsensing system. In the system, IoT sensors use power wirelessly transferred from RF-energy beacons for data sensing and transmission to an access point. The data are then forwarded to the blockchain for distributed ledger services, i.e., data/transaction verification, recording, and maintenance. Due to the coupled interference of wireless transmission and the transaction fees incurred by the blockchain's distributed ledger services, rational sensors have to decide on their transmission rates to maximize their individual utility. Thus, we formulate a noncooperative game model to analyze this competitive situation among the sensors. We provide the analytical condition for the existence of the Nash equilibrium as well as a series of insightful numerical results about the equilibrium strategies in the game. |
A MATTER OF LIFE AND DEATH: SOME THOUGHTS ON THE LANGUAGE OF SPORT: | The purpose of this article is to offer some thoughts on the language of sport, particularly as it is constructed on the basis of three metaphorical conventions—namely, the conventions of violence, sex, and the machine. Although noting the masculinist perspective that these metaphors represent, the article ultimately argues that these linguistic preferences reveal important ontological insights that transcend both gender and sport itself. |
A Vision Based Method for Automatic Evaluation of Germination Rate of Rice Seeds | Rice is one of the most widely cultivated cereals in Asian countries, and in Vietnam in particular. Good seed germination is important for rice seed quality, which impacts rice production and crop yield. Currently, seed germination evaluation is carried out manually by experienced persons. This is a tedious and time-consuming task. In this paper, we present a system for automatic evaluation of the rice seed germination rate based on advanced techniques in computer vision and machine learning. We propose to use U-Net, a convolutional neural network, for segmentation and separation of rice seeds. Further processing, such as computing the distance transform and thresholding, is applied to the segmented regions for rice seed detection. Finally, ResNet is utilized to classify the segmented rice seed regions into two classes: germinated and non-germinated seeds. Our contributions in this paper are three-fold. Firstly, we propose a framework which confirms that convolutional neural networks are better than traditional methods for both the segmentation and classification tasks (with F1-scores of 93.38% and 95.66%, respectively). Secondly, we successfully deploy the automatic tool in a real application for estimating rice germination rate. Finally, we introduce a new dataset of 1276 images of rice seeds from 7 to 8 seed varieties germinated over 6 to 10 days. This dataset is publicly available for research purposes. |
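To make the seed-separation step concrete, here is a minimal Python sketch of the distance-transform-and-thresholding idea described above, applied to a binary mask such as a U-Net output. The relative threshold and the use of SciPy's distance transform and labeling are illustrative choices, not details taken from the paper.

```python
import numpy as np
from scipy import ndimage as ndi

def separate_seeds(mask, rel_thresh=0.5):
    """Split touching seeds in a binary segmentation mask: compute the
    Euclidean distance transform, keep only high-distance 'cores', and
    label each core as one detected seed."""
    dist = ndi.distance_transform_edt(mask)      # distance to background
    cores = dist > rel_thresh * dist.max()       # seed cores survive thresholding
    labels, n_seeds = ndi.label(cores)           # one integer label per seed
    return labels, n_seeds
```

Each labelled region can then be cropped from the original image and passed to the ResNet classifier that decides germinated vs. non-germinated.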
Visualization Techniques for Mining Large Databases: A Comparison | Visual data mining techniques have proven to be of high value in exploratory data analysis, and they also have a high potential for mining large databases. In this article, we describe and evaluate a new visualization-based approach to mining large databases. The basic idea of our visual data mining techniques is to represent as many data items as possible on the screen at the same time by mapping each data value to a pixel of the screen and arranging the pixels adequately. The major goal of this article is to evaluate our visual data mining techniques and to compare them to other well-known visualization techniques for multidimensional data: the parallel coordinate and stick figure visualization techniques. For the evaluation of visual data mining techniques, in the first place the perception of properties of the data counts, and only in the second place are the CPU time and the number of secondary storage accesses important. In addition to testing the visualization techniques using real data, we developed a testing environment for database visualizations similar to the benchmark approach used for comparing the performance of database systems. The testing environment allows the generation of test data sets with predefined data characteristics which are important for comparing the perceptual abilities of visual data mining techniques. |
THE DEVELOPMENT AND THE EFFECTIVENESS OF ENGLISH E-LEARNING | In the twenty-first century, teachers are required to have digital literacy skills. They must be able to integrate technology into the learning process. One teacher at a public junior high school in Jakarta had already done so: she searched for materials on the internet but had trouble adjusting the learning materials to her students' needs and characteristics. Therefore, this study was undertaken to explore in depth how to develop e-learning for an English class based on her students' needs and characteristics, and to examine its effectiveness. This study employed a research and development methodology with qualitative and quantitative approaches. The participants were an English teacher and the first graders of a junior high school. The results showed that the e-learning consisted of listening, reading, and grammar materials and was built with PHP, the Bootstrap framework, MySQL, and software for creating images and animated videos. The results also showed that learning English with the e-learning system was more effective than learning without it. The difference in means was 5.53%, and the difference was statistically significant (two-tailed sig. 0.011 < 0.05). The e-learning is expected to serve as supplemental learning media in the English learning process. |
Guidelines for the diagnosis and treatment of chronic lymphocytic leukemia: a report from the International Workshop on Chronic Lymphocytic Leukemia updating the National Cancer Institute-Working Group 1996 guidelines. | Standardized criteria for diagnosis and response assessment are needed to interpret and compare clinical trials and for approval of new therapeutic agents by regulatory agencies. Therefore, a National Cancer Institute-sponsored Working Group (NCI-WG) on chronic lymphocytic leukemia (CLL) published guidelines for the design and conduct of clinical trials for patients with CLL in 1988, which were updated in 1996. During the past decade, considerable progress has been achieved in defining new prognostic markers, diagnostic parameters, and treatment options. This prompted the International Workshop on Chronic Lymphocytic Leukemia (IWCLL) to provide updated recommendations for the management of CLL in clinical trials and general practice. |
Weighted Similarity Schemes for High Scalability in User-Based Collaborative Filtering | Similarity-based algorithms, often referred to as memory-based collaborative filtering techniques, are one of the most successful methods in recommendation systems. When explicit ratings are available, similarity is usually defined using similarity functions, such as the Pearson correlation coefficient, cosine similarity or mean square difference. These metrics assume similarity is a symmetric criterion: therefore, two users have equal impact on each other in recommending new items. In this paper, we introduce new weighting schemes that allow us to consider new features in finding similarities between users. These weighting schemes, first, transform symmetric similarity into asymmetric similarity by considering the number of ratings given by users on non-common items. Second, they take into account the rating habits of users by measuring the proximity of the number of repetitions of each rating value on commonly rated items. Experiments on two datasets were conducted and compared with other similarity measures. The results show that adding the weighting schemes to traditional similarity measures significantly improves the results obtained from those measures. |
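The paper's exact weighting formulas are not reproduced in the abstract, but the general idea of turning a symmetric similarity into an asymmetric one by penalizing non-common ratings can be sketched as follows (Python; the specific weight len(common)/len(u) is an illustrative assumption, not the published formula).

```python
import numpy as np

def pearson(u, v, common):
    """Pearson correlation over the commonly rated items of two users."""
    ru = np.array([u[i] for i in common], dtype=float)
    rv = np.array([v[i] for i in common], dtype=float)
    if ru.std() == 0 or rv.std() == 0:
        return 0.0
    return float(np.corrcoef(ru, rv)[0, 1])

def asymmetric_sim(u, v):
    """Down-weight the symmetric similarity by the share of u's ratings
    that v does not cover, so that sim(u, v) != sim(v, u) in general."""
    common = set(u) & set(v)
    if len(common) < 2:
        return 0.0
    weight = len(common) / len(u)          # penalize u's items not rated by v
    return weight * pearson(u, v, common)

u = {"i1": 5, "i2": 3, "i3": 4, "i4": 2}
v = {"i1": 4, "i2": 2, "i3": 5}
print(asymmetric_sim(u, v), asymmetric_sim(v, u))   # generally different values
```

The habit-effect weighting on repeated rating values would add a second multiplicative factor, which is omitted here.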
Mitigation of Inrush Currents in Network Transformers by Reducing the Residual Flux With an Ultra-Low-Frequency Power Source | A methodology for the reduction of the residual flux in network transformers is proposed in this paper. The purpose is the mitigation of large inrush currents taken by numerous transformers when a long feeder is energized. Time-domain simulations are used to prove that a small-power device can substantially reduce the residual flux of all transformers simultaneously. The device consists of a low-voltage dc source, a suitable power-electronic switching unit, and a simple controller. Before a feeder is re-energized, the residual flux is reduced to a minimum and, as a consequence, the large inrush currents are reduced to an acceptable level. This greatly enhances the probability for the feeder to be successfully energized when otherwise a false trip would have occurred. Inrush current reductions of more than 60% are obtained at the head of the feeder. |
Ear-EEG allows extraction of neural responses in challenging listening scenarios — A future technology for hearing aids? | Advances in brain-computer interface research have recently empowered the development of wearable sensors to record mobile electroencephalography (EEG) as an unobtrusive and easy-to-use alternative to conventional scalp EEG. One such mobile solution is to record EEG from the ear canal, which has been validated for auditory steady state responses and discrete event related potentials (ERPs). However, it is still under discussion where to place recording and reference electrodes to capture best responses to auditory stimuli. Furthermore, the technology has not yet been tested and validated for ecologically relevant auditory stimuli such as speech. In this study, Ear-EEG and conventional scalp EEG were recorded simultaneously in a discrete-tone as well as a continuous-speech design. The discrete stimuli were applied in a dichotic oddball paradigm, while continuous stimuli were presented diotically as two simultaneous talkers. Cross-correlation of stimulus envelope and Ear-EEG was assessed as a measure of ongoing neural tracking. The extracted ERPs from Ear-EEG revealed typical auditory components yet depended critically on the reference electrode chosen. Reliable neural-tracking responses were extracted from the Ear-EEG for both paradigms, albeit weaker in amplitude than from scalp EEG. In conclusion, this study shows the feasibility of extracting relevant neural features from ear-canal-recorded “Ear-EEG”, which might augment future hearing technology. |
Neural mapping of guilt: a quantitative meta-analysis of functional imaging studies | Guilt is a self-conscious emotion associated with the negative appraisal of one’s behavior. In recent years, several neuroimaging studies have investigated the neural correlates of guilt, but no meta-analyses have yet identified the most robust activation patterns. A systematic review of literature found 16 functional magnetic resonance imaging studies with whole-brain analyses meeting the inclusion criteria, for a total of 325 participants and 135 foci of activation. A meta-analysis was then conducted using activation likelihood estimation. Additionally, Meta-Analytic Connectivity Modeling (MACM) analysis was conducted to investigate the functional connectivity of significant clusters. The analysis revealed 12 significant clusters of brain activation (voxel-based FDR-corrected p < 0.05) located in the prefrontal, temporal and parietal regions, mainly in the left hemisphere. Only the left dorsal cingulate cluster survived stringent FWE correction (voxel-based p < 0.05). Secondary analyses (voxel-based FDR-corrected p < 0.05) on the 7 studies contrasting guilt with another emotional condition showed an association with clusters in the left precuneus, the anterior cingulate, the left medial frontal gyrus, the right superior frontal gyrus and the left superior temporal gyrus. MACM demonstrated that regions associated with guilt are highly interconnected. Our analysis identified a distributed neural network of left-lateralized regions associated with guilt. While voxel-based FDR-corrected results should be considered exploratory, the dorsal cingulate was robustly associated with guilt. We speculate that this network integrates cognitive and emotional processes involved in the experience of guilt, including self-representation, theory of mind, conflict monitoring and moral values. Limitations of our meta-analyses comprise the small sample size and the heterogeneity of included studies, and concerns about naturalistic validity. |
Low-Rank Modeling and Its Applications in Image Analysis | Low-rank modeling generally refers to a class of methods that solves problems by representing variables of interest as low-rank matrices. It has achieved great success in various fields including computer vision, data mining, signal processing, and bioinformatics. Recently, much progress has been made in theories, algorithms, and applications of low-rank modeling, such as exact low-rank matrix recovery via convex programming and matrix completion applied to collaborative filtering. These advances have brought more and more attention to this topic. In this article, we review the recent advances of low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. Finally, we conclude this article with some discussions. |
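As one concrete instance of the algorithm family surveyed here, the sketch below shows a SoftImpute-style iteration for low-rank matrix completion: soft-threshold the singular values of the current estimate, then re-impose the observed entries. The threshold tau and iteration count are illustrative, and this is only one of many low-rank recovery algorithms the article covers.

```python
import numpy as np

def soft_impute(M, observed, tau=5.0, n_iter=100):
    """Low-rank matrix completion by iterative singular value soft-thresholding.
    M: data matrix (unobserved entries may hold any value);
    observed: boolean mask, True where M is observed."""
    X = np.where(observed, M, 0.0)
    L = X
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # shrink singular values -> low rank
        L = (U * s) @ Vt                      # current low-rank estimate
        X = np.where(observed, M, L)          # keep observed entries, fill the rest
    return L
```

Nuclear-norm-based convex recovery, robust PCA, and the other models discussed in the article follow the same pattern of replacing rank with a tractable surrogate.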
Preserving Privacy by De-identifying Facial Images | In the context of sharing video surveillance data, a significant threat to privacy is face recognition software, which can automatically identify known people, such as from a database of drivers' license photos, and thereby track people regardless of suspicion. This paper introduces an algorithm to protect the privacy of individuals in video surveillance data by de-identifying faces such that many facial characteristics remain but the face cannot be reliably recognized. A trivial solution to de-identifying faces involves blacking out each face. This thwarts any possible face recognition, but because all facial details are obscured, the result is of limited use. Many ad hoc attempts, such as covering eyes or randomly perturbing image pixels, fail to thwart face recognition because of the robustness of face recognition methods. This paper presents a new privacy-enabling algorithm, named k-Same, that scientifically limits the ability of face recognition software to reliably recognize faces while maintaining facial details in the images. The algorithm determines similarity between faces based on a distance metric and creates new faces by averaging image components, which may be the original image pixels (k-Same-Pixel) or eigenvectors (k-Same-Eigen). Results are presented on a standard collection of real face images with varying k. |
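The core of k-Same-Pixel can be captured in a few lines: every face is replaced by the pixel-wise average of its k closest faces, so a recognizer cannot do better than matching the de-identified image to a set of at least k originals. This is a minimal Python sketch under simplifying assumptions (pre-aligned, equal-sized face images; plain Euclidean distance); the paper's closed-set bookkeeping is omitted.

```python
import numpy as np

def k_same_pixel(faces, k):
    """faces: array of shape (n, h, w); returns de-identified faces where each
    output is the average of the k nearest faces in pixel space."""
    X = faces.reshape(len(faces), -1).astype(np.float64)   # one row per face
    out = np.empty_like(X)
    for i, x in enumerate(X):
        d = np.linalg.norm(X - x, axis=1)                  # distances to all faces
        nearest = np.argsort(d)[:k]                        # indices of k closest
        out[i] = X[nearest].mean(axis=0)                   # pixel-wise average
    return out.reshape(faces.shape)
```

k-Same-Eigen follows the same recipe but averages PCA (eigenface) coefficients instead of raw pixels before reconstructing the image.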
SpanDex: Secure Password Tracking for Android | This paper presents SpanDex, a set of extensions to Android’s Dalvik virtual machine that ensures apps do not leak users’ passwords. The primary technical challenge addressed by SpanDex is precise, sound, and efficient handling of implicit information flows (e.g., information transferred by a program’s control flow). SpanDex handles implicit flows by borrowing techniques from symbolic execution to precisely quantify the amount of information a process’ control flow reveals about a secret. To apply these techniques at runtime without sacrificing performance, SpanDex runs untrusted code in a data-flow sensitive sandbox, which limits the mix of operations that an app can perform on sensitive data. Experiments with a SpanDex prototype using 50 popular Android apps and an analysis of a large list of leaked passwords predicts that for 90% of users, an attacker would need over 80 login attempts to guess their password. Today the same attacker would need only one attempt for all users. |
Improving Cytoarchitectonic Segmentation of Human Brain Areas with Self-supervised Siamese Networks | Cytoarchitectonic parcellations of the human brain serve as anatomical references in multimodal atlas frameworks. They are based on analysis of cell-body stained histological sections and the identification of borders between brain areas. The de-facto standard involves a semi-automatic, reproducible border detection, but does not scale with high-throughput imaging in large series of sections at microscopical resolution. Automatic parcellation, however, is extremely challenging due to high variation in the data, and the need for a large field of view at microscopic resolution. The performance of a recently proposed Convolutional Neural Network model that addresses this problem especially suffers from the naturally limited amount of expert annotations for training. To circumvent this limitation, we propose to pre-train neural networks on a self-supervised auxiliary task, predicting the 3D distance between two patches sampled from the same brain. Compared to a random initialization, fine-tuning from these networks results in significantly better segmentations. We show that the self-supervised model has implicitly learned to distinguish several cortical brain areas – a strong indicator that the proposed auxiliary task is appropriate for cytoarchitectonic mapping. |
Evolution: What's on the menu? | The ability to use diverse food substrates has evolved from interactions with the trillions of microorganisms that co-exist in our gut. But what evolutionary factors shape the diversity of gut microbial communities? Reporting in Science, Ley, Gordon and colleagues now provide us with a microbial view of mammalian evolution with their finding that mammalian diet, phylogeny and gut physiology influence the composition of the gut microbiota and that microbial communities have co-diversified with their hosts. Gordon et al. conducted a census of the faecal bacterial communities of 59 different mammalian species living in zoos in San Diego and St Louis, USA, and in the wild, including 17 non-human primates and animals that had an unusual diet for their taxonomic lineage. New network-based analyses together with more traditional tree-based analyses were then used to relate gut community ecology to the mammalian hosts that harbour them. In general, the gut microbial composition of members of a single mammalian species was similar regardless of its provenance, supporting the notion of inter-generational transfer of gut communities. The ability to use plants as a food source evolved repeatedly from ancestral carnivorous mammals. Moreover, most extant mammals are herbivores. Based on 16S ribosomal RNA, the gut microbiota of herbivores in this study was found to be more diverse than that of omnivores and carnivores, which suggests that diet is an important determinant of microbial community structure. The authors' work indicates that promiscuous gut microorganisms allowed unrelated mammals with similar gut structures to assemble similar microbial communities and that the microbial solution to herbivory in different gut structures (for example, a simple gut, a hindgut or a foregut) was similar regardless of host lineage. Humans seem to be representative of typical omnivores and can be placed together with our omnivorous primate relatives. This bottom-up perspective of animal evolution implicates gut microorganisms as co-conspirators in the spectacular evolutionary success of mammals. According to Gordon and colleagues, "our next task is to characterize the microbial communities' gene content (microbiomes) to better understand how these observations relate to the metabolic capacities of the microbiota and expand these studies to other members of the animal kingdom." Gillian Young |
Empirical evaluation of a dynamic and distributed taxi-sharing system | Modern societies rely on efficient transportation systems for sustainable mobility. In this paper, we perform a large-scale and empirical evaluation of a dynamic and distributed taxi-sharing system. The novel system takes advantage of the nowadays widespread availability of communication and computation to convey a cost-efficient, door-to-door and flexible system, offering a quality of service similar to traditional taxis. The shared taxi service is assessed in a real-city scenario using a highly realistic simulation platform. Simulation results have shown the system's advantages for both passengers and taxi drivers, and that trade-offs need to be considered. Compared with the current taxi operation model, results show an increase of 48% in the average occupancy per traveled kilometer with a full deployment of the taxi-sharing system. |
Neural Networks Approach to the Random Walk Dilemma of Financial Time Series | Predictions of financial time series often show a characteristic one-step shift relative to the original data, as in a random walk. This has been the cause of opposing views on whether such time series contain information that can be extracted for predictions, or are simply random walks. In this case study, we show that NNs that are capable of extracting weak low-frequency periodic signals buried in a strong high-frequency signal consistently predict the next value in the series to be the current value, as in a random walk, when used for one-step-ahead predictions of the detrended S&P 500 time series. In particular, for Time Delay Feed Forward Networks and Elman Networks of various configurations, our study supports the view of the detrended S&P 500 being a random walk series. This is consistent with the long-standing hypothesis that some financial time series are random walk series. |
Individual Education Program (IEP) Paperwork: A Narrative Review | Previous studies have shown that teachers understand the importance of the Individual Education Program (IEP), but they consider the administrative tasks of the IEP a burden. This review aims to illustrate how much time teachers spend completing IEP administrative tasks, to explain why teachers view the IEP as a burden, and to describe strategies to minimize obstacles related to the administrative burden of the IEP. The narrative review procedure involved selecting journal articles based on inclusion and exclusion criteria related to the administrative burden of IEP paperwork. The results show that teachers spend more time doing IEP paperwork than assessing students' assignments, communicating with parents, and sharing with colleagues; IEP paperwork takes up more than 10% of working time. IEP paperwork is perceived as a burden because of the large number of IEP forms and the detail they require, the multiple steps in the IEP service flow, personnel's lack of knowledge about preparing or implementing IEPs, the lack of administrative staff assistance in completing the IEP paperwork, and the short deadlines for IEP administrative duties. The proposed strategies are adopting appropriate technology, streamlining the contents of IEP forms, using group IEPs, and improving teachers' IEP administrative skills. |
Hybrid publishing design methods for technical books | We know that the printed book is a social, cultural and economic construction, but it is also an artifact (either printed or digital) whose production requires deep knowledge, in which design is key to the performance of the object.
The starting point of this paper is the definition of technical books and their publishing field. Looking into their formal structures, we analyse how technical books function today and how they are designed. We sought to characterize the program and the design methodology of a technical book in a hybrid context, and the possible models in terms of production, distribution and reading. |
A randomized comparison of print and web communication on colorectal cancer screening. | BACKGROUND
New methods to enhance colorectal cancer (CRC) screening rates are needed. The web offers novel possibilities to educate patients and to improve health behaviors, such as cancer screening. Evidence supports the efficacy of health communications that are targeted and tailored to improve the uptake of recommendations.
METHODS
We identified unscreened women at average risk for CRC from the scheduling databases of obstetrics and gynecology practices in 2 large health care systems. Participants consented to a randomized controlled trial that compared CRC screening uptake after receipt of CRC screening information delivered via the web or in print form. Participants could also be assigned to a control (usual care) group. Women in the interventional arms received tailored information in a high- or low-monitoring Cognitive Social Information Processing model-defined attentional style. The primary outcome was CRC screening participation at 4 months.
RESULTS
A total of 904 women were randomized to the interventional or control group. At 4 months, CRC screening uptake was not significantly different in the web (12.2%), print (12.0%), or control (12.9%) group. Attentional style had no effect on screening uptake for any group. Some baseline participant factors were associated with greater screening, including higher income (P = .03), stage of change (P < .001), and physician recommendation to screen (P < .001).
CONCLUSIONS
A web-based educational intervention was no more effective than a print-based one or control (no educational intervention) in increasing CRC screening rates in women at average risk of CRC. Risk messages tailored to attentional style had no effect on screening uptake. In average-risk populations, use of the Internet for health communication without additional enhancement is unlikely to improve screening participation.
TRIAL REGISTRATION
clinicaltrials.gov Identifier: NCT00459030. |
Using the ankle-brachial index to diagnose peripheral artery disease and assess cardiovascular risk. | The ankle-brachial index is valuable for screening for peripheral artery disease in patients at risk and for diagnosing the disease in patients who present with lower-extremity symptoms that suggest it. The ankle-brachial index also predicts the risk of cardiovascular events, cerebrovascular events, and even death from any cause. Few other tests provide as much diagnostic accuracy and prognostic information at such low cost and risk. |
The accuracy with which the 5 times sit-to-stand test, versus gait speed, can identify poor exercise tolerance in patients with COPD | Identifying those patients who underperform in the 6-minute walk test (6MWT <350 m), and the reasons for their poor performance, is a major concern in the management of chronic obstructive pulmonary disease. The aim was to explore the accuracy and relevance of the 4-m gait-speed (4MGS) test, and the 5-repetition sit-to-stand (5STS) test, as diagnostic markers, and clinical determinants, of poor performance in the 6MWT. We recruited 137 patients with stable chronic obstructive pulmonary disease to participate in our cross-sectional study. Patients completed the 4MGS and 5STS tests, with quantitative (in seconds) and qualitative ordinal data collected; the latter were categorized using a scale of 0 to 4. The following potential covariates and clinical determinants of poor 6MWT were collated: age, quadriceps muscle-strength (QMS), health status, dyspnea, depression, and airflow limitation. Area under the receiver-operating characteristic curve data (AUC) was used to assess accuracy, with logistic regression used to explore relevance as clinical determinants. The AUCs generated using the 4MGS and 5STS tests were comparable, at 0.719 (95% confidence interval [CI] 0.629-0.809) and 0.711 (95% CI 0.613-0.809), respectively. With ordinal data, the 5STS test was most accurate (AUC of 0.732; 95% CI 0.645-0.819); the 4MGS test showed poor discriminatory power (AUC <0.7), although accuracy improved (0.726, 95% CI 0.637-0.816) when covariates were included. Unlike the 4MGS test, the 5STS test provided a significant clinical determinant of a poor 6MWT (odds ratio 1.23, 95% CI 1.05-1.44). The 5STS test reliably predicts a poor 6MWT, especially when using ordinal data. Used alone, the 4MGS test is reliable when measured with continuous data. |
The International Criteria for Behçet's Disease (ICBD): a collaborative study of 27 countries on the sensitivity and specificity of the new criteria. | OBJECTIVE
Behçet's disease (BD) is a chronic, relapsing, inflammatory vascular disease with no pathognomonic test. Low sensitivity of the currently applied International Study Group (ISG) clinical diagnostic criteria led to their reassessment.
METHODS
An International Team for the Revision of the International Criteria for BD (from 27 countries) submitted data from 2556 clinically diagnosed BD patients and 1163 controls with BD-mimicking diseases or presenting at least one major BD sign. These were randomly divided into training and validation sets. Logistic regression, 'leave-one-country-out' cross-validation and clinical judgement were employed to develop new International Criteria for BD (ICBD) with the training data. Existing and new criteria were tested for their performance in the validation set.
RESULTS
For the ICBD, ocular lesions, oral aphthosis and genital aphthosis are each assigned 2 points, while skin lesions, central nervous system involvement and vascular manifestations are each assigned 1 point. The pathergy test, when used, was assigned 1 point. A patient scoring ≥4 points is classified as having BD. In the training set, sensitivity of 93.9% and specificity of 92.1% were estimated, compared with 81.2% sensitivity and 95.9% specificity for the ISG criteria. In the validation set, the ICBD demonstrated an unbiased estimate of sensitivity of 94.8% (95% CI: 93.4-95.9%), considerably higher than that of the ISG criteria (85.0%). Specificity (90.5%, 95% CI: 87.9-92.8%) was lower than that of the ISG criteria (96.0%), yet still reasonably high. For countries in which at least 90% of cases and controls had a pathergy test, adding 1 point for the pathergy test increased the estimate of sensitivity from 95.5% to 98.5%, while barely reducing specificity from 92.1% to 91.6%.
CONCLUSION
The newly proposed criteria, derived from multinational data, exhibit much improved sensitivity over the ISG criteria while maintaining reasonable specificity. It is proposed that the ICBD criteria be adopted both as a guide for diagnosis and for classification of BD. |
Business Intelligence Maturity: Development and Evaluation of a Theoretical Model | In order to identify and explore the strengths and weaknesses of business intelligence (BI) initiatives, managers in charge need to assess the maturity of their BI efforts. For this, a wide range of maturity models has been developed, but these models often focus on technical details and do not address the potential value proposition of BI. Based on an extensive literature review and an empirical study, we develop and evaluate a theoretical model of impact-oriented BI maturity. Building on established IS theories, the model integrates BI deployment, BI usage, individual impact, and organizational performance. This conceptualization helps to refocus the topic of BI maturity to business needs and can be used as a theoretical foundation for future research. |
Collaborative trust-based secure routing against colluding malicious nodes in multi-hop ad hoc networks | The Trust-embedded AODV (T-AODV) routing protocol was designed by us to secure the ad hoc network from independent malicious nodes by finding a secure end-to-end route. In this work we propose an extension of T-AODV that can withstand attack by multiple malicious nodes acting in collusion to disrupt the network. It finds a secure end-to-end path free of malicious nodes and can effectively isolate a malicious entity trying to attack the network independently or in collusion with other malicious entities. In this respect, the solution is unique and, to the best of our knowledge, the first such solution proposed so far. We have shown the efficiency of our protocol by extensive simulation and also analyzed its security by evaluating different threat scenarios. |
A Multitrait–Multimethod Analysis of the Construct Validity of Child Anxiety Disorders in a Clinical Sample | The present study examines the construct validity of separation anxiety disorder (SAD), social phobia (SoP), panic disorder (PD), and generalized anxiety disorder (GAD) in a clinical sample of children. Participants were 174 children, 6 to 17 years old (94 boys) who had undergone a diagnostic evaluation at a university hospital based clinic. Parent and child ratings of symptom severity were assessed using the Multidimensional Anxiety Scale for Children (MASC). Diagnostician ratings were obtained from the Anxiety Disorders Interview Schedule for Children and Parents (ADIS: C/P). Discriminant and convergent validity were assessed using confirmatory factor analytic techniques to test a multitrait-multimethod model. Confirmatory factor analyses supported the current classification of these child anxiety disorders. The disorders demonstrated statistical independence from each other (discriminant validity of traits), the model fit better when the anxiety syndromes were specified than when no specific syndromes were specified (convergent validity), and the methods of assessment yielded distinguishable, unique types of information about child anxiety (discriminant validity of methods). Using a multi-informant approach, these findings support the distinctions between childhood anxiety disorders as delineated in the current classification system, suggesting that disagreement between informants in psychometric studies of child anxiety measures is not due to poor construct validity of these anxiety syndromes. |
Computational Lambda-Calculus and Monads | The λ-calculus is considered a useful mathematical tool in the study of programming languages. However, if one uses βη-conversion to prove equivalence of programs, then a gross simplification is introduced. We give a calculus based on a categorical semantics for computations, which provides a correct basis for proving equivalence of programs, independent of any specific computational model. |
Comparison of a soluble co-formulation of insulin degludec/insulin aspart vs biphasic insulin aspart 30 in type 2 diabetes: a randomised trial | OBJECTIVE
Insulin degludec/insulin aspart (IDegAsp) is a soluble co-formulation of insulin degludec (70%) and insulin aspart (IAsp: 30%). Here, we compare the efficacy and safety of IDegAsp, an alternative IDegAsp formulation (AF: containing 45% IAsp), and biphasic IAsp 30 (BIAsp 30).
DESIGN
Sixteen-week, open-label, randomised, treat-to-target trial.
METHODS
Insulin-naive subjects with type 2 diabetes (18-75 years) and a HbA1c of 7-11% were randomised to twice-daily IDegAsp (n=61), AF (n=59) or BIAsp 30 (n=62), all in combination with metformin. Insulin was administered pre-breakfast and dinner (main evening meal) and titrated to pre-breakfast and pre-dinner plasma glucose (PG) targets of 4.0-6.0 mmol/l.
RESULTS
Mean HbA1c after 16 weeks was comparable for IDegAsp, AF and BIAsp 30 (6.7, 6.6 and 6.7%, respectively). With IDegAsp, 67% of subjects achieved HbA1c <7.0% without confirmed hypoglycaemia in the last 4 weeks of treatment, compared with 53% (AF) and 40% (BIAsp 30). Mean fasting PG was significantly lower for IDegAsp vs BIAsp 30 (treatment difference (TD): -0.99 mmol/l (95% confidence interval: -1.68; 0.29)) and AF vs BIAsp 30 (TD: -0.88 mmol/l (-1.58; -0.18)). A significant, 58% lower rate of confirmed hypoglycaemia was found for IDegAsp vs BIAsp 30 (rate ratio (RR): 0.42 (0.23; 0.75)); rates were similar for AF vs BIAsp 30 (RR: 0.92 (0.54; 1.57)). IDegAsp and AF had numerically lower rates of nocturnal confirmed hypoglycaemia vs BIAsp 30 (RR: 0.33 (0.09; 1.14) and 0.66 (0.22; 1.93), respectively).
CONCLUSIONS
IDegAsp provided comparable overall glycaemic control to BIAsp 30 with a significantly lower rate of hypoglycaemia. |
Combination of Video Change Detection Algorithms by Genetic Programming | Within the field of computer vision, change detection algorithms aim at automatically detecting significant changes occurring in a scene by analyzing the sequence of frames in a video stream. In this paper we investigate how state-of-the-art change detection algorithms can be combined and used to create a more robust algorithm leveraging their individual peculiarities. We exploited genetic programming (GP) to automatically select the best algorithms, combine them in different ways, and perform the most suitable post-processing operations on the outputs of the algorithms. In particular, algorithms' combination and post-processing operations are achieved with unary, binary and n-ary functions embedded into the GP framework. Using different experimental settings for combining existing algorithms, we obtained different GP solutions that we termed In Unity There Is Strength. These solutions are then compared against state-of-the-art change detection algorithms on the video sequences and ground truth annotations of the ChangeDetection.net 2014 challenge. Results demonstrate that, using GP, our solutions are able to outperform all the considered single state-of-the-art change detection algorithms, as well as other combination strategies. The performance of our algorithm is significantly different from that of the other state-of-the-art algorithms, as supported by the statistical significance analysis conducted with the Friedman test and Wilcoxon rank sum post-hoc tests. |
The Psychological Significance of the Blush: Frontmatter | 1. The study of the blush: Darwin and after W. Ray Crozier and Peter J. de Jong Part I. The Nature of the Blush: 2. Psychophysiology of the blush Peter D. Drummond 3. Measurement of the blush Ruth Cooper and Alexander L. Gerlach Part II. Theoretical Perspectives on the Blush: 4. Psychological theories of blushing Mark R. Leary and Kaitlin Toner 5. Colours of the face: a comparative glance Jan A. R. A. M. van Hooff 6. Self-conscious emotional development Hedy Stegge 7. A biosocial perspective on embarrassment Ryan S. Darby and Christine R. Harris 8. The affective neuroscience of human social anxiety Vladimir Miskovic and Louis A. Schmidt Part III. The Blush in Social Interaction: 9. The interactive origins and outcomes of embarrassment Rowland S. Miller 10. Performing the blush: a dramaturgical perspective Susie Scott 11. Blushing and the private self W. Ray Crozier 12. Signal value and interpersonal implications of the blush Peter J. de Jong and Corine Dijk Part IV. Blushing Problems: Processes and Interventions: 13. Red, hot and scared: mechanisms underlying fear of blushing Corine Dijk and Peter J. de Jong 14. Psychological interventions for fear of blushing Michelle C. Capozzoli, Imke J. J. Vonk, Susan M. Bogels and Stefan G. Hofmann 15. Psychological aspects of rosacea Peter D. Drummond and Daphne Su Conclusions: 16. Conclusions, what we don't know and future directions for research W. Ray Crozier and Peter J. de Jong. |
Dorsal Scaphoid Subluxation on Sagittal Magnetic Resonance Imaging as a Marker for Scapholunate Ligament Tear. | PURPOSE
To evaluate the diagnostic utility of scaphoid dorsal subluxation on magnetic resonance imaging (MRI) as a predictor of scapholunate interosseous ligament (SLIL) tears and compare this with radiographic findings.
METHODS
Thirty-six MRIs were retrospectively reviewed: 18 with known operative findings of complete Geissler IV SLIL tears that were surgically repaired, and 18 MRIs performed for ulnar-sided wrist pain but no SLIL tear. Dorsal subluxation of the scaphoid was measured on the sagittal MRI cut, which demonstrated the maximum subluxation. Independent samples t tests were used to compare radiographic measurements of scapholunate (SL) gap, SL angle, and capitolunate/third metacarpal-lunate angles between the SLIL tear and the control groups and to compare radiographic measurements between wrists that had dorsal subluxation of the scaphoid and wrists that did not have dorsal subluxation. Interrater reliability of subluxation measurements on lateral radiographs and on MRI were calculated using kappa coefficients.
RESULTS
Thirteen of 18 wrists with complete SLIL tears had greater than 10% dorsal subluxation of the scaphoid relative to the scaphoid facet. Average subluxation in this group was 34%. Four of 18 wrists with known SLIL tears had no subluxation. No wrists without SLIL tears (control group) had dorsal subluxation. The SL angle, capitolunate/third metacarpal-lunate angle, and SL gap were greater in wrists that had dorsal subluxation of the scaphoid on MRI. Interrater reliability of measurements of dorsal subluxation of the scaphoid was better on MRI than on lateral radiographs.
CONCLUSIONS
An MRI demonstration of dorsal subluxation of the scaphoid, of as little as 10%, as a predictor of SLIL tear had a sensitivity of 72% and a specificity of 100%. The high positive predictive value indicates that the presence of dorsal subluxation accurately predicts SLIL tear.
TYPE OF STUDY/LEVEL OF EVIDENCE
Diagnostic II. |
Between East and West: Sappho Leontias (1830–1900) and her Educational Theory | The paper presents one of the most prominent Greek women teachers and educators of the nineteenth century, and a leading figure of Greek women’s education in Ottoman territory, Sappho Leontias (1830–1900). Within a transnational framework and based on the study of the writings of Sappho Leontias, the paper presents her educational views, theory and philosophy and focuses on her connections to educational thought and activity “beyond ethnic/national borders”, investigating the influences on her theoretical schema. A secondary intention of the paper is to present the influence Leontias exercised on the education of her times through an overview of her educational activity. The historical and social conditions of the time and place, as well as gender ideologies, are taken into consideration; the paper supports the position that they affect or shape individual projects and choices to a great extent. |
RGB-H-CbCr skin colour model for human face detection | While RGB, HSV and YUV (YCbCr) are standard models used in various colour imaging applications, not all of their information is necessary to classify skin colour. This paper presents a novel skin colour model, RGB-H-CbCr, for the detection of human faces. Skin regions are extracted using a set of bounding rules based on the skin colour distribution obtained from a training set. The segmented face regions are further classified using a parallel combination of simple morphological operations. Experimental results on a large photo data set have demonstrated that the proposed model is able to achieve good detection success rates for near-frontal faces of varying orientations, skin colour and background environment. The results are also comparable to those of the AdaBoost face classifier. |
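The bounding-rule idea can be illustrated with a per-pixel test that combines constraints in RGB, YCbCr and hue, in the spirit of the RGB-H-CbCr model. The thresholds below are illustrative placeholders drawn from commonly cited skin-colour rules, not the exact bounds published in this paper.

```python
import colorsys

def is_skin(r, g, b):
    """Rule-based skin test on one pixel with 8-bit r, g, b values."""
    # Rule A: bounds directly in RGB
    rule_a = (r > 95 and g > 40 and b > 20 and
              max(r, g, b) - min(r, g, b) > 15 and
              abs(r - g) > 15 and r > g and r > b)
    # Rule B: chrominance bounds after converting to YCbCr
    cb = 128 - 0.169 * r - 0.331 * g + 0.500 * b
    cr = 128 + 0.500 * r - 0.419 * g - 0.081 * b
    rule_b = 77 <= cb <= 127 and 133 <= cr <= 173
    # Rule C: hue bounds (skin hues cluster near red/orange)
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    rule_c = h < 50 / 360.0 or h > 230 / 360.0
    return rule_a and rule_b and rule_c

print(is_skin(200, 140, 120))   # a typical skin-toned pixel -> True
```

In the full pipeline such a pixel-wise mask would then be cleaned with the morphological operations mentioned above before face candidates are extracted.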
Tales from the grave: Opposing autopsy reports from a body exhumed. | We report an autopsy case of a 42-year-old woman who, when discovered, had been dead in her apartment for approximately 1 week under circumstances involving treachery, assault and possible drug overdose. This case is unique as it involved two autopsies of the deceased by two different medical examiners who reached opposing conclusions. The first autopsy was performed about 10 days after death. The second autopsy was performed after an exhumation approximately 2 years after burial. Evidence collected at the crime scene included blood samples from which DNA was extracted and analysed, fingerprints and clothing containing dried body fluids. The conclusion of the first autopsy was accidental death due to cocaine toxicity; the conclusion of the second autopsy was death due to homicide given the totality of evidence. Suspects 1 and 2 were linked to the death of the victim by physical evidence and suspect 3 was linked by testimony. Suspect 1 received life in prison, and suspects 2 and 3 received 45 and 20 years in prison, respectively. This case indicates that cocaine toxicity is difficult to determine in putrefied tissue and that exhumations can be important in collecting forensic information. It further reveals that the combined findings of medical examiners, even though contradictory, are useful in determining the circumstances leading to death in criminal justice. Thus, this report demonstrates that such criminal circumstances require comparative forensic review and, in such cases, scientific conclusions can be difficult. |
Federated Meta-Learning for Recommendation | Recommender systems have been widely studied from the machine learning perspective, where it is crucial to share information among users while preserving user privacy. In this work, we present a federated meta-learning framework for recommendation in which user information is shared at the level of algorithm, instead of model or data adopted in previous approaches. In this framework, user-specific recommendation models are locally trained by a shared parameterized algorithm, which preserves user privacy and at the same time utilizes information from other users to help model training. Interestingly, the model thus trained exhibits a high capacity at a small scale, which is energy- and communication-efficient. Experimental results show that recommendation models trained by meta-learning algorithms in the proposed framework outperform the state-of-the-art in accuracy and scale. For example, on a production dataset, a shared model under Google Federated Learning (McMahan et al., 2017) with 900,000 parameters has prediction accuracy 76.72%, while a shared algorithm under federated meta-learning with less than 30,000 parameters achieves accuracy of 86.23%. |
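A shared learning algorithm rather than a shared model can be made concrete with a small sketch. The snippet below is a minimal illustration, assuming a Reptile-style first-order meta-update and a toy linear scorer on synthetic data; the paper's actual meta-learning algorithm, model architecture, and production setup are not specified here.

```python
# Minimal sketch of federated meta-learning with a shared *algorithm*:
# each user adapts a small local model from shared initial parameters,
# and only the parameter update (not user data) is sent back.
# Reptile-style first-order meta-update, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                      # feature dimension of the tiny local recommender
meta_params = np.zeros(DIM)  # shared initialization maintained by the server

def local_adapt(params, X, y, lr=0.1, steps=20):
    """Train a linear scorer on one user's private data (stays on device)."""
    w = params.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= lr * grad
    return w

for rnd in range(50):                        # communication rounds
    selected = [rng.standard_normal((30, DIM)) for _ in range(5)]
    deltas = []
    for X in selected:                       # each element stands for one user
        y = X @ rng.standard_normal(DIM)     # synthetic private ratings
        w_user = local_adapt(meta_params, X, y)
        deltas.append(w_user - meta_params)  # only the update leaves the device
    meta_params += 0.5 * np.mean(deltas, axis=0)   # meta-step on the server
```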
Learning to Assign Orientations to Feature Points | We show how to train a Convolutional Neural Network to assign a canonical orientation to feature points given an image patch centered on the feature point. Our method improves feature point matching upon the state-of-the art and can be used in conjunction with any existing rotation sensitive descriptors. To avoid the tedious and almost impossible task of finding a target orientation to learn, we propose to use Siamese networks which implicitly find the optimal orientations during training. We also propose a new type of activation function for Neural Networks that generalizes the popular ReLU, maxout, and PReLU activation functions. This novel activation performs better for our task. We validate the effectiveness of our method extensively with four existing datasets, including two non-planar datasets, as well as our own dataset. We show that we outperform the state-of-the-art without the need of retraining for each dataset. |
Risk of death during and after opiate substitution treatment in primary care: prospective observational study in UK General Practice Research Database | OBJECTIVE
To investigate the effect of opiate substitution treatment at the beginning and end of treatment and according to duration of treatment.
DESIGN
Prospective cohort study. Setting UK General Practice Research Database.
PARTICIPANTS
Primary care patients with a diagnosis of substance misuse prescribed methadone or buprenorphine during 1990-2005. 5577 patients with 267 003 prescriptions for opiate substitution treatment followed-up (17 732 years) until one year after the expiry of their last prescription, the date of death before this time had elapsed, or the date of transfer away from the practice.
MAIN OUTCOME MEASURES
Mortality rates and rate ratios comparing periods in and out of treatment adjusted for sex, age, calendar year, and comorbidity; standardised mortality ratios comparing opiate users' mortality with general population mortality rates.
RESULTS
Crude mortality rates were 0.7 per 100 person years on opiate substitution treatment and 1.3 per 100 person years off treatment; standardised mortality ratios were 5.3 (95% confidence interval 4.0 to 6.8) on treatment and 10.9 (9.0 to 13.1) off treatment. Men using opiates had approximately twice the risk of death of women (mortality rate ratio 2.0, 1.4 to 2.9). In the first two weeks of opiate substitution treatment the crude mortality rate was 1.7 per 100 person years: 3.1 (1.5 to 6.6) times higher (after adjustment for sex, age group, calendar period, and comorbidity) than the rate during the rest of time on treatment. The crude mortality rate was 4.8 per 100 person years in weeks 1-2 after treatment stopped, 4.3 in weeks 3-4, and 0.95 during the rest of time off treatment: 9 (5.4 to 14.9), 8 (4.7 to 13.7), and 1.9 (1.3 to 2.8) times higher than the baseline risk of mortality during treatment. Opiate substitution treatment has a greater than 85% chance of reducing overall mortality among opiate users if the average duration approaches or exceeds 12 months.
CONCLUSIONS
Clinicians and patients should be aware of the increased mortality risk at the start of opiate substitution treatment and immediately after stopping treatment. Further research is needed to investigate the effect of average duration of opiate substitution treatment on drug related mortality. |
Detecting sentiment embedded in Arabic social media - A lexicon-based approach | Sentiment analysis aims at extracting sentiment embedded mainly in text reviews. The prevalence of semantic web technologies has encouraged users of the web to become authors as well as readers. People write on a wide range of topics. These writings embed valuable information for organizations and industries. This paper introduces a novel framework for sentiment detection in Arabic tweets. The heart of this framework is a sentiment lexicon. This lexicon was built by translating the SentiStrength English sentiment lexicon into Arabic and afterwards the lexicon was expanded using Arabic thesauri. To assess the viability of the suggested framework, the authors have collected and manually annotated a set of 4400 Arabic tweets. These tweets were classified according to their sentiment into positive or negative tweets using the proposed framework. The results reveal that lexicons are helpful for sentiment detection. The overall results are encouraging and open venues for future research. |
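The lexicon-based scoring step can be illustrated with a minimal sketch. The word lists and negation handling below are hypothetical placeholders, not the actual Arabic lexicon translated from SentiStrength or the framework's full pipeline.

```python
# Minimal sketch of lexicon-based sentiment scoring for short texts.
# The tiny word lists are invented stand-ins for a real sentiment lexicon.
LEXICON = {
    "good": 2, "excellent": 3, "happy": 2,
    "bad": -2, "terrible": -3, "sad": -2,
}
NEGATORS = {"not", "no", "never"}

def score(tweet: str) -> str:
    tokens = tweet.lower().split()
    total = 0
    for i, tok in enumerate(tokens):
        s = LEXICON.get(tok, 0)
        if s and i > 0 and tokens[i - 1] in NEGATORS:
            s = -s                        # simple negation handling
        total += s
    return "positive" if total > 0 else "negative" if total < 0 else "neutral"

print(score("the service was not bad"))  # -> positive
```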
Differential and gonad stage-dependent roles of kisspeptin1 and kisspeptin2 in reproduction in the modern teleosts, morone species. | Kisspeptin is an important regulator of reproduction in many vertebrates. The involvement of the two kisspeptins, Kiss1 and Kiss2, and their receptors, Gpr54-1 and Gpr54-2, in controlling reproduction was studied in the brains of the modern teleosts, striped and hybrid basses. In situ hybridization and laser capture microdissection followed by quantitative RT (QRT)-PCR detected coexpression of kiss1 and kiss2 in the hypothalamic nucleus of the lateral recess. Neurons expressing gpr54-1 and gpr54-2 were detected in several brain regions. In the preoptic area, gpr54-2 was colocalized in GnRH1 neurons while gpr54-1 was expressed in cells attached to GnRH1 fibers, indicating two different modes of GnRH1 regulation. The expression of all four genes was measured in the brains of males and females at different life stages using QRT-PCR. The levels of kiss1 and gpr54-1 mRNA, the latter being expressed in minute levels, were consistently lower than those of kiss2 and gpr54-2. While neither gene's expression increased at prepuberty, all were dramatically elevated in mature females. The levels of kiss2 mRNA increased also in mature males. Kiss1 peptide was less potent than Kiss2 in elevating plasma luteinizing hormone levels and in up-regulating gnrh1 and gpr54-2 expression in prepubertal hybrid bass in vivo. In contrast, during recrudescence, Kiss1 was more potent than Kiss2 in inducing luteinizing hormone release, and Kiss2 down-regulated gnrh1 and gpr54-2 expression. This is the first report in fish to demonstrate the alternating actions and the importance of both neuropeptides for reproduction. The organization of the kisspeptin system suggests a transitional evolutionary state between early to late evolving vertebrates. |
When should a new test become the current reference standard? | Key Summary Points Whether a new test (such as polymerase chain reaction) should replace a current reference test (such as culture) can be determined by a fair umpire test. A fair umpire test may have errors even larger than the tests under evaluation. What makes it a fair umpire is that its errors are independent of those of both tests. Possible umpires include causal exposures, concurrent testing, prognosis, or response to treatment. To decide whether the new test or current reference standard test is more accurate requires the fair umpire to be applied only to cases in which these tests' results differ. "Time is often a better diagnostician than the best anatomical pathologist." (Clifton Meador) New diagnostic tests do not simply alter our diagnostic processes; they may also change whom we classify as having a disease. Gold standard tests are often imperfect, which leads to modification or change of the standard, sometimes by deliberate decision and sometimes by stealth. A definitional shift occurs when a new test detects additional cases of apparent disease but creates uncertainty about whether these additional cases should be classified and treated in the same way. For example, magnetic resonance imaging for suspected multiple sclerosis has widened the diagnosis in practice, but the correlation of lesions found on imaging with eventual clinical course is poor (1); hence, some patients will be falsely labeled and may receive unnecessary treatment. This problem is not unique to multiple sclerosis; new enzyme tests, such as troponins (2); DNA methods in microbiology, such as polymerase chain reaction tests (3); new cardiac hormone tests, such as B-type natriuretic peptide for heart failure (4); and new imaging methods, such as magnetic resonance imaging and positron emission tomography in neurologic and musculoskeletal diseases (5), are changing the spectrum of patients considered to have specific diseases. These changes have led to discussion and dispute about how the additional cases detected should be managed. As our diagnostic armamentarium continues to expand and improve, this dilemma will increasingly challenge us. One trigger for changing the reference test is known flaws in the current reference test. For example, the insensitivity of culture for viruses and difficult-to-culture bacteria created interest in polymerase chain reaction methods, and problems with diagnosing diastolic heart failure created interest in alternatives to echocardiography. A second trigger is when a new technology, such as magnetic resonance imaging or positron emission tomography, suggests new diagnostic possibilities or stages of disease not previously perceived. This need or perception does not necessarily translate into greater diagnostic accuracy that leads to greater clinical benefits. Of course, not all new diagnostic technologies will lead to reconsideration of the reference test. Some may be only improvements on a triage test, or may be less invasive or more convenient replacements of other tests (6). But tests that do change our definition of disease require more than a consideration of accuracy; they also require assessment of the clinical consequences of the change. What principles should guide replacement of a current clinical reference test? Investigators may claim that the new test is better than existing reference standard tests on theoretical grounds, or is more reliable, or has better sensitivity. 
If the current reference standard is flawed, then we have no obvious means of deciding whether the proposed new reference standard is really better. Most previous work on this topic has considered the problem of estimating the accuracy of the new test (7) by using such methods as combined tests (using several tests as the reference standard) or discrepant analysis (8-10) (use of a third test to resolve disagreements between the 2 tests). However, estimation of accuracy is not essential in the decision to adopt a new reference standard (11). Any principles or criteria for replacing the current reference standard need to adequately assess the consequences of the switch, both nosologically and clinically. We aim to set out the criteria and principles for accepting a new test as a better reference standard or component of a reference standard. The 3 key principles that assist with deciding on the value of the new test are as follows. The consequences of the new reference test can be understood through the disagreements between the old and new reference tests. Resolving the disagreements between the old and new tests requires a fair, but not necessarily perfect, umpire test. Possible umpire tests include causal exposures, concurrent testing, prognosis, or the response to treatment. We examine each principle in turn and then discuss how they may work together. Disagreements between Old and New Tests A new reference standard test may lead to a diagnostic spectrum shift by either broadening or narrowing a diagnosis. Most commonly, an apparently more sensitive test broadens the diagnosis by detecting earlier or less consequential cases. Troponins in chest pain and magnetic resonance imaging in breast disease or suspected multiple sclerosis are typical examples of this. Less commonly, the test will narrow the diagnosis by excluding cases that were previously diagnosed, either because the disease was inconsequential or the reference test yielded an incorrect diagnosis. For example, nerve conduction studies narrowed the range of patients considered to have carpal tunnel syndrome. Narrowing also occurs when specific diagnoses are removed from a broad nonspecific category, such as when antibody testing led to reclassification of some patients previously labeled as having the irritable bowel syndrome (12). To understand the consequences of switching reference tests, we may use the following principle to focus investigation: Principle 1: The consequences of the new reference test can be understood through the disagreements between the old and new reference tests. The shaded cells in Table 1 show these consequences through the hypothetical comparison of a new and old reference standard test. The consequence of switching from the old to the new reference test is that the spectrum of disease in the patients being treated shifts from cell C (old test positive; new test negative) to cell B (old test negative; new test positive). Table 1. Possible Consequences of Switching from Old to New Reference Tests The possible disagreements in Table 1 may be divided into 2 simpler cases (Table 2). First, the new test may detect extra possible cases. This apparent additional sensitivity of the new test may also involve a shift to the detection of earlier or less severe cases, such as additional cases of myocardial ischemia detected by troponin (2) or cases of celiac disease detected by endomysial antibody (13). 
However, these extra cases may also be false positives or cases of less severe illness, so careful assessment is required. Second, the new test may detect fewer possible cases. This apparently better specificity of the new test may also involve some shift from earlier or less severe and less consequential cases. For example, concerns about white-coat hypertension have led to increasing use of ambulatory blood pressure monitoring to reduce false-positive diagnoses of hypertension. Similar concerns have been expressed about overdiagnosis of many conditions, particularly from screening techniques (such as lung and breast cancer screening) (14). However, these cases may also be false negatives, and careful assessment is again required. For example, because we would wish to see that persons classified as nonhypertensive by ambulatory blood pressure monitoring have a similar prognosis to those with no hypertension, the clinical consequences for the discordant group need careful assessment. Table 2. Performance of New Test if (I) More Sensitive or (II) More Specific It is possible to have cases in both cells B and C of Table 1. The crucial question is not simply the numbers in the discordant cells (B and C), but the nature of those cases. Are the additional cases detected by the new test actual (consequential) cases of the disease or merely false-positive results? And are the cases not detected by the new test actual (consequential) cases of the disease (that is, is the test yielding false-negative results)? If the numbers were equal but the additional cases detected by the new test (cell B) were more serious cases that had a greater gain from treatment than the missed cases (cell C), then the new test would provide a net benefit. Conversely, if the additional cases were less consequential and benefited less from treatment, then the shift would be undesirable. Deciding on the balance of consequences of the new test is not simple. A randomized trial may be required to demonstrate the value (or nonvalue) of treatment in patients in cells B and C. For example, the introduction of troponins has altered our ability to detect ischemic myocardial damage but led to uncertainty about appropriate treatments for this new spectrum of patients. This uncertainty led to several randomized trials in non-ST-elevation myocardial infarction, which, when pooled, suggest that early revascularization is beneficial (15), at least for some subgroups (16). Other examples or scenarios may be simpler, such as the identification of additional surgically correctable epileptic foci by positron emission tomography (5), and well-documented before-after case series may be sufficient (17). Randomized trials will only need to be considered for patients in the discordant cells (cells B and C in Table 1). However, such trials are sometimes unnecessary (18) or not feasible. We should begin with an examination of any known flaws in the current reference test (19) to see whether these might explain the differences. This is a useful start but alone is insufficient, and the next 2 principles provide guidance. |
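The principle that only discordant results need adjudication can be made concrete with a toy tally. The counts below are invented for illustration, assuming a binary umpire test applied to the cases where the old and new tests disagree (cells B and C).

```python
# Sketch of the "fair umpire" idea: only results where the old and new
# reference tests disagree (cells B and C) need adjudication.
# All counts below are invented for illustration only.
from collections import Counter

patients = (
    [("+", "+", "+")] * 80 +   # cell A: both tests positive
    [("-", "+", "+")] * 12 +   # cell B: extra cases found only by the new test
    [("-", "+", "-")] * 8 +    #         ...some of which the umpire rejects
    [("+", "-", "-")] * 5 +    # cell C: cases found only by the old test
    [("+", "-", "+")] * 2 +
    [("-", "-", "-")] * 93     # cell D: both tests negative
)  # tuples are (old_test, new_test, umpire)

discordant = [(o, n, u) for o, n, u in patients if o != n]
tally = Counter(discordant)
new_wins = tally[("-", "+", "+")] + tally[("+", "-", "-")]
old_wins = tally[("+", "-", "+")] + tally[("-", "+", "-")]
print(f"discordant cases: {len(discordant)}")                      # 27
print(f"umpire sides with new test: {new_wins}, old test: {old_wins}")  # 17 vs 10
```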
Convolutional neural networks and multimodal fusion for text aided image classification | With the exponential growth of web meta-data, exploiting multimodal online sources via standard search engine has become a trend in visual recognition as it effectively alleviates the shortage of training data. However, the web meta-data such as text data is usually not as cooperative as expected due to its unstructured nature. To address this problem, this paper investigates the numerical representation of web text data. We firstly adopt convolutional neural network (CNN) for web text modeling on top of word vectors. Combined with CNN for image, we present a multimodal fusion to maximize the discriminative power of visual and textual modality data for decision level and feature level simultaneously. Experimental results show that the proposed framework achieves significant improvement in large-scale image classification on Pascal VOC-2007 and VOC-2012 datasets. |
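Decision-level fusion of the two modalities can be sketched in a few lines. The weighting and probabilities below are illustrative assumptions; the paper's exact fusion rules at the decision and feature levels are not reproduced.

```python
# Minimal sketch of decision-level multimodal fusion: class probabilities
# from an image CNN and a text CNN are combined by a weighted average.
import numpy as np

def fuse(p_image: np.ndarray, p_text: np.ndarray, alpha: float = 0.6) -> int:
    """Return the class index chosen after late fusion of two modalities."""
    p = alpha * p_image + (1.0 - alpha) * p_text
    return int(np.argmax(p))

p_img = np.array([0.55, 0.30, 0.15])   # image CNN softmax output (illustrative)
p_txt = np.array([0.20, 0.70, 0.10])   # text CNN softmax output (illustrative)
print(fuse(p_img, p_txt))              # -> 1 with alpha = 0.6; the choice depends on alpha
```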
A Compositional Object-Based Approach to Learning Physical Dynamics | This paper presents the Neural Physics Engine (NPE), an object-based neural network architecture for learning predictive models of intuitive physics. The NPE draws on the strengths of both symbolic and neural approaches: like a symbolic physics engine, it is endowed with generic notions of objects and their interactions, but as a neural network it can also be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds of bouncing balls. By comparing to less structured architectures, we show that the NPE’s compositional representation of the causal structure in physical interactions improves its ability to predict movement, generalize to different numbers of objects, and infer latent properties of objects such as mass. |
[Delayed skin reaction in a group of 400 hospital patients: control study (author's transl)]. | Delayed hypersensitivity skin tests were carried out with 5 antigens - PPD, staphylococcus, streptokinase/streptodornase, candida and trichophytin - in 400 hospital patients without any known causes for diminished delayed hypersensitivity. The degree of reactivity to each antigen was: PPD 69.50%; candida 49.75%; streptokinase/streptodornase 44.50%; trichophytin 42.00%; staphylococcus 14.25%. Reactivity to either PPD or candida occurred in 86.50% of all cases. Positive response to streptokinase/streptodornase was present in 7%, which brings the total cases with reactions to one or more of the 3 antigens to 93.50%. Those who responded to trichophytin or staphylococcus were 3% only, bringing the total response of all cases to at least one antigen, to 96.5%. |
ATOL: A Framework for Automated Analysis and Categorization of the Darkweb Ecosystem | We present a framework for automated analysis and categorization of .onion websites in the darkweb to facilitate analyst situational awareness of new content that emerges from this dynamic landscape. Over the last two years, our team has developed a large-scale darkweb crawling infrastructure called OnionCrawler that acquires new onion domains on a daily basis, and crawls and indexes millions of pages from these new and previously known .onion sites. It stores this data into a research repository designed to help better understand Tor’s hidden service ecosystem. The analysis component of our framework is called Automated Tool for Onion Labeling (ATOL), which introduces a two-stage thematic labeling strategy: (1) it learns descriptive and discriminative keywords for different categories, and (2) uses these terms to map onion site content to a set of thematic labels. We also present empirical results of ATOL and our ongoing experimentation with it, as we have gained experience applying it to the entirety of our darkweb repository, now over 70 million indexed pages. We find that ATOL can perform site-level thematic label assignment more accurately than keyword-based schemes developed by domain experts — we expand the analyst-provided keywords using an automatic keyword discovery algorithm, and get 12% gain in accuracy by using a machine learning classification model. We also show how ATOL can discover categories on previously unlabeled onions and discuss applications of ATOL in supporting various analyses and investigations of the darkweb. |
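The first ATOL stage, learning descriptive and discriminative keywords per category, can be approximated with a simple frequency-ratio score. This is a hedged stand-in, not the paper's actual keyword-discovery algorithm, and the example pages are invented.

```python
# Sketch of per-category discriminative keyword discovery: a term is scored
# by how exclusive it is to one category relative to the whole corpus.
from collections import Counter

labeled_pages = {
    "marketplace": ["buy bitcoin escrow vendor listing tor",
                    "vendor escrow shipping bitcoin"],
    "forum":       ["thread reply moderator post tor",
                    "post reply thread rules moderator"],
}

def discriminative_keywords(pages_by_label, top_k=3):
    global_counts = Counter()
    per_label = {}
    for label, pages in pages_by_label.items():
        counts = Counter(word for page in pages for word in page.split())
        per_label[label] = counts
        global_counts.update(counts)
    keywords = {}
    for label, counts in per_label.items():
        # score: fraction of a term's corpus-wide occurrences inside this category
        scored = {w: n / global_counts[w] for w, n in counts.items()}
        keywords[label] = sorted(scored, key=scored.get, reverse=True)[:top_k]
    return keywords

print(discriminative_keywords(labeled_pages))  # shared term "tor" ranks low
```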
Accurate Floating-Point Summation Part II: Sign, K-Fold Faithful and Rounding to Nearest | In this Part II of this paper we first refine the analysis of error-free vector transformations presented in Part I. Based on that we present an algorithm for calculating the rounded-to-nearest result of s := ∑ p_i for a given vector of floating-point numbers p_i, as well as algorithms for directed rounding. A special algorithm for computing the sign of s is given, also working for huge dimensions. Assume a floating-point working precision with relative rounding error unit eps. We define and investigate a K-fold faithful rounding of a real number r. Basically the result is stored in a vector Res_ν of K non-overlapping floating-point numbers such that ∑ Res_ν approximates r with relative accuracy eps^K, and replacing Res_K by its floating-point neighbors in ∑ Res_ν forms a lower and upper bound for r. For a given vector of floating-point numbers with exact sum s, we present an algorithm for calculating a K-fold faithful rounding of s using solely the working precision. Furthermore, an algorithm for calculating a faithfully rounded result of the sum of a vector of huge dimension is presented. Our algorithms are fast in terms of measured computing time because they allow good instruction-level parallelism and require neither special operations such as access to mantissa or exponent, nor a branch in the inner loop, nor extra precision: the only operations used are standard floating-point addition, subtraction and multiplication in one working precision, for example double precision. Certain constants used in the algorithms are proved to be optimal. |
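The error-free transformation that such summation algorithms build on can be shown directly. The sketch below uses Knuth's TwoSum and a plain compensated cascade for illustration; it is not the paper's K-fold faithfully rounded or sign-computation algorithm.

```python
# Knuth's TwoSum splits a+b exactly into a rounded sum x and the exact
# rounding error y, using only working-precision operations.
def two_sum(a: float, b: float):
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)   # a + b == x + y exactly
    return x, y

def compensated_sum(values):
    s, err = 0.0, 0.0
    for v in values:
        s, e = two_sum(s, v)
        err += e                  # accumulate the exact rounding errors
    return s + err

data = [1e16, 1.0, -1e16, 1.0]
print(sum(data))                  # plain summation loses one small term -> 1.0
print(compensated_sum(data))      # recovers the exact sum -> 2.0
```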
A survey of MRI-based medical image analysis for brain tumor studies. | MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines. |
Dose-response for chiropractic care of chronic low back pain. | BACKGROUND CONTEXT
There have been no trials of optimal chiropractic care in terms of number of office visits for spinal manipulation and other therapeutic modalities.
PURPOSE
To conduct a pilot study to make preliminary identification of the effects of number of chiropractic treatment visits for manipulation with and without physical modalities (PM) on chronic low back pain and disability.
STUDY DESIGN/SETTING
Randomized controlled trial with a balanced 4x2 factorial design. Conducted in the faculty practice of a chiropractic college outpatient clinic.
PATIENT SAMPLE
Seventy-two patients with chronic, nonspecific low back pain of mechanical origin.
MAIN OUTCOME MEASURES
Von Korff pain and disability (100-point) scales.
METHODS
Patients were randomly allocated to visits (1, 2, 3 or 4 visits/week for 3 weeks) and to treatment regimen (spinal manipulation only or spinal manipulation with PM). All patients received high-velocity low-amplitude spinal manipulation. Half received one or two of the following PM at each visit: soft tissue therapy, hot packs, electrotherapy or ultrasound.
RESULTS
Pain intensity: At 4 weeks, there was a substantial linear effect of visits favoring a larger number of visits: 5.7 points per 3 visits (SE=2.3, p=.014). There was no effect of treatment regimen. At 12 weeks, the data suggested the potential for a similar effect of visits on patients receiving both manipulation and PM. Functional disability: At 4 weeks, a visits effect was noted (p=.018); the slope for group means was approximately 5 points per 3 visits. There were no group differences at 12 weeks.
CONCLUSIONS
There was a positive, clinically important effect of the number of chiropractic treatments for chronic low back pain on pain intensity and disability at 4 weeks. Relief was substantial for patients receiving care 3 to 4 times per week for 3 weeks. |
Regional Variation in Postoperative Myocardial Infarction in Patients Undergoing Vascular Surgery in the United States. | BACKGROUND
The aim of this study is to assess for regional variation in the incidence of postoperative myocardial infarction (POMI) following nonemergent vascular surgery across the United States to identify potential areas for quality improvement initiatives.
METHODS
We evaluated POMI rates across 17 regional Vascular Quality Initiative (VQI) groups that comprised 243 centers with 1,343 surgeons who performed 75,057 vascular operations from 2010 to 2014. Four procedures were included in the analysis: carotid endarterectomy (CEA, n = 39,118), endovascular abdominal aortic aneurysm (AAA) repair (EVAR, n = 15,106), infrainguinal bypass (INFRA, n = 17,176), and open infrarenal AAA repair (OAAA, n = 3,657). POMI was categorized by the method of diagnosis as troponin-only or clinical/ECG and rates were investigated in regions with ≥100 consecutive cases. Regions with significantly different POMI rates were defined as those >1.5 interquartile lengths beyond the 75th percentile of the distribution. Risk-adjusted rates of POMI were assessed using the VQI Cardiac Risk Index all-procedures prediction model to compare the observed versus expected rates for each region.
RESULTS
Overall rates of POMI varied by procedure type: CEA 0.8%, EVAR 1.1%, INFRA 2.7%, and OAAA 4.2% (P < 0.001). Significant variation in POMI rates was observed between regions, resulting in differing ranges of POMI rates for each procedure: CEA 0.5-2.0% (P = 0.001), EVAR 0.3-3.1% (P < 0.001), INFRA 1.1-4.8% (P < 0.001), and OAAA 2.2-10.0% (P < 0.001). A single region in 3 of the 4 procedure-specific datasets was identified as a statistical outlier with a significantly higher POMI rate after CEA, EVAR, and OAAA; this region was identical for the EVAR and OAAA datasets but was a different region for the CEA dataset. No significant variation in POMI was noted between regions after INFRA. Procedure-specific clinical POMI rates (mean; range) were significantly different between regions for EVAR (0.4%; 0-1.1%, P = 0.01) and INFRA (1.4%; 0.5-2.9%, P = 0.01), but not for CEA (0.4%; 0-0.8%, P = 0.53) or OAAA (1.6%; 0-3.8%, P = 0.23). Procedure-specific troponin-only POMI rates (mean; range) were significantly different between regions for all procedures: CEA (0.4%; 0.1-1.2%, P < 0.001), EVAR (0.7%; 0-2.1%, P < 0.001), INFRA (1.3%; 0.4-2.5%, P = 0.001), and OAAA (2.5%; 0-8.5%, P < 0.001). After risk adjustment, regional variation was again noted with 3 regions having higher and 4 regions having lower than expected rates of POMI.
CONCLUSIONS
Significant variation in POMI rates following major vascular surgery exists across VQI regions even after risk adjustment. These findings may present an opportunity for focused regional quality improvement efforts. |
ST-DBSCAN: An algorithm for clustering spatial-temporal data | This paper presents a new density-based clustering algorithm, ST-DBSCAN, which is based on DBSCAN. We propose three marginal extensions to DBSCAN related to the identification of (i) core objects, (ii) noise objects, and (iii) adjacent clusters. In contrast to the existing density-based clustering algorithms, our algorithm has the ability of discovering clusters according to non-spatial, spatial and temporal values of the objects. In this paper, we also present a spatial–temporal data warehouse system designed for storing and clustering a wide range of spatial–temporal data. We show an implementation of our algorithm using this data warehouse and present the data mining results. |
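The spatio-temporal neighborhood test that distinguishes ST-DBSCAN from plain DBSCAN can be sketched as follows; the thresholds eps1 (space) and eps2 (time) and the example points are illustrative, and the paper's additional handling of non-spatial values is omitted.

```python
# A neighbor must be close both in space (eps1) and in time (eps2);
# the rest of the DBSCAN machinery (core points, expansion) is unchanged.
import math

def st_neighbors(points, idx, eps1, eps2):
    """points: list of (x, y, t). Return indices in the spatio-temporal eps-neighborhood."""
    x0, y0, t0 = points[idx]
    out = []
    for j, (x, y, t) in enumerate(points):
        if j == idx:
            continue
        spatial = math.hypot(x - x0, y - y0)
        temporal = abs(t - t0)
        if spatial <= eps1 and temporal <= eps2:
            out.append(j)
    return out

pts = [(0.0, 0.0, 0), (0.5, 0.2, 1), (0.4, 0.1, 9), (5.0, 5.0, 0)]
print(st_neighbors(pts, 0, eps1=1.0, eps2=2))  # -> [1]; point 2 is near in space but too far in time
```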
A method for multi-target range and velocity detection in automotive FMCW radar | FMCW (Frequency Modulation Continuous Wave) radar has many useful applications, but serious problems can occur in multi-target situations. Range-velocity processing should suppress so-called ghost targets and detect targets that are missed when beat frequencies are shifted by the Doppler frequency. In this paper, a new method is proposed for effective identification of the correct pairs of beat frequencies received from real targets. |
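For a triangular sweep, the standard relations between up-chirp and down-chirp beat frequencies and a target's range and velocity can be sketched as below. The radar parameters are typical assumed values, and the naive sorted pairing shown is only a placeholder for the paper's pairing method.

```python
# Range and velocity from a paired up-chirp / down-chirp beat frequency in
# triangular FMCW. With several targets, the hard part (the paper's topic)
# is choosing correct pairs; wrong pairings produce "ghost" targets.
C = 3e8          # speed of light, m/s
FC = 77e9        # carrier frequency, Hz (typical automotive band, assumed)
B = 150e6        # sweep bandwidth, Hz (assumed)
T = 1e-3         # sweep duration, s (assumed)
SLOPE = B / T

def range_velocity(f_up: float, f_down: float):
    f_range = 0.5 * (f_up + f_down)        # range-induced beat frequency
    f_doppler = 0.5 * (f_down - f_up)      # Doppler shift (approaching > 0)
    r = C * f_range / (2.0 * SLOPE)
    v = C * f_doppler / (2.0 * FC)
    return r, v

ups = sorted([49_000.0, 79_500.0])         # naive sorted pairing, placeholder only
downs = sorted([51_000.0, 80_500.0])
for fu, fd in zip(ups, downs):
    print(range_velocity(fu, fd))          # ~ (50 m, 1.9 m/s) and (80 m, 1.0 m/s)
```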
Content-Based Image Retrieval: Color-selection exploited | This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on the 11 color categories that people use when thinking of and perceiving color. In addition, its usability is supported by Fitts’ law. The design of the color selection interface provides fast, unambiguous, intuitive, and accurate color selection. |
Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets | This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises two adversarial sub-models, a generator and a discriminator. The generator aims to generate sentences that are hard to distinguish from human-translated sentences (i.e., the golden target sentences), while the discriminator tries to distinguish the machine-generated sentences from human-translated ones. The two sub-models play a minimax game and achieve a win-win situation when they reach a Nash equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feed the evaluations back to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-of-the-art Transformer on English-German and Chinese-English translation tasks. |
A high efficiency low cost direct battery balancing circuit using a multi-winding transformer with reduced switch count | This paper presents a simple circuit for balancing battery cells. The circuit is composed of one low voltage MOSFET for each battery cell and a multi-winding transformer for a group of battery cells. Only one identical gate drive signal for all the MOSFETs is needed to control the balance current. Energy can be directly transferred from higher voltage cells to lower voltage cells. The balancing circuit can achieve ideal balance results as long as the multi-winding transformer has ideal symmetry. Other circuit components' asymmetry would only affect the balance speed. Simulation results are given to show the effectiveness. A circuit for balancing a four-cell battery group is tested. The experiment shows the energy transfer efficiency is up to 93% between cell energy transfers. |
TagATune: A Game for Music and Sound Annotation | Annotations of audio files can be used to search and index music and sound databases, provide data for system evaluation, and generate training data for machine learning. Unfortunately, the cost of obtaining a comprehensive set of annotations manually is high. One way to lower the cost of labeling is to create games with a purpose that people will voluntarily play, producing useful metadata as a by-product. TagATune is an audio-based online game that aims to extract descriptions of sounds and music from human players. This paper presents the rationale, design and preliminary results from a pilot study using a prototype of TagATune to label a subset of the FreeSound database. |
Review: the role of inflammation in depression. | The role of inflammation in major depressive disorder (MDD) has been of growing interest over the past two decades and evidence suggests it plays a role in depression. Evidence linking inflammation to MDD comes from three different observations (a) elevated levels of inflammatory markers in patients with MDD, even in the absence of illness, (b) co-occurrence of MDD with inflammatory illnesses and (c) increased risk of MDD with cytokine treatment. Cytokines have been found to influence almost every pathway involved in the pathogenesis of depression including alterations to the expression of neurotransmitters, neuroendocrine function, synaptic plasticity and basal ganglia. The similarities between cytokine-induced sickness behaviour and MDD further support a role of inflammation in depression as well as the anti-inflammatory effects of successful antidepressant treatment. This account describes the inflammatory mechanisms thought to be involved in MDD and the evidence for this. |
Capsular contracture rate in a low-risk population after primary augmentation mammaplasty. | BACKGROUND
The safety of augmentation mammaplasty has increased dramatically in the past 20 years. Capsular contracture (CC) is the most commonly reported complication of augmentation mammaplasty.
OBJECTIVES
The authors report the incidence of CC in a low-risk patient population after primary augmentation.
METHODS
The authors retrospectively reviewed the charts of 856 consecutive patients who underwent primary augmentation mammaplasty between 1999 and 2009. This series did not include patients who underwent breast augmentation-mastopexy, secondary augmentation, revision, and/or reconstruction. Data points included demographics, functional and aesthetic outcomes, complications, and revision rate/type.
RESULTS
The overall incidence of CC in 856 patients was 2.8%. Average follow-up time was 14.9 months. Antibiotic irrigation decreased CC rates from 3.9% to 0.4% (P = .004). Tobacco users had higher rates of contracture than nonsmokers (5.5% vs 1.9%; P = .036). Saline implants had a higher CC rate than silicone gel (4.3% vs 1.3%; P = .032). Using multivariate logistic regression, CC was 7.89 times more likely in saline implants than in silicone gel (P = .027, 95% confidence interval, 1.26-49.00).
CONCLUSIONS
Based on our findings, it is apparent that the early CC rate in primary augmentation can be less than 1%. To avoid CC, we advocate an inframammary approach, submuscular implant placement, and antibiotic irrigation of the breast pocket.
LEVEL OF EVIDENCE
3. |
Scalable Tucker Factorization for Sparse Tensors - Algorithms and Discoveries | Given sparse multi-dimensional data (e.g., (user, movie, time; rating) for movie recommendations), how can we discover latent concepts/relations and predict missing values? Tucker factorization has been widely used to solve such problems with multi-dimensional data, which are modeled as tensors. However, most Tucker factorization algorithms regard and estimate missing entries as zeros, which triggers a highly inaccurate decomposition. Moreover, few methods focusing on an accuracy exhibit limited scalability since they require huge memory and heavy computational costs while updating factor matrices. In this paper, we propose P-Tucker, a scalable Tucker factorization method for sparse tensors. P-Tucker performs alternating least squares with a row-wise update rule in a fully parallel way, which significantly reduces memory requirements for updating factor matrices. Furthermore, we offer two variants of P-Tucker: a caching algorithm P-Tucker-Cache and an approximation algorithm P-Tucker-Approx, both of which accelerate the update process. Experimental results show that P-Tucker exhibits 1.7-14.1x speed-up and 1.4-4.8x less error compared to the state-of-the-art. In addition, P-Tucker scales near linearly with the number of observable entries in a tensor and number of threads. Thanks to P-Tucker, we successfully discover hidden concepts and relations in a large-scale real-world tensor, while existing methods cannot reveal latent features due to their limited scalability or low accuracy. |
Autocatalysed oxidation of etophylline by permanganate in aqueous sulphuric acid medium- kinetics and mechanistic study | II . As the concentration of acid increases the rate of the reaction increases. The order with respect to acid concentration is less than unity. Based on the experimental results a suitable mechanism is proposed. The influence of temperature on the rate of reaction is studied. The activation parameters and thermodynamic quantities have been determined with respect to slow step of the mechanism. |
PDE-Net 2.0: Learning PDEs from Data with A Numeric-Symbolic Hybrid Deep Network | Partial differential equations (PDEs) are commonly derived based on empirical observations. However, recent advances of technology enable us to collect and store massive amount of data, which offers new opportunities for data-driven discovery of PDEs. In this paper, we propose a new deep neural network, called PDE-Net 2.0, to discover (time-dependent) PDEs from observed dynamic data with minor prior knowledge on the underlying mechanism that drives the dynamics. The design of PDE-Net 2.0 is based on our earlier work [1] where the original version of PDE-Net was proposed. PDE-Net 2.0 is a combination of numerical approximation of differential operators by convolutions and a symbolic multi-layer neural network for model recovery. Comparing with existing approaches, PDE-Net 2.0 has the most flexibility and expressive power by learning both differential operators and the nonlinear response function of the underlying PDE model. Numerical experiments show that the PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment. |
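The idea of representing a differential operator as a convolution, which underlies PDE-Net, can be illustrated with a fixed finite-difference stencil; PDE-Net 2.0 instead learns constrained kernels together with a symbolic network, which is not reproduced here.

```python
# A convolution with a central-difference stencil approximates d/dx.
# The kernel here is fixed; PDE-Net-style models learn such kernels
# (under moment constraints) from observed dynamics.
import numpy as np

dx = 0.01
x = np.arange(0.0, 2 * np.pi, dx)
u = np.sin(x)

kernel = np.array([1.0, 0.0, -1.0]) / (2.0 * dx)   # central-difference stencil
du_dx = np.convolve(u, kernel, mode="same")        # convolution acts as d/dx

interior = slice(1, -1)                            # edges suffer zero-padding effects
print(np.max(np.abs(du_dx[interior] - np.cos(x)[interior])))   # ~1.7e-5
```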
A Fully Automated Framework for Control of Linear Systems from Temporal Logic Specifications | We consider the following problem: given a linear system and a linear temporal logic (LTL) formula over a set of linear predicates in its state variables, find a feedback control law with polyhedral bounds and a set of initial states so that all trajectories of the closed loop system satisfy the formula. Our solution to this problem consists of three main steps. First, we partition the state space in accordance with the predicates in the formula, and construct a transition system over the partition quotient, which captures our capability of designing controllers. Second, using a procedure resembling model checking, we determine runs of the transition system satisfying the formula. Third, we generate the control strategy. Illustrative examples are included. |
ASSETS AND CONSTRAINTS RELATING TO THE LOCATION DECISIONS OF SMALL MANUFACTURING BUSINESSES IN VERMONT | The goal of this research is to identify the assets and constraints that exist specific to small business manufacturers in Vermont. To satisfy this goal, the study examines factors that influence location decisions as well as identifying what obstacles business owners have experienced. The idea for this project originated in response to the troubled economic condition of several Vermont communities, where unemployment rates are unusually high and income unusually low. Understanding what obstacles business faces might enable future ideas on how to solve these problems. Once assets are identified, they can be capitalized on, leading to more successful business operations. Preliminary data was collected through a telephone interview survey with Vermont small business owners. The data was analyzed in order to discover incentives and obstacles that existed for Vermont manufacturers as a whole, as well as in specific industries. Results from the survey suggested that Vermont's largest asset is that it offers an excellent quality of life. However, results alluded to several difficulties that owners are faced with, particularly complying with certain rules and regulations, obtaining adequate finances, a low-skilled workforce, and a weak communication network for small businesses. |
An Adaptive PMU Based Fault Detection / Location Technique for Transmission Lines Part I : Theory and Algorithms | An adaptive fault detection/location technique based on Phasor Measurement Unit (PMU) for an EHV/UHV transmission line is presented in this two paper set. This paper is Part I of this set. A fault detection/location index in terms of Clarke components of the synchronized voltage and current phasors is derived. The line parameter estimation algorithm is also developed to solve the uncertainty of parameters caused by aging of transmission lines. This paper also proposes a new Discrete Fourier Transform (DFT) based algorithm (termed as Smart Discrete Fourier Transform, SDFT) to eliminate system noise and measurement errors such that extremely accurate fundamental frequency components can be extracted for calculation of fault detection/location index. The EMTP was used to simulate a high voltage transmission line with faults at various locations. To simulate errors involved in measurements, Gaussian-type noise has been added to the raw output data generated by EMTP. Results have shown that the new DFT based method can extract exact phasors in the presence of frequency deviation and harmonics. The parameter estimation algorithm can also trace exact parameters very well. The accuracy of both new DFT based method and parameter estimation algorithm can achieve even up to 99.999% and 99.99% respectively, and will be presented in Part II. The accuracy of fault location estimation by the proposed technique can achieve even up to 99.9% in the performance evaluation, which is also presented in Part II. |
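The fundamental-frequency phasor that the fault-detection index relies on can be extracted with a conventional full-cycle DFT, sketched below with assumed sampling parameters; the paper's Smart DFT correction for frequency deviation and its parameter estimation are not reproduced.

```python
# Single-bin, full-cycle DFT phasor extraction from one cycle of samples.
import numpy as np

F0 = 50.0          # nominal system frequency, Hz (assumed)
N = 64             # samples per cycle (assumed)
FS = F0 * N

t = np.arange(N) / FS
signal = 141.4 * np.cos(2 * np.pi * F0 * t + np.deg2rad(-30.0))   # ~100 V rms at -30 deg

# Complex phasor at the fundamental (peak-value convention).
phasor = (2.0 / N) * np.sum(signal * np.exp(-1j * 2 * np.pi * np.arange(N) / N))

print(abs(phasor) / np.sqrt(2))        # ~100.0 (rms magnitude)
print(np.rad2deg(np.angle(phasor)))    # ~ -30.0 degrees
```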
Extensive unilateral nevus comedonicus without genetic abnormality. | Nevus comedonicus is considered a genodermatosis characterized by the presence of multiple groups of dilated pilosebaceous orifices filled with black keratin plugs, with sharply unilateral distribution mostly on the face, neck, trunk, upper arms. Lesions can appear at any age, frequently before the age of 10 years, but they are usually present at birth. We present a 2.7-year-old girl with a very severe form of nevus comedonicus. She exhibited lesions located initially at the left side of the body with a linear characteristic, following Blascko lines T1/T2, T5, T7, S1 /S2, but progressively developed lesions on the right side of the scalp and left gluteal area. |
Solution to Detect, Classify, and Report Illicit Online Marketing and Sales of Controlled Substances via Twitter: Using Machine Learning and Web Forensics to Combat Digital Opioid Access | BACKGROUND
On December 6 and 7, 2017, the US Department of Health and Human Services (HHS) hosted its first Code-a-Thon event aimed at leveraging technology and data-driven solutions to help combat the opioid epidemic. The authors—an interdisciplinary team from academia, the private sector, and the US Centers for Disease Control and Prevention—participated in the Code-a-Thon as part of the prevention track.
OBJECTIVE
The aim of this study was to develop and deploy a methodology using machine learning to accurately detect the marketing and sale of opioids by illicit online sellers via Twitter as part of participation at the HHS Opioid Code-a-Thon event.
METHODS
Tweets were collected from the Twitter public application programming interface stream filtered for common prescription opioid keywords in conjunction with participation in the Code-a-Thon from November 15, 2017 to December 5, 2017. An unsupervised machine learning–based approach was developed and used during the Code-a-Thon competition (24 hours) to obtain a summary of the content of the tweets to isolate those clusters associated with illegal online marketing and sale using a biterm topic model (BTM). After isolating relevant tweets, hyperlinks associated with these tweets were reviewed to assess the characteristics of illegal online sellers.
RESULTS
We collected and analyzed 213,041 tweets over the course of the Code-a-Thon containing keywords codeine, percocet, vicodin, oxycontin, oxycodone, fentanyl, and hydrocodone. Using BTM, 0.32% (692/213,041) tweets were identified as being associated with illegal online marketing and sale of prescription opioids. After removing duplicates and dead links, we identified 34 unique “live” tweets, with 44% (15/34) directing consumers to illicit online pharmacies, 32% (11/34) linked to individual drug sellers, and 21% (7/34) used by marketing affiliates. In addition to offering the “no prescription” sale of opioids, many of these vendors also sold other controlled substances and illicit drugs.
CONCLUSIONS
The results of this study are in line with prior studies that have identified social media platforms, including Twitter, as a potential conduit for supply and sale of illicit opioids. To translate these results into action, authors also developed a prototype wireframe for the purposes of detecting, classifying, and reporting illicit online pharmacy tweets selling controlled substances illegally to the US Food and Drug Administration and the US Drug Enforcement Agency. Further development of solutions based on these methods has the potential to proactively alert regulators and law enforcement agencies of illegal opioid sales, while also making the online environment safer for the public. |
(Language, Function and Cognition, 2011-12) Introduction to Systemic Functional Linguistics for Discourse Analysis | This course provides a basic introduction to Systemic Functional Linguistics (SFL), particularly those aspects most applicable to analyzing discourse. |
GENERATION AND TESTING OF RANDOM NUMBERS FOR CRYPTOGRAPHIC APPLICATIONS | Random number sequences are of crucial importance in almost every aspect of modern digital cryptography, having a significant impact on the strength of cryptographic primitives in securing secret information by rendering it unknown, unguessable, unpredictable and irreproducible for an adversary. Although the major importance of high performance and high quality randomness generators in cryptography attracts an increasing interest from the research community, unfortunately this tendency is not always shared among security application developers. Thus many available cryptographic systems continue to be compromised due to the utilization of inadequate randomness generators. In this context, the present paper aims to accentuate the crucial importance of random numbers in cryptography and to suggest a set of efficient and practical methods for the generation and testing of random number sequences intended for cryptographic applications. |
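One of the simplest statistical checks applied to candidate cryptographic sequences, the NIST SP 800-22 frequency (monobit) test, can be sketched in a few lines; a real assessment would run a full battery of such tests.

```python
# NIST SP 800-22 frequency (monobit) test: the proportion of ones in the
# sequence should be close to 1/2; a p-value >= 0.01 is taken as "random enough".
import math
import secrets

def monobit_p_value(bits: str) -> float:
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)   # +1 for '1', -1 for '0'
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

random_bits = bin(secrets.randbits(10_000))[2:].zfill(10_000)
biased_bits = "1" * 6000 + "0" * 4000

print(monobit_p_value(random_bits))   # typically well above 0.01
print(monobit_p_value(biased_bits))   # ~0, so the biased sequence fails
```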
Air pollution and multiple acute respiratory outcomes. | Short-term effects of air pollutants on respiratory mortality and morbidity have been consistently reported but usually studied separately. To more completely assess air pollution effects, we studied hospitalisations for respiratory diseases together with out-of-hospital respiratory deaths. A time-stratified case-crossover study was carried out in six Italian cities from 2001 to 2005. Daily particulate matter (particles with a 50% cut-off aerodynamic diameter of 10 μm (PM10)) and nitrogen dioxide (NO2) associations with hospitalisations for respiratory diseases (n = 100 690), chronic obstructive pulmonary disease (COPD) (n = 38 577), lower respiratory tract infections (LRTI) among COPD patients (n = 9886) and out-of-hospital respiratory deaths (n = 5490) were estimated for residents aged ≥35 years. For an increase of 10 μg·m(-3) in PM10, we found an immediate 0.59% (lag 0-1 days) increase in hospitalisations for respiratory diseases and a 0.67% increase for COPD; the 1.91% increase in LRTI hospitalisations lasted longer (lag 0-3 days) and the 3.95% increase in respiratory mortality lasted 6 days. Effects of NO2 were stronger and lasted longer (lag 0-5 days). Age, sex and previous ischaemic heart disease acted as effect modifiers for different outcomes. Analysing multiple rather than single respiratory events shows stronger air pollution effects. The temporal relationship between the pollutant increases and hospitalisations or mortality for respiratory diseases differs. |
Mirtazapine for anorexia nervosa with depression. | OBJECTIVE
To report the use of Mirtazapine in the treatment of anorexia nervosa with depression primarily regarding its propensity for weight gain.
METHOD
We present an outpatient case report of anorexia nervosa with depression. The patient's subsequent progress was recorded.
RESULTS
The patient gained 2.5 kg within 3 months to eventually attain a body mass index of 15 after 5 months. Her depression achieved full remission at 6 weeks of treatment.
CONCLUSIONS
Mirtazapine is the choice medication in this case. However, treating depression requires caution, given these patients' physical vulnerability. Controlled trials of Mirtazapine for anorexia nervosa are needed. |
Effect of combination exercise training on metabolic syndrome parameters in postmenopausal women with breast cancer. | CONTEXT
Studies have shown that physical activity or exercise training may decrease the metabolic syndrome.
AIM
The aim of the present study is to clarify the effect of combination exercise training on metabolic syndrome parameters in postmenopausal women with breast cancer.
SETTING AND DESIGN
Twenty-nine postmenopausal women (58.27 +/- 6.31 years) with breast cancer were randomly divided into an experimental group (n=14) and a control group (n=15).
MATERIALS AND METHODS
Subjects in the experimental group performed 15 weeks of combination exercise training, including walking (2 sessions per week) and resistance training (2 sessions per week, on days other than the walking days). Before and after the 15 weeks, fasting insulin and glucose, insulin resistance, high-density lipoprotein cholesterol (HDL-C) and triglyceride (TG) were calculated. Also, VO2peak, resting heart rate (RHR), systolic blood pressure (SBP), body weight (BW), body mass index (BMI) and waist-to-hip ratio (WHR) were measured in both groups.
STATISTICAL ANALYSIS USED
Mean values of two groups in pre and post test were compared by independent and paired t-test for all measurements (P ≤ 0.05).
RESULTS
Significant differences were observed for VO2peak, RHR, BW, BMI, WHR, SBP, fasting insulin and glucose, HDL-C and TG between experimental and control groups after 15 weeks (P< 0.05).
CONCLUSIONS
Combination exercise training can improve metabolic syndrome parameters in postmenopausal women with breast cancer. |
Assessment of Still and Moving Images in the Diagnosis of Gastric Lesions Using Magnifying Narrow-Band Imaging in a Prospective Multicenter Trial | OBJECTIVES
Magnifying narrow-band imaging (M-NBI) is more accurate than white-light imaging for diagnosing small gastric cancers. However, it is uncertain whether moving M-NBI images have additional effects in the diagnosis of gastric cancers compared with still images.
DESIGN
A prospective multicenter cohort study.
METHODS
To identify the additional benefits of moving M-NBI images by comparing the diagnostic accuracy of still images only with that of both still and moving images. Still and moving M-NBI images of 40 gastric lesions were obtained by an expert endoscopist prior to this prospective multicenter cohort study. Thirty-four endoscopists from ten different Japanese institutions participated in the prospective multicenter cohort study. Each study participant was first tested using only still M-NBI images (still image test), then tested 1 month later using both still and moving M-NBI images (moving image test). The main outcome was a difference in the diagnostic accuracy of cancerous versus noncancerous lesions between the still image test and the moving image test.
RESULTS
Thirty-four endoscopists were analysed. There was no significant difference between the still and moving image tests in the diagnostic accuracy for cancerous versus noncancerous lesions (59.9% versus 61.5%), sensitivity (53.4% versus 55.9%), or specificity (67.0% versus 67.6%). There was also no significant difference between the still and moving image tests in the diagnostic accuracy for the demarcation line (65.4% versus 65.5%), microvascular pattern (56.7% versus 56.9%), or microsurface pattern (48.1% versus 50.9%). Diagnostic accuracy did not differ significantly between the still and moving image tests in subgroups defined by the endoscopic findings of the lesions.
CONCLUSIONS
The addition of moving M-NBI images to still M-NBI images does not improve the diagnostic accuracy for gastric lesions. It is reasonable to concentrate on taking sharp still M-NBI images during endoscopic observation and use them for diagnosis.
TRIAL REGISTRATION
Umin.ac.jp UMIN-CTR000008048. |
Action observation and robotic agents: Learning and anthropomorphism | The 'action observation network' (AON), which is thought to translate observed actions into motor codes required for their execution, is biologically tuned: it responds more to observation of human, than non-human, movement. This biological specificity has been taken to support the hypothesis that the AON underlies various social functions, such as theory of mind and action understanding, and that, when it is active during observation of non-human agents like humanoid robots, it is a sign of ascription of human mental states to these agents. This review will outline evidence for biological tuning in the AON, examining the features which generate it, and concluding that there is evidence for tuning to both the form and kinematic profile of observed movements, and little evidence for tuning to belief about stimulus identity. It will propose that a likely reason for biological tuning is that human actions, relative to non-biological movements, have been observed more frequently while executing corresponding actions. If the associative hypothesis of the AON is correct, and the network indeed supports social functioning, sensorimotor experience with non-human agents may help us to predict, and therefore interpret, their movements. |
Deep and Confident Prediction for Time Series at Uber | Reliable uncertainty estimation for time series prediction is critical in many fields, including physics, biology, and manufacturing. At Uber, probabilistic time series forecasting is used for robust prediction of number of trips during special events, driver incentive allocation, as well as real-time anomaly detection across millions of metrics. Classical time series models are often used in conjunction with a probabilistic formulation for uncertainty estimation. However, such models are hard to tune, scale, and add exogenous variables to. Motivated by the recent resurgence of Long Short Term Memory networks, we propose a novel end-to-end Bayesian deep model that provides time series prediction along with uncertainty estimation. We provide detailed experiments of the proposed solution on completed trips data, and successfully apply it to large-scale time series anomaly detection at Uber. |
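One common way to obtain predictive uncertainty from an LSTM forecaster, in the spirit of the approach described above, is Monte Carlo dropout: keep dropout active at inference time and aggregate many stochastic forward passes. The sketch below is illustrative only; the window length, layer sizes, dropout rate, and number of MC samples are assumptions, not the authors' configuration.

```python
# Monte Carlo dropout for uncertainty in an LSTM forecaster (illustrative sketch).
# Window length, layer sizes, and number of MC samples are arbitrary assumptions.
import numpy as np
import tensorflow as tf

def build_model(window=48):
    inp = tf.keras.Input(shape=(window, 1))
    x = tf.keras.layers.LSTM(64)(inp)
    x = tf.keras.layers.Dropout(0.3)(x)          # kept active at inference below
    x = tf.keras.layers.Dense(32, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def mc_predict(model, x, n_samples=100):
    """Run n stochastic forward passes with dropout enabled and return the
    predictive mean and standard deviation."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# Usage (with a trained model and a batch of input windows x_batch):
# mean, std = mc_predict(model, x_batch)
# lower, upper = mean - 1.96 * std, mean + 1.96 * std   # approximate 95% interval
```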
Introducing Thing Descriptions and Interactions: An Ontology for the Web of Things | The Internet of Things (IoT) and the Web are closely related to each other. On the one hand, the Semantic Web has been including vocabularies and semantic models for the Internet of Things. On the other hand, the so-called Web of Things (WoT) advocates architectures relying on established Web technologies and RESTful interfaces for the IoT. In this paper, we present a vocabulary for WoT that aims at defining IoT concepts using terms from the Web. Notably, it includes two concepts identified as the core WoT resources: Thing Description (TD) and Interaction, that have been first elaborated by the W3C interest group for WoT. Our proposal is built upon the ontological pattern Identifier, Resource, Entity (IRE) that was originally designed for the Semantic Web. To better analyze the alignments our proposal allows, we reviewed existing IoT models as a vocabulary graph, complying with the approach of Linked Open Vocabularies (LOV). |
Classification cost: An empirical comparison among traditional classifier, Cost-Sensitive Classifier, and MetaCost | Loan fraud is a critical factor in the insolvency of financial institutions, so companies make an effort to reduce the loss from fraud by building a model for proactive fraud prediction. However, there are still two critical problems to be resolved for fraud detection: (1) the lack of cost sensitivity between type I error and type II error in most prediction models, and (2) highly skewed distribution of class in the dataset used for fraud detection because of sparse fraud-related data. The objective of this paper is to examine whether classification cost is affected both by the cost-sensitive approach and by skewed distribution of class. To that end, we compare the classification cost incurred by a traditional cost-insensitive classification approach and two cost-sensitive classification approaches, Cost-Sensitive Classifier (CSC) and MetaCost. Experiments were conducted with a credit loan dataset from a major financial institution in Korea, while varying the distribution of class in the dataset and the number of input variables. The experiments showed that the lowest classification cost was incurred when the MetaCost approach was used and when non-fraud data and fraud data were balanced. In addition, the dataset that includes all delinquency variables was shown to be most effective in reducing the classification cost. |
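The core comparison described above (cost-insensitive versus cost-sensitive classification under an asymmetric cost matrix) can be sketched with a simple instance-weighting scheme. This is not the CSC or MetaCost procedure itself, which uses more elaborate reweighting/relabeling; the cost values and the synthetic imbalanced data below are assumptions for illustration.

```python
# Comparing a cost-insensitive classifier with a simple cost-sensitive variant.
# Cost values and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# cost[i, j] = cost of predicting class j when the true class is i
# (missing a fraud case is assumed far costlier than a false alarm)
COST = np.array([[0.0, 1.0],    # true non-fraud: false positive costs 1
                 [10.0, 0.0]])  # true fraud:     false negative costs 10

def total_cost(y_true, y_pred, cost=COST):
    return sum(cost[t, p] for t, p in zip(y_true, y_pred))

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cost-insensitive baseline
base = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Cost-sensitive variant: weight each training example by the cost of
# misclassifying its class (a simple instance-weighting scheme)
weights = np.where(y_tr == 1, COST[1, 0], COST[0, 1])
sensitive = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr, sample_weight=weights)

print("baseline cost      :", total_cost(y_te, base.predict(X_te)))
print("cost-sensitive cost:", total_cost(y_te, sensitive.predict(X_te)))
```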
Reliability, validity and normative data for the Danish Beck Youth Inventories. | This study examines reliability and validity and establishes Danish norms for the Danish version of the Beck Youth Inventories (BYI) (Beck, Beck & Jolly, 2001), which consists of five self-report scales: Self-Concept (BSCI), Anxiety (BAI), Depression (BDI), Anger (BANI) and Disruptive Behavior (BDBI). A total of 1,116 school children and 128 clinical children, aged 7-14, completed the BYI. Internal consistency coefficients were high. Most test-retest correlations were >0.70. A test-retest difference was found for BAI. Exploratory and confirmatory factor analysis indicated that the five factor structure of the instrument was justified. The BSCI, BAI and BDI discriminated moderately between the norming sample and the clinical group, and the latter group included more children who exceeded the 90th percentile of the norming sample. Diagnostic groups scored higher on relevant scales than norms. Only BSCI and BDI differentiated between diagnostic groups. The BYI showed acceptable internal consistency and test-retest stability, except for BAI. The BYI did not adequately differentiate between internalizing disorders. |
The relationship of palliative transurethral resection of the prostate with disease progression in patients with prostate cancer. | OBJECTIVES
To test, in a prostate-cancer population-based database, the validity of the finding that in single-institution series, palliative transurethral resection of prostate (TURP) is associated with an increased risk of progression.
PATIENTS AND METHODS
Using the Surveillance, Epidemiology, and End Results (SEER) Registry, we identified men who had a TURP subsequent to their diagnosis of prostate cancer, from 1998 or 1999. The outcome of interest was disease progression, as defined by the initiation of androgen-deprivation therapy or procedures indicating progressive urinary obstruction. Multivariable logistic regression analysis was used to assess the adjusted odds of signal events related to disease progression, adjusting for the concurrent effect of the covariates.
RESULTS
There were 29,361 men with prostate cancer and 2742 (9.3%) had a TURP after the diagnosis. These men had a mean age of 75 years and were unlikely to undergo definitive primary treatment. Men receiving TURP were more likely to undergo orchidectomy than men who did not have a TURP (odds ratio 1.64; 95% confidence interval 1.03-2.60) even after adjusting for differences in cancer-directed treatment, tumour stage and grade, prostate-specific antigen level, race, and age at diagnosis. These men were also more likely to have malignant urinary obstruction (ureteric and bladder outlet) than were men who did not have TURP.
CONCLUSION
The requirement for TURP is an adverse prognostic marker even when this is adjusted for classical tumour characteristics. Although the exact reasons for this finding are unclear, consideration should be given to adjuvant treatment in patients undergoing TURP. |
How to detect the Cuckoo Sandbox and to Strengthen it? | Nowadays a lot of malware is analyzed in virtual machines. The Cuckoo sandbox (Cuckoo DevTeam: Cuckoo sandbox. http://www.cuckoosandbox.org , 2013) can log every action performed by the malware on the virtual machine. To protect themselves and to evade detection, malware samples try to detect whether they are running in an emulated environment or on a real machine. With a few modifications and tricks applied to Cuckoo and the virtual machine, we can try to prevent malware from detecting that it is under analysis, or at least make such detection harder. It is not necessary to apply all the modifications, because they may produce a significant overhead, and if the malware checks its execution time it may detect the anomaly and conclude that it is running in a virtual machine. The present paper shows how malware can detect the Cuckoo sandbox and how we can counter that.
Blind Image Deblurring Using Dark Channel Prior | We present a simple and effective blind image deblurring method based on the dark channel prior. Our work is inspired by the interesting observation that the dark channel of blurred images is less sparse. While most image patches in the clean image contain some dark pixels, these pixels are not dark when averaged with neighboring high-intensity pixels during the blur process. This change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring on various scenarios, including natural, face, text, and low-illumination images. However, sparsity of the dark channel introduces a non-convex non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably with methods that are well-engineered for specific scenarios. |
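The dark channel the abstract refers to is a per-pixel minimum over color channels followed by a local minimum filter over a patch. A minimal sketch is below; the 15x15 patch size and the sparsity threshold in the comment are assumptions, not the paper's settings.

```python
# Dark channel of an image: per-pixel minimum over color channels, followed by
# a local minimum filter over a patch. The 15x15 patch size is an assumption.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: HxWx3 float array in [0, 1]. Returns the HxW dark channel."""
    min_over_channels = img.min(axis=2)
    return minimum_filter(min_over_channels, size=patch)

# For a sharp natural image, most patches contain near-zero dark-channel values;
# blurring averages dark pixels with bright neighbors, so the dark channel of a
# blurred image is noticeably less sparse, e.g.:
# sparsity = (dark_channel(img) < 0.05).mean()
```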
Diagnostic Performance of a Smartphone‐Based Photoplethysmographic Application for Atrial Fibrillation Screening in a Primary Care Setting | BACKGROUND
Diagnosing atrial fibrillation (AF) before ischemic stroke occurs is a priority for stroke prevention in AF. Smartphone camera-based photoplethysmographic (PPG) pulse waveform measurement discriminates between different heart rhythms, but its ability to diagnose AF in real-world situations has not been adequately investigated. We sought to assess the diagnostic performance of a standalone smartphone PPG application, Cardiio Rhythm, for AF screening in a primary care setting.
METHODS AND RESULTS
Patients with hypertension, with diabetes mellitus, and/or aged ≥65 years were recruited. A single-lead ECG was recorded by using the AliveCor heart monitor with tracings reviewed subsequently by 2 cardiologists to provide the reference standard. PPG measurements were performed by using the Cardiio Rhythm smartphone application. AF was diagnosed in 28 (2.76%) of 1013 participants. The diagnostic sensitivity of the Cardiio Rhythm for AF detection was 92.9% [95% CI 77-99%] and was higher than that of the AliveCor automated algorithm (71.4% [95% CI 51-87%]). The specificities of the Cardiio Rhythm and the AliveCor automated algorithm were comparable (97.7% [95% CI 97-99%] versus 99.4% [95% CI 99-100%]). The positive predictive value of the Cardiio Rhythm was lower than that of the AliveCor automated algorithm (53.1% [95% CI 38-67%] versus 76.9% [95% CI 56-91%]); both had a very high negative predictive value (99.8% [95% CI 99-100%] versus 99.2% [95% CI 98-100%]).
CONCLUSIONS
The Cardiio Rhythm smartphone PPG application provides an accurate and reliable means to detect AF in patients at risk of developing AF and has the potential to enable population-based screening for AF. |
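The sensitivity, specificity, and predictive values reported above all follow from a 2x2 confusion table. A minimal sketch is below; the counts are chosen only to approximately reproduce the reported Cardiio Rhythm figures and are reconstructed assumptions, not the study's raw data.

```python
# Screening metrics from a 2x2 confusion table. The counts are illustrative
# reconstructions, not the study's raw data.
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # detected AF / all true AF
        "specificity": tn / (tn + fp),   # correct negatives / all non-AF
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# ~26/28 AF cases detected, ~23 false positives among 985 non-AF participants
print(screening_metrics(tp=26, fp=23, fn=2, tn=962))
```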
Short-Circuit Fault Diagnosis for Three-Phase Inverters Based on Voltage-Space Patterns | This paper introduces a fault detection and isolation (FDI) method for faulty metal-oxide-semiconductor field-effect transistors in a three-phase pulsewidth-modulated (PWM) voltage source inverter. Short-circuit switch faults are the leading cause of failure in power converters. It is extremely vital to detect them in the early stages to prevent unwanted shutdown and catastrophic failures in motor drives and power generation systems. Against the common FDI methods for power electronic inverters that use phase currents and PWM gate control signals, the proposed method only uses the inverter output voltages. This method analyzes the PWM switching signals in a time-free domain that is called the voltage space. For a healthy inverter, the projection of the state transitions in the voltage space results in a cubic pattern. Each short-circuit switch fault uniquely changes the voltage-space pattern that allows isolating the faulty switch. The fault detection time is only within one PWM carrier period, which is significantly faster than current-based conventional methods. The FDI result does not depend on the load, the PWM switching frequency, and the feedback loop. This method can address the reliability problem of multilevel inverters in renewable electrical generation systems and can dramatically reduce the number of required sensors. |
Feminizing genitoplasty for congenital adrenal hyperplasia: what happens at puberty? | PURPOSE
We document the postpubertal outcome of feminizing genitoplasty.
MATERIALS AND METHODS
A total of 14 girls, mean age 13.1 years, with congenital adrenal hyperplasia were assessed under anesthesia by a pediatric urologist, plastic/reconstructive surgeon and gynecologist. Of these patients 13 had previously undergone feminizing genitoplasty in early childhood at 4 different specialist centers in the United Kingdom.
RESULTS
The outcome of clitoral surgery was unsatisfactory (clitoral atrophy or prominent glans) in 6 girls, including 3 whose genitoplasty had been performed by 3 different specialist pediatric urologists. Additional vaginal surgery was necessary for normal comfortable intercourse in 13 patients. Fibrosis and scarring were most evident in those who had undergone aggressive attempts at vaginal reconstruction in infancy.
CONCLUSIONS
These disappointing results, even in the hands of specialists, highlight the importance of late followup and challenge the prevailing assumption that total correction can be achieved with a single stage operation in infancy. Although simple exteriorization of a low vagina can reasonably be combined with cosmetic correction of virilized external genitalia in infancy, we now believe that in some cases it may be best to defer definitive reconstruction of the intermediate or high vagina until after puberty. The psychological issues surrounding sexuality in these patients are inadequately researched and poorly understood. |
Age differences in the enjoyment of incongruity-resolution and nonsense humor during adulthood. | This study tested a model of the development of incongruity-resolution and nonsense humor during adulthood. Subjects were 4,292 14- to 66-year-old Germans. Twenty jokes and cartoons representing structure-based humor categories of incongruity resolution and nonsense were rated for funniness and aversiveness. Humor structure preferences were also assessed with a direct comparison task. The results generally confirmed the hypotheses. Incongruity-resolution humor increased in funniness and nonsense humor decreased in funniness among progressively older subjects after the late teens. Aversiveness of both forms of humor generally decreased over the ages sampled. Age differences in humor appreciation were strongly correlated with age differences in conservatism. An especially strong parallel was found between age differences in appreciation of incongruity-resolution humor and age differences in conservatism. |
SPOT the Drug! An Unsupervised Pattern Matching Method to Extract Drug Names from Very Large Clinical Corpora | Although structured electronic health records are becoming more prevalent, much information about patient health is still recorded only in unstructured text. “Understanding” these texts has been a focus of natural language processing (NLP) research for many years, with some remarkable successes, yet there is more work to be done. Knowing the drugs patients take is not only critical for understanding patient health (e.g., for drug-drug interactions or drug-enzyme interaction), but also for secondary uses, such as research on treatment effectiveness. Several drug dictionaries have been curated, such as RxNorm, FDA's Orange Book, or NCI, with a focus on prescription drugs. Developing these dictionaries is a challenge, but even more challenging is keeping these dictionaries up-to-date in the face of a rapidly advancing field: it is critical to identify grapefruit as a “drug” for a patient who takes the prescription medicine Lipitor, due to their known adverse interaction. To discover other, new adverse drug interactions, a large number of patient histories often need to be examined, necessitating not only accurate but also fast algorithms to identify pharmacological substances. In this paper we propose a new algorithm, SPOT, which identifies drug names that can be used as new dictionary entries from a large corpus, where a “drug” is defined as a substance intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease. Measured against a manually annotated reference corpus, we present precision and recall values for SPOT. SPOT is language and syntax independent, can be run efficiently to keep dictionaries up-to-date, and can also suggest words and phrases that may be misspellings or uncatalogued synonyms of a known drug. We show how SPOT's lack of reliance on NLP tools makes it robust in analyzing clinical medical text. SPOT is a generalized bootstrapping algorithm, seeded with a known dictionary and automatically extracting the context within which each drug is mentioned. We define three features of such context: support, confidence and prevalence. Finally, we present the performance tradeoffs depending on the thresholds chosen for these features. |
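A dictionary-seeded bootstrapping loop of the kind described above can be sketched in a few lines: learn contexts around known drug mentions, then propose new terms that occur in trusted contexts. The sketch below is highly simplified; the seed terms, context window, and bare frequency threshold are assumptions and do not reproduce SPOT's support/confidence/prevalence features, whose exact definitions are not given in the abstract.

```python
# Highly simplified dictionary-seeded bootstrapping for drug-name discovery.
# Seeds, context window, and thresholds are illustrative assumptions.
import re
from collections import Counter

SEED_DRUGS = {"lipitor", "metformin", "warfarin"}   # hypothetical seed dictionary

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def bootstrap(corpus, seeds=SEED_DRUGS, min_support=2):
    # 1. Learn contexts (previous word, next word) around known drug mentions.
    contexts = Counter()
    for doc in corpus:
        toks = tokenize(doc)
        for i, tok in enumerate(toks):
            if tok in seeds and 0 < i < len(toks) - 1:
                contexts[(toks[i - 1], toks[i + 1])] += 1
    trusted = {c for c, n in contexts.items() if n >= min_support}

    # 2. Propose new candidate terms that appear inside trusted contexts.
    candidates = Counter()
    for doc in corpus:
        toks = tokenize(doc)
        for i in range(1, len(toks) - 1):
            if (toks[i - 1], toks[i + 1]) in trusted and toks[i] not in seeds:
                candidates[toks[i]] += 1
    return candidates.most_common()
```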
Strategy, Choice of Performance Measures, and Performance | We examine the relationship between quality-based manufacturing strategy and the use of different types of performance measures, as well as their separate and joint effects on performance. A key part of our investigation is the distinction between financial and both objective and subjective nonfinancial measures. Our results support the view that performance measurement diversity benefits performance as we find that, regardless of strategy, firms with more extensive performance measurement systems—especially those that include objective and subjective nonfinancial measures—have higher performance. But our findings also partly support the view that the strategy-measurement "fit" affects performance. We find that firms that emphasize quality in manufacturing use more of both objective and subjective nonfinancial measures. However, there is only a positive effect on performance from pairing a quality-based manufacturing strategy with extensive use of subjective measures, but not with objective nonfinancial measures. INTRODUCTION Performance measures play a key role in translating an organization's strategy into desired behaviors and results (Campbell et al. 2004; Chenhall and Langfield-Smith 1998; Kaplan and Norton 2001; Lillis 2002). They also help to communicate expectations, monitor progress, provide feedback, and motivate employees through performance-based rewards (Banker et al. 2000; Chenhall 2003; Ittner and Larcker 1998b; Ittner et al. 1997; Ittner, Larcker, and Randall 2003). Traditionally, firms have primarily used financial measures for these purposes (Balkcom et al. 1997; Kaplan and Norton 1992). But with the "new" competitive realities of increased customization, flexibility, and responsiveness, and associated advances in manufacturing practices, both academics and practitioners have argued that traditional financial performance measures are no longer adequate for these functions (Dixon et al. 1990; Fisher 1992; Ittner and Larcker 1998a; Neely 1999). Indeed, many accounting researchers have identified the continued reliance on traditional management accounting systems as a major reason why many new manufacturing initiatives perform poorly (Banker et al. 1993; Ittner and Larcker 1995). In light of this development in theory and practice, the current study seeks to advance understanding of the role that performance measurement plays in executing strategy and enhancing organizational performance. It proposes and empirically tests three hypotheses about the performance effects of performance measurement diversity; the relation between quality-based manufacturing strategy and firms' use of different types of performance measures; and the joint effects of strategy and performance measurement on organizational performance.
The distinction between objective and subjective performance measures is a pivotal part of our investigation. Prior empirical research has typically only differentiated between financial and nonfinancial performance measures. We go beyond this dichotomy to further distinguish between nonfinancial measures that are quantitative and objectively derived (e.g., defect rates), and those that are qualitative and subjectively determined (e.g., an assessment of the degree of cooperation or knowledge sharing across departmental borders). Making this finer distinction between types of nonfinancial performance measures contributes to recent work in accounting that has begun to focus on the use of subjectivity in performance measurement, evaluation, and incentives (e.g., Bushman et al. 1996; Gibbs et al. 2004; Ittner, Larcker, and Meyer 2003; MacLeod and Parent 1999; Moers 2005; Murphy and Oyer 2004). Using survey data from 128 manufacturing firms, we find that firms with more extensive performance measurement systems, especially ones that include objective and subjective nonfinancial measures, have higher performance. This result holds regardless of the firm's manufacturing strategy. As such, our finding supports the view that performance measurement diversity, per se, is beneficial. But we also find evidence that firms adjust their use of performance measures to strategy. Firms that emphasize quality in manufacturing tend to use more of both objective and subjective nonfinancial measures, but without reducing the number of financial measures. Interestingly, however, combining quality-based strategies with extensive use of objective nonfinancial measures is not associated with higher performance. This set of results is consistent with Ittner and Larcker (1995) who found that quality programs are associated with greater use of nontraditional (i.e., nonfinancial) measures and reward systems, but combining nontraditional measures with extensive quality programs does not improve performance. However, by differentiating between objective and subjective nonfinancial measures—thereby going beyond Ittner and Larcker (1995) and much of the extant accounting literature—we find that performance is higher when the performance measures used in conjunction with a quality-based manufacturing strategy are of the subjective type. Finally, we find that among firms with similar quality-based strategies, those with less extensive performance measurement systems have lower performance, whereas those with more extensive performance measurement systems do not. In the case of subjective performance measures, firms that use them more extensively than firms with similar quality-based strategies actually have significantly higher performance. Thus, a "mismatch" between performance measurement and strategy is associated with lower performance only when firms use fewer measures than firms with similar quality-based strategies, but not when they use more. The paper proceeds as follows. The next section builds on the extant literature to formulate three hypotheses. The third section discusses the method, sample, and measures. The fourth section presents the results. The fifth section provides a summary, discusses the study's limitations, and suggests possible directions for future research.
HYPOTHESES Although there is widespread agreement on the need to expand performance measurement, two different views exist on the nature of the desirable change (Ittner, Larcker, and Randall 2003; Ruddle and Feeny 2000). In this section, we engage the relevant literatures to develop three hypotheses. Collectively, the hypotheses provide the basis for comparing the two prevailing schools of thought on how performance measurement should be improved: that of performance measurement diversity regardless of strategy versus that of performance measurement alignment with strategy (Ittner, Larcker, and Randall 2003). The Performance Measurement Diversity View A number of authors have argued that broadening the set of performance measures, per se, enhances organizational performance (e.g., Edvinsson and Malone 1997; Lingle and Schiemann 1996). The premise is that managers have an incentive to concentrate on those activities for which their performance is measured, often at the expense of other relevant but non-measured activities (Hopwood 1974), and greater measurement diversity can reduce such dysfunctional effects (Lillis 2002). Support for this view is available from economics-based agency studies. Datar et al. (2001), Feltham and Xie (1994), Hemmer (1996), Holmstrom (1979), and Lambert (2001), for example, have demonstrated that in the absence of measurement costs, introducing incentives based on nonfinancial measures can improve contracting by incorporating information on managerial actions that are not fully captured by financial measures. Analytical studies have further identified potential benefits from using performance measures that are subjectively derived. For example, Baiman and Rajan (1995) and Baker et al. (1994) have shown that subjective measures can help to mitigate distortions in managerial effort by "backing out" dysfunctional behavior induced by incomplete objective performance measures, as well as reduce noise in the overall performance evaluation. However, the literature also has noted potential drawbacks from measurement diversity. It increases system complexity, thus taxing managers' cognitive abilities (Ghosh and Lusch 2000; Lipe and Salterio 2000, 2002). It also increases the burden of determining relative weights for different measures (Ittner and Larcker 1998a; Moers 2005). Finally, multiple measures are also potentially conflicting (e.g., manufacturing efficiency and customer responsiveness), leading to incongruence of goals, at least in the short run (Baker 1992; Holmstrom and Milgrom 1991), and organizational friction (Lillis 2002). Despite these potential drawbacks, there is considerable empirical support for increased measurement diversity. For example, in a study of time-series data in 18 hotels, Banker et al. (2000) found that when nonfinancial measures are included in the compensation contract, managers more closely aligned their efforts to those measures, resulting in increased performance. Hoque and James (2000) and Scott and Tiessen (1999) also have found positive relations between firm performance and increased use of different types of performance measures (e.g., financial and nonfinancial). These resul |
A statistical study of magnetic tunnel junctions for high-density spin torque transfer-MRAM (STT-MRAM) | We have demonstrated a robust magnetic tunnel junction (MTJ) with a resistance-area product RA = 8 Ω·μm² that simultaneously satisfies the statistical requirements of high tunneling magnetoresistance TMR > 15σ(Rp), write threshold spread σ(Vw)/⟨Vw⟩ < 7.1%, breakdown-to-write voltage margin over 0.5 V, read-induced disturbance rate below 10⁻⁹, and sufficient write endurance, and is free of unwanted write-induced magnetic reversal. The statistics suggest that a 64 Mb chip at the 90-nm node is feasible. |
A Novel Method of Determining Parameters of CLAHE Based on Image Entropy | Histogram equalization, which stretches the dynamic range of intensity, is the most common method for enhancing the contrast of an image. Contrast Limited Adaptive Histogram Equalization (CLAHE), proposed by K. Zuiderveld, has two key parameters: block size and clip limit. These parameters are mainly used to control image quality, but have been heuristically determined by users. In this paper, we propose a novel method of determining the two parameters of CLAHE using the entropy of the image. The key idea is based on the characteristics of the entropy curves: clip limit versus entropy and block size versus entropy. The clip limit and block size are determined at the point with maximum curvature on the entropy curve. Experimental results show that the proposed method improves images with very low contrast. |
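The clip-limit selection described above can be sketched directly: compute the entropy of the CLAHE output over a range of clip limits and pick the point of maximum curvature on that curve. The clip-limit grid and the discrete curvature estimate below are assumptions for illustration; the block size could be selected the same way.

```python
# Choosing a CLAHE clip limit from the entropy-vs-clip-limit curve by picking
# the point of maximum curvature. Grid and curvature estimate are assumptions.
import cv2
import numpy as np

def entropy(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def pick_clip_limit(gray, clips=np.linspace(1.0, 40.0, 40), tile=(8, 8)):
    ent = np.array([
        entropy(cv2.createCLAHE(clipLimit=float(c), tileGridSize=tile).apply(gray))
        for c in clips
    ])
    # Discrete curvature of the (clip limit, entropy) curve
    d1 = np.gradient(ent, clips)
    d2 = np.gradient(d1, clips)
    curvature = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    return clips[np.argmax(curvature)]

# gray = cv2.imread("low_contrast.png", cv2.IMREAD_GRAYSCALE)
# best_clip = pick_clip_limit(gray)
# enhanced = cv2.createCLAHE(clipLimit=float(best_clip), tileGridSize=(8, 8)).apply(gray)
```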
Mild dehydration: a risk factor of constipation? | Constipation defined as changes in the frequency, volume, weight, consistency and ease of passage of the stool occurs in any age group. The most important factors known to promote constipation are reduced physical activity and inadequate dietary intake of fibres, carbohydrates and fluids. Fluid losses induced by diarrhoea and febrile illness alter water balance and promote constipation. When children increased their water consumption above their usual intake, no change in stool frequency and consistency was observed. The improvement of constipation by increasing water intake, therefore, may be effective in children only when voluntary fluid consumption is lower than normal for the child's age and activity level. In the elderly, low fluid intake, which may be indicative of hypohydration, was a cause of constipation and a significant relationship between liquid deprivation from 2500 to 500 ml per day and constipation was reported. Dehydration is also observed when saline laxatives are used for the treatment of constipation if fluid replacement is not maintained and may affect the efficacy of the treatment. While sulphate in drinking water does not appear to have a significant laxative effect, fluid intake and magnesium sulphate-rich mineral waters were shown to improve constipation in healthy infants. In conclusion, fluid loss and fluid restriction, and thus de- or hypohydration, increase constipation. It is thus important to maintain euhydration as a prevention of constipation. |
Simulation of Unsteady Laminar Flow around a Circular Cylinder | In this paper, unsteady laminar flow around a circular cylinder has been studied. The Navier-Stokes equations are solved with the SIMPLEC algorithm on specified structured and unstructured grids. The equations are solved on a staggered grid and discretized with the upwind method. The mean drag coefficient, lift coefficient, and Strouhal number from the current work at three different Reynolds numbers are compared with experimental and numerical values. |
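One step such comparisons rely on is extracting the Strouhal number St = f·D/U from the unsteady lift signal. A minimal sketch is below, with a synthetic signal standing in for the computed lift-coefficient history; the diameter, velocity, and sampling interval are placeholder values.

```python
# Estimating the Strouhal number St = f*D/U from a lift-coefficient time
# history via FFT. The signal below is synthetic; in practice C_L(t) would
# come from the unsteady simulation.
import numpy as np

D, U = 1.0, 1.0                     # cylinder diameter and free-stream velocity
dt = 0.01                           # sampling interval of the time history
t = np.arange(0, 200, dt)
cl = 0.3 * np.sin(2 * np.pi * 0.2 * t) + 0.01 * np.random.randn(t.size)  # fake C_L

spectrum = np.abs(np.fft.rfft(cl - cl.mean()))
freqs = np.fft.rfftfreq(cl.size, d=dt)
f_shed = freqs[np.argmax(spectrum)]          # dominant vortex-shedding frequency

print("Strouhal number:", f_shed * D / U)    # ~0.2, typical for Re ~ 100-200
```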
Design a single band microstrip patch antenna at 60 GHz millimeter wave for 5G application | In this paper, a single-band microstrip patch antenna for 5G wireless applications is presented. The proposed antenna is suitable for millimeter-wave frequencies. The single-band antenna consists of a new H slot and E slot loaded on the radiating patch and is fed by a 50-ohm microstrip line. The antenna is designed on a Rogers RT5880 dielectric substrate with relative permittivity 2.2, loss tangent 0.0009, and height 1.6 mm, and is simulated with the CST (Computer Simulation Technology) Microwave Studio electromagnetic simulator. Simulated results for return loss, VSWR, surface current, and the 3D radiation pattern are presented. The simulated antenna shows a return loss of −40.99 dB at 60 GHz, making it suitable for millimeter-wave 5G wireless applications. |
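For a rough starting point before full-wave simulation, the standard textbook rectangular-patch equations can be evaluated at the stated frequency and substrate parameters. These are generic closed-form formulas, not the authors' slotted-patch geometry, and they assume an electrically thin substrate, which is a strong assumption for a 1.6 mm substrate at 60 GHz, so the output is only a crude first estimate.

```python
# Standard rectangular-patch design equations evaluated at 60 GHz on an
# epsilon_r = 2.2 substrate. Generic textbook formulas only; not the authors'
# slotted-patch design, and only a rough estimate for an electrically thick
# substrate like 1.6 mm at 60 GHz.
import math

c = 3e8           # speed of light, m/s
f0 = 60e9         # design frequency, Hz
er = 2.2          # relative permittivity (Rogers RT5880)
h = 1.6e-3        # substrate height, m (as quoted in the abstract)

W = c / (2 * f0) * math.sqrt(2 / (er + 1))                       # patch width
e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / W) ** -0.5   # effective permittivity
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / ((e_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f0 * math.sqrt(e_eff)) - 2 * dL                     # patch length

print(f"W = {W * 1e3:.3f} mm, L = {L * 1e3:.3f} mm, eps_eff = {e_eff:.2f}")
```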
Improved Graph Clustering | Graph clustering involves the task of dividing nodes into clusters, so that the edge density is higher within clusters as opposed to across clusters. A natural, classic, and popular statistical setting for evaluating solutions to this problem is the stochastic block model, also referred to as the planted partition model. In this paper, we present a new algorithm-a convexified version of maximum likelihood-for graph clustering. We show that, in the classic stochastic block model setting, it outperforms existing methods by polynomial factors when the cluster size is allowed to have general scalings. In fact, it is within logarithmic factors of known lower bounds for spectral methods, and there is evidence suggesting that no polynomial time algorithm would do significantly better. We then show that this guarantee carries over to a more general extension of the stochastic block model. Our method can handle the settings of semirandom graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated nodes, partially observed graphs, planted clique/coloring, and so on. In particular, our results provide the best exact recovery guarantees to date for the planted partition, planted k-disjoint-cliques and planted noisy coloring models with general cluster sizes; in other settings, we match the best existing results up to logarithmic factors. |
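For readers who want to experiment with the planted partition setting, the sketch below generates a stochastic block model graph and recovers the clusters with a standard spectral baseline. This is not the convexified maximum-likelihood algorithm proposed in the paper; the number of clusters, cluster size, and edge probabilities are arbitrary choices.

```python
# Generating a planted-partition (stochastic block model) graph and recovering
# the clusters with a spectral baseline. Parameters are arbitrary assumptions.
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score

n_clusters, cluster_size = 4, 50
p_in, p_out = 0.5, 0.05                      # within- vs across-cluster edge probability
G = nx.planted_partition_graph(n_clusters, cluster_size, p_in, p_out, seed=0)

A = nx.to_numpy_array(G)                     # adjacency matrix used as affinity
labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=0).fit_predict(A)

# Ground-truth labels: nodes are generated in consecutive blocks of equal size
truth = np.repeat(np.arange(n_clusters), cluster_size)
print("ARI:", adjusted_rand_score(truth, labels))   # agreement up to label permutation
```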
Enhanced memory consolidation in mice lacking the circadian modulators Sharp1 and -2 caused by elevated Igf2 signaling in the cortex. | The bHLH transcription factors SHARP1 and SHARP2 are partially redundant modulators of the circadian system. SHARP1/DEC2 has been shown to control sleep length in humans and sleep architecture is also altered in double mutant mice (S1/2(-/-)). Because of the importance of sleep for memory consolidation, we investigated the role of SHARP1 and SHARP2 in cognitive processing. S1/2(-/-) mice show enhanced cortex (Cx)-dependent remote fear memory formation as well as improved reversal learning, but do not display alterations in hippocampus (Hi)-dependent recent fear memory formation. SHARP1 and SHARP2 single null mutants do not display any cognitive phenotype supporting functional redundancy of both factors. Molecular and biochemical analyses revealed elevated insulin-like growth factor 2 (IGF2) signaling and increased phosphorylation of MAPK and S6 in the Cx but not the Hi of S1/2(-/-) mice. No changes were detected in single mutants. Moreover, adeno-associated virus type 2-mediated IGF2 overexpression in the anterior cingulate cortex enhanced remote fear memory formation and the analysis of forebrain-specific double null mutants of the Insulin and IGF1 receptors revealed their essential function for memory formation. Impaired fear memory formation in aged S1/2(-/-) mice indicates that elevated IGF2 signaling in the long term, however, has a negative impact on cognitive processing. In summary, we conclude that the bHLH transcription factors SHARP1 and SHARP2 are involved in cognitive processing by controlling Igf2 expression and associated signaling cascades. Our analyses provide evidence that the control of sleep and memory consolidation may share common molecular mechanisms. |
A Novel MDR1 GT1292-3TG (Cys431Leu) Genetic Variation and Its Effect on P-glycoprotein Biologic Functions | P-glycoprotein (P-gp) is a membrane-bound transporter protein that is encoded by the human multidrug resistance gene MDR1 (ABCB1). P-gp recognizes a wide range of xenobiotics, is pivotal in mediating cancer drug resistance, and plays an important role in limiting drug penetration across the blood–brain barrier. MDR1 genetic variation can lead to changes in P-gp function and may have implications on drug pharmacokinetics. We have identified a novel MDR1 GT1292-3TG (Cys431Leu) genetic variation through systematic profiling of subjects with leukemia. The cellular and transport function of this variation was investigated with recombinant human embryonic kidney cells expressing MDR1. Compared with the wild type, MDR1 GT1292-3TG recombinant cells exhibited a lower drug resistance phenotype for a panel of chemotherapeutic agents. When compared with wild type, MDR1 GT1292-3TG recombinant cells exhibited a 75% decrease in IC50 for doxorubicin (162.6 ± 17.4 to 37.9 ± 2.6 nM) and a 50% decrease in IC50 for paclitaxel (155.7 ± 27.5 to 87.7 ± 9.2 nM), vinblastine (128.0 ± 15.9 to 65.9 ± 5.1 nM), and vincristine (593.7 ± 61.8 to 307.3 ± 17.0 nM). The effects of the Cys431Leu variation, due to the MDR1 GT1292-3TG nucleotide transition, on P-gp-dependent intracellular substrate accumulation appeared to be substrate dependent: doxorubicin, vinblastine, and paclitaxel exhibited increased accumulation (p < 0.05), while verapamil and Hoechst33342 exhibited decreased intracellular concentrations compared with wild type (p < 0.05). Collectively, these data suggest the MDR1 GT1292-3TG variation of P-gp may reduce drug resistance and that subjects with this genotype undergoing chemotherapy with drugs that are transported by P-gp could potentially be more responsive to therapy than those with the MDR1 wild-type genotype. |