title | abstract
---|---
Sharing Visual Features for Multiclass and Multiview Object Detection | We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (runtime) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. We present a multitask learning procedure, based on boosted decision stumps, that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required and, therefore, the runtime cost of the classifier, is observed to scale approximately logarithmically with the number of classes. The features selected by joint training are generic edge-like features, whereas the features chosen by training each class separately tend to be more object-specific. The generic features generalize better and considerably reduce the computational cost of multiclass object detection. |
Automatic 3D bi-ventricular segmentation of cardiac images by a shape-constrained multi-task deep learning approach | Deep learning approaches have achieved state-of-the-art performance in cardiac magnetic resonance (CMR) image segmentation. However, most approaches have focused on learning image intensity features for segmentation, whereas the incorporation of anatomical shape priors has received less attention. In this paper, we combine a multi-task deep learning approach with atlas propagation to develop a shape-constrained bi-ventricular segmentation pipeline for short-axis CMR volumetric images. The pipeline first employs a fully convolutional network (FCN) that learns segmentation and landmark localisation tasks simultaneously. The architecture of the proposed FCN uses a 2.5D representation, thus combining the computational advantage of 2D FCNs and the capability of addressing 3D spatial consistency without compromising segmentation accuracy. Moreover, a refinement step is designed to explicitly enforce a shape constraint and improve segmentation quality. This step is effective for overcoming image artefacts (e.g. due to different breath-hold positions and large slice thickness), which preclude the creation of anatomically meaningful 3D cardiac shapes. The proposed pipeline is fully automated, owing to the network's ability to infer landmarks, which are then used downstream in the pipeline to initialise atlas propagation. We validate the pipeline on 1831 healthy subjects and 649 subjects with pulmonary hypertension. Extensive numerical experiments on the two datasets demonstrate that our proposed method is robust and capable of producing accurate, high-resolution and anatomically smooth bi-ventricular 3D models, despite the artefacts in input CMR volumes. |
Asymmetric Loss Functions and Deep Densely-Connected Networks for Highly-Imbalanced Medical Image Segmentation: Application to Multiple Sclerosis Lesion Detection | Fully convolutional deep neural networks have proven to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in training such networks arises when the data are unbalanced, which is common in many medical imaging applications, such as lesion segmentation, where lesion-class voxels are often far fewer in number than non-lesion voxels. A network trained with unbalanced data may make predictions with high precision and low recall, being severely biased toward the non-lesion class, which is particularly undesirable in most medical applications, where false negatives are actually more important than false positives. Various methods have been proposed to address this problem, including two-step training, sample re-weighting, balanced sampling, and, more recently, similarity loss functions and focal loss. In this paper, we trained fully convolutional deep neural networks using an asymmetric similarity loss function to mitigate the issue of data imbalance and achieve a much better tradeoff between precision and recall. To this end, we developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on the Tversky index (using $F_\beta$ scores). We used large overlapping image patches for intrinsic and extrinsic data augmentation, together with a patch selection algorithm and a patch prediction fusion strategy using B-spline weighted soft voting to account for the uncertainty of prediction at patch borders. We applied this method to multiple sclerosis (MS) lesion segmentation on two different datasets, MSSEG 2016 and the ISBI longitudinal MS lesion segmentation challenge, where we achieved average Dice similarity coefficients of 69.9% and 65.74%, respectively, achieving top performance in both challenges. We compared the performance of our network trained with $F_\beta$ loss, focal loss, and generalized Dice loss functions. Through September 2018, our network trained with focal loss ranked first according to the ISBI challenge overall score and resulted in the lowest reported lesion false positive rate among all submitted methods. Our network trained with the asymmetric similarity loss led to the lowest surface distance and the best lesion true positive rate, arguably the most important performance metric in a clinical decision support system for lesion detection. The asymmetric similarity loss function based on $F_\beta$ scores allows training networks that strike a better balance between precision and recall in highly unbalanced image segmentation. We achieved superior performance in MS lesion segmentation using a patch-wise 3D FC-DenseNet with a patch prediction fusion strategy, trained with asymmetric similarity loss functions. |
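To make the loss concrete, here is a minimal sketch of a Tversky-index-based asymmetric similarity loss of the kind this abstract describes; the specific alpha/beta weighting (with beta > 0.5 penalising false negatives more) is an illustrative assumption, not the authors' exact configuration.

```python
# Minimal sketch of a Tversky-index asymmetric loss; alpha/beta values are
# assumptions for illustration (alpha + beta = 1, beta > 0.5 weights FN more).
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """pred, target: arrays of per-voxel lesion probabilities / labels in [0, 1]."""
    tp = np.sum(pred * target)          # true positives (soft)
    fp = np.sum(pred * (1.0 - target))  # false positives
    fn = np.sum((1.0 - pred) * target)  # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky                # minimising this maximises the Tversky index
```

With alpha = beta = 0.5 this reduces to the familiar Dice loss; skewing beta above 0.5 trades precision for recall, which is the asymmetry the abstract motivates.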
Spatial Filtering for EEG-Based Regression Problems in Brain–Computer Interface (BCI) | Electroencephalogram (EEG) signals are frequently used in brain–computer interfaces (BCIs), but they are easily contaminated by artifacts and noise, so preprocessing must be done before they are fed into a machine learning algorithm for classification or regression. Spatial filters have been widely used to increase the signal-to-noise ratio of EEG for BCI classification problems, but their applications in BCI regression problems have been very limited. This paper proposes two common spatial pattern (CSP) filters for EEG-based regression problems in BCI, which are extended from the CSP filter for classification by using fuzzy sets. Experimental results on EEG-based response speed estimation from a large-scale study, which collected 143 sessions of sustained-attention psychomotor vigilance task data from 17 subjects during a 5-month period, demonstrate that the two proposed spatial filters can significantly increase the EEG signal quality. When used in LASSO and $k$-nearest neighbors regression for user response speed estimation, the spatial filters can reduce the root-mean-square estimation error by 10.02-19.77%, and at the same time increase the correlation to the true response speed by 19.39-86.47%. |
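For intuition, a minimal sketch of how a CSP-style spatial filter can be extended to regression targets via fuzzy class memberships, as this abstract outlines; the quantile-based membership functions and the SciPy generalized eigensolver are illustrative assumptions rather than the paper's exact construction.

```python
# Sketch: fuzzify a continuous target into "large"/"small response" degrees,
# form membership-weighted covariance matrices, then solve the classical CSP
# generalized eigenproblem. Membership shape is an assumption.
import numpy as np
from scipy.linalg import eigh

def fuzzy_csp(trials, y, n_filters=4):
    """trials: (N, C, T) EEG epochs; y: (N,) continuous targets (e.g. response speed)."""
    lo, hi = np.quantile(y, [0.25, 0.75])
    m_hi = np.clip((y - lo) / (hi - lo), 0, 1)   # fuzzy "large response" degree
    m_lo = 1.0 - m_hi                            # fuzzy "small response" degree
    covs = [np.cov(x) for x in trials]           # (C, C) per trial
    C1 = sum(m * c for m, c in zip(m_hi, covs)) / m_hi.sum()
    C2 = sum(m * c for m, c in zip(m_lo, covs)) / m_lo.sum()
    w, V = eigh(C1, C1 + C2)                     # generalized eigenproblem, as in CSP
    order = np.argsort(w)
    picks = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return V[:, picks]                           # spatial filters, shape (C, n_filters)
```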
The impact of physical activity and fitness on academic achievement and cognitive performance in children | The potential for physical activity and fitness to improve cognitive function, learning and academic achievement in children has received attention from researchers and policy makers. This paper reports a systematic approach to the identification, analysis and review of published studies up to early 2009. A three-step search method was adopted to identify studies that used measures of physical activity or fitness to assess either degree of association with or effect on a) academic achievement and b) cognitive performance. A total of 18 studies including one randomised control trial, six quasi-experimental and 11 correlational studies were included for data extraction. No studies meeting criteria that examined the links between physical activity and cognitive function were found. Weak positive associations were found between both physical activity and fitness and academic achievement and fitness and elements of cognitive function, but this was not supported by intervention studies. There is insufficient evidence to conclude that additional physical education time increases academic achievement; however, there is no evidence that it is detrimental. The quality and depth of the evidence base is limited. Further research with rigour beyond correlational studies is essential. |
PIC a Different Word: A Simple Model for Lexical Substitution in Context | The Lexical Substitution task involves selecting and ranking lexical paraphrases for a target word in a given sentential context. We present PIC, a simple measure for estimating the appropriateness of substitutes in a given context. PIC outperforms another simple, comparable model proposed in recent work, especially when selecting substitutes from the entire vocabulary. Analysis shows that PIC improves over baselines by incorporating frequency biases into predictions. |
Entity Linking with a Paraphrase Flavor | The task of Named Entity Linking is to link entity mentions in the document to their correct entries in a knowledge base and to cluster NIL mentions. Ambiguous, misspelled, and incomplete entity mention names are the main challenges in the linking process. We propose a novel approach that combines two state-of-the-art models — for entity disambiguation and for paraphrase detection — to overcome these challenges. We consider name variations as paraphrases of the same entity mention and adopt a paraphrase model for this task. Our approach utilizes a graph-based disambiguation model based on Personalized Page Rank, and then refines and clusters its output using the paraphrase similarity between entity mention strings. It achieves a competitive performance of 80.5% in B+F clustering score on diagnostic TAC EDL 2014 data. |
The TangibleK Robotics Program: Applied Computational Thinking for Young Children. | This article describes the TangibleK robotics program for young children. Based on over a decade of research, this program is grounded on the belief that teaching children about the human-made world, the realm of technology and engineering, is as important as teaching them about the natural world, numbers, and letters. The TangibleK program uses robotics as a tool to engage children in developing computational thinking and learning about the engineering design process. It includes a research-based, developmentally appropriate robotics kit that children can use to make robots and program their behaviors. The curriculum has been piloted in kindergarten classrooms and in summer camps and lab settings. The author presents the theoretical framework that informs TangibleK and the “powerful ideas” from computer science and robotics on which the curriculum is based, linking them to other disciplinary areas and developmental characteristics of early childhood. The article concludes with a description of classroom pedagogy and activities, along with assessment tools used to evaluate learning outcomes. |
A Location-Based Mobile Crowdsensing Framework Supporting a Massive Ad Hoc Social Network Environment | This article addresses one of the key challenges of engaging a massive ad hoc crowd by providing sustainable incentives. The incentive model is based on a context-aware cyber-physical spatio-temporal serious game with the help of a mobile crowd sensing mechanism. To this end, this article describes a framework that can create an ad hoc social network of millions of people and provide context-aware serious-game services as an incentive. While interacting with different services, the massive crowd shares a rich trail of geo-tagged multimedia data, which acts as a crowdsourcing eco-system. The incentive model has been tested on the mass crowd at the Hajj since 2014. From our observations, we conclude that the framework provides a sustainable incentive mechanism that can solve many real-life problems such as reaching a person in a crowd within the shortest possible time, isolating significant events, finding lost individuals, handling emergency situations, helping pilgrims to perform ritual events based on location and time, and sharing geo-tagged multimedia resources among a community of interest within the crowd. The framework allows an ad hoc social network to be formed within a very large crowd, a community of interests to be created for each person, and information to be shared with the right community of interests. We present the communication paradigm of the framework, the serious game incentive model, and cloud-based massive geo-tagged social network architecture. |
Cu,Zn-superoxide dismutase-driven free radical modifications: copper- and carbonate radical anion-initiated protein radical chemistry. | Our understanding of the reaction of wild-type SOD1 (Cu,Zn-superoxide dismutase) with H2O2 is incomplete: the mechanism, the oxidant(s) involved, and how and which protein radicals are produced and what their fate is all remain unclear, yet a better understanding of the role of this reaction is needed. We have used immuno-spin trapping and MS analysis to study the protein oxidations driven by human (h) and bovine (b) SOD1 when reacting with H2O2, using HSA (human serum albumin) and mBH (mouse brain homogenate) as target models. In order to gain mechanistic information about this reaction, we considered both copper- and CO3(*-) (carbonate radical anion)-initiated protein oxidation. We chose experimental conditions that clearly separated SOD1-driven oxidation via CO3(*-) from that initiated by copper released from the SOD1 active site. In the absence of (bi)carbonate, site-specific radical-mediated fragmentation is produced by SOD1 active-site copper. In the presence of (bi)carbonate and DTPA (diethylenetriaminepenta-acetic acid) (to suppress copper chemistry), CO3(*-) produced distinct radical sites in both SOD1 and HSA, which caused protein aggregation without causing protein fragmentation. The CO3(*-) produced by the reaction of hSOD1 with H2O2 also produced distinctive DMPO (5,5-dimethylpyrroline-N-oxide) nitrone adduct-positive protein bands in the mBH. Finally, we propose a biochemical mechanism to explain CO3(*-) production from CO2, enhanced protein radical formation and protection by (bi)carbonate against H2O2-induced fragmentation of the SOD1 active site. Our present study is important for establishing experimental conditions for studying the molecular mechanism and targets of oxidation during the reverse reaction of SOD1 with H2O2; these results are the first step in analysing the critical targets of SOD1-driven oxidation during pathological processes such as neuroinflammation. |
Emotional Prosody Measurement (EPM): a voice-based evaluation method for psychological therapy effectiveness. | The voice embodies three sources of information: speech, the identity of the speaker, and the emotional state of the speaker (i.e., emotional prosody). The latter feature is reflected in the variability of the F0 (also known as the fundamental frequency, or pitch), expressed as SD F0. To extract this feature, Emotional Prosody Measurement (EPM) was developed, which consists of 1) speech recording, 2) removal of speckle noise, 3) a Fourier transform to extract the F0 signal, and 4) the determination of SD F0. After a pilot study in which six participants mimicked emotions with their voice, the core experiment was conducted to see whether EPM is successful. Twenty-five patients suffering from panic disorder with agoraphobia participated. Two methods (story-telling and reliving) were used to trigger anxiety and were compared with comparable but more relaxed conditions. This resulted in a unique database of speech samples, which was used to compare EPM with the Subjective Unit of Distress to validate it as a measure of anxiety/stress. The experimental manipulation of anxiety proved successful, and EPM proved to be a successful evaluation method for psychological therapy effectiveness. |
Federated Meta-Learning for Recommendation | Recommender systems have been widely studied from the machine learning perspective, where it is crucial to share information among users while preserving user privacy. In this work, we present a federated meta-learning framework for recommendation in which user information is shared at the level of the algorithm, instead of the model or the data as in previous approaches. In this framework, user-specific recommendation models are locally trained by a shared parameterized algorithm, which preserves user privacy and at the same time utilizes information from other users to help model training. Interestingly, the model thus trained exhibits a high capacity at a small scale, which is energy- and communication-efficient. Experimental results show that recommendation models trained by meta-learning algorithms in the proposed framework outperform the state-of-the-art in accuracy and scale. For example, on a production dataset, a shared model under Google Federated Learning (McMahan et al., 2017) with 900,000 parameters has a prediction accuracy of 76.72%, while a shared algorithm under federated meta-learning with fewer than 30,000 parameters achieves an accuracy of 86.23%. |
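A minimal sketch of the shared-algorithm idea described above, reduced to a Reptile-style shared initialization plus local adaptation; Reptile is a stand-in assumption here, and the paper's parameterized algorithm is richer than a bare initialization.

```python
# Sketch: the server shares an *algorithm* (here: an initialization and a
# local SGD adaptation rule) rather than one global model. Reptile-style
# meta-update is an assumption, not the paper's exact method.
import numpy as np

def local_adapt(theta, user_data, grad_fn, lr=0.01, steps=5):
    """Each user trains a private model starting from the shared initialization."""
    w = theta.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w, user_data)   # user data never leaves the device
    return w

def meta_round(theta, users, grad_fn, meta_lr=0.1):
    """Server update: move the shared init toward the users' adapted models."""
    adapted = [local_adapt(theta, u, grad_fn) for u in users]
    return theta + meta_lr * (np.mean(adapted, axis=0) - theta)
```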
An Efficient Cryptographic Protocol Verifier Based on Prolog Rules | We present a new automatic cryptographic protocol verifier based on a simple representation of the protocol by Prolog rules, and on a new efficient algorithm that determines whether a fact can be proved from these rules or not. This verifier proves secrecy properties of the protocols. Thanks to its use of unification, it avoids the problem of state space explosion. Another advantage is that we do not need to limit the number of runs of the protocol to analyze it. We have proved the correctness of our algorithm, and have implemented it. The experimental results show that many examples of protocols from the literature, including Skeme [24], can be analyzed by our tool with very small resources: the analysis takes from less than 0.1 s for simple protocols to 23 s for the main mode of Skeme. It uses less than 2 MB of memory in our tests. |
Towards Computer-Assisted Flamenco Transcription: An Experimental Comparison of Automatic Transcription Algorithms as Applied to A Cappella Singing | This article deals with automatic transcription of flamenco music recordings—more specifically, a cappella singing. We first study the specifics of flamenco singing and propose a transcription system based on fundamental frequency and energy estimation, which incorporates an iterative strategy for note segmentation and labeling. The proposed approach is evaluated on a music collection of 72 performances, including a variety of singers and recording conditions, and the presence or absence of percussion, background voices, and noise. We obtain satisfying results for the different approaches tested, and our system outperforms a state-of-the-art approach designed for other singing styles. In this study, we discuss the difficulties found in transcribing flamenco singing and in evaluating the obtained transcriptions; we analyze the influence of the different steps of the algorithm; and we state the main limitations of our approach and discuss challenges for future studies. |
Aripiprazole once-monthly for treatment of schizophrenia: double-blind, randomised, non-inferiority study. | BACKGROUND
Long-acting injectable formulations of antipsychotics are treatment alternatives to oral agents.
AIMS
To assess the efficacy of aripiprazole once-monthly compared with oral aripiprazole for maintenance treatment of schizophrenia.
METHOD
A 38-week, double-blind, active-controlled, non-inferiority study; randomisation (2:2:1) to aripiprazole once-monthly 400 mg, oral aripiprazole (10-30 mg/day) or aripiprazole once-monthly 50 mg (a dose below the therapeutic threshold, for assay sensitivity).
TRIAL REGISTRATION
clinicaltrials.gov, NCT00706654.
RESULTS
A total of 1118 patients were screened, and 662 responders to oral aripiprazole were randomised. Kaplan-Meier estimated impending relapse rates at week 26 were 7.12% for aripiprazole once-monthly 400 mg and 7.76% for oral aripiprazole. This difference (-0.64%, 95% CI -5.26 to 3.99) excluded the predefined non-inferiority margin of 11.5%. Both treatments were superior to aripiprazole once-monthly 50 mg (21.80%, P≤0.001).
CONCLUSIONS
Aripiprazole once-monthly 400 mg was non-inferior to oral aripiprazole, and the reduction in Kaplan-Meier estimated impending relapse rate at week 26 was statistically significant v. aripiprazole once-monthly 50 mg. |
Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus | In this paper, we analyze neural network-based dialogue systems trained in an end-to-end manner using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering. We provide baselines in two different environments: one where models are trained to select the correct next response from a list of candidate responses, and one where models are trained to maximize the log-likelihood of a generated utterance conditioned on the context of the conversation. These are both evaluated on a recall task that we call next utterance classification (NUC), and using vector-based metrics that capture the topicality of the responses. We observe that current end-to-end models are unable to completely solve these tasks; thus, we provide a qualitative error analysis to determine the primary causes of error for end-to-end models evaluated on NUC, and examine sample utterances from the generative models. As a result of this analysis, we suggest some promising directions for future research on the Ubuntu Dialogue Corpus, which can also be applied to end-to-end dialogue systems in general. (This work extends a paper appearing in SIGDIAL (Lowe et al., 2015) with results on generative dialogue models, a more extensive evaluation of the retrieval models using vector-based generative metrics, and a qualitative examination of responses from the generative models and classification errors made by the Dual Encoder model. Experiments are performed on a new version of the corpus, the Ubuntu Dialogue Corpus v2, publicly available at https://github.com/rkadlec/ubuntu-ranking-dataset-creator; the earlier dataset has been updated to add features and fix bugs, which are detailed in Section 3.) |
Foot-and-mouth disease vaccines. | Foot-and-mouth disease (FMD) is a highly contagious disease of cloven-hoofed animals. The disease affects many areas of the world, often causing extensive epizootics in livestock, mostly farmed cattle and swine, although sheep, goats and many wild species are also susceptible. In countries where food and farm animals are essential for subsistence agriculture, outbreaks of FMD seriously impact food security and development. In highly industrialized developed nations, FMD epidemics cause economic and social devastation, mainly due to the observance of health measures adopted from the World Organization for Animal Health (OIE). High morbidity, a complex host range and broad genetic diversity make FMD prevention and control exceptionally challenging. In this article we review multiple vaccine approaches developed over the years, ultimately aimed at successfully controlling and eradicating this feared disease. |
Detection of Lung Cancer Using Backpropagation Neural Networks and Genetic Algorithm | Lung carcinoma is a disease of uncontrolled growth of cancerous cells in the tissues of the lungs. Early detection of lung cancer is the key to its cure; early diagnosis saves many lives, and failure to detect the disease in time may lead to other severe problems and a sudden fatal outcome. In general, measures for early-stage diagnosis mainly include X-rays, CT images, MRIs, etc. In this system, we first apply techniques essential for medical image mining, such as data preprocessing and the training and testing of samples, followed by classification using a backpropagation neural network, which classifies digital X-ray, CT, and MRI images as normal or abnormal. The normal state is the one that characterizes a healthy patient. Abnormal images are further considered for feature analysis. For optimized analysis of features, a genetic algorithm is then used to both extract and select features on the basis of their fitness. For images previously classified as abnormal, the selected features are then classified as cancerous or non-cancerous. Hence, this system helps draw an appropriate decision about a particular patient's state. Keywords—Backpropagation Neural Networks, Classification, Genetic Algorithm, Lung Cancer, Medical Image Mining. |
Lattice-based access control models | Lattice-based access control models were developed in the early 1970s to deal with the confidentiality of military information. In the late 1970s and early 1980s, researchers applied these models to certain integrity concerns. Later, application of the models to the Chinese Wall policy, a confidentiality policy unique to the commercial sector, was demonstrated. A balanced perspective on lattice-based access control models is provided. Information flow policies, the military lattice, access control models, the Bell-LaPadula model, the Biba model and duality, and the Chinese Wall lattice are reviewed. The limitations of the models are identified. |
Directional Age-Primitive Pattern (DAPP) for Human Age Group Recognition and Age Estimation | An appropriate aging description from a face image is the prime influential factor in human age recognition, but there is still an absence of a specially engineered aging descriptor that can characterize discernible facial aging cues (e.g., craniofacial growth, skin aging) from a detailed and finer point of view. To address this issue, we propose a local face descriptor, the directional age-primitive pattern (DAPP), which inherits discernible aging cue information and is functionally more robust and discriminative than existing local descriptors. We introduce three attributes for coding the DAPP description. First, we introduce Age-Primitives, encoding aging cues related to the most crucial texture primitives, yielding a reasonable and clear aging definition. Second, we introduce an encoding concept dubbed Latent Secondary Direction, which preserves compact structural information in the code while avoiding uncertain codes. Third, a globally adaptive thresholding mechanism is initiated to facilitate more discrimination in flat and textured regions. We apply DAPP to separate age group recognition and age estimation tasks. Applying the same approach to both of these tasks is seldom explored in the literature. Carefully conducted experiments show that the proposed DAPP description outperforms existing approaches by an acceptable margin. |
Web-Based Measure of Semantic Relatedness | Semantic relatedness measures quantify the degree to which some words or concepts are related, considering not only similarity but any possible semantic relationship among them. Relatedness computation is of great interest in different areas, such as Natural Language Processing, Information Retrieval, or the Semantic Web. Different methods have been proposed in the past; however, current relatedness measures lack some desirable properties for a new generation of Semantic Web applications: maximum coverage, domain independence, and universality. In this paper, we explore the use of a semantic relatedness measure between words that uses the Web as a knowledge source. This measure exploits the information about frequencies of use provided by existing search engines. Furthermore, taking this measure as a basis, we define a new semantic relatedness measure among ontology terms. The proposed measure fulfils the above-mentioned desirable properties for use on the Semantic Web. We have tested this semantic measure extensively, showing that it correlates well with human judgment and helps solve particular tasks, such as word sense disambiguation or ontology matching. |
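As one concrete instance of a search-engine-frequency relatedness measure of the kind discussed above, here is a sketch based on the Normalized Google Distance form (Cilibrasi and Vitanyi, 2007); the mapping to a [0, 1] score and the use of raw page counts are assumptions, and the paper's own measure may differ.

```python
# Sketch of a web-frequency relatedness score in NGD form. Page counts would
# come from a search engine API (not reproduced here); all counts are assumed
# nonzero.
import math

def ngd(x_hits, y_hits, xy_hits, total_pages):
    """Normalized Google Distance from page counts; lower means more related."""
    lx, ly, lxy = math.log(x_hits), math.log(y_hits), math.log(xy_hits)
    ln = math.log(total_pages)
    return (max(lx, ly) - lxy) / (ln - min(lx, ly))

def relatedness(x_hits, y_hits, xy_hits, total_pages):
    """Map the distance to a [0, 1] relatedness score (one common convention)."""
    return math.exp(-2.0 * ngd(x_hits, y_hits, xy_hits, total_pages))
```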
Randomized trial to determine the effect of nebivolol on mortality and cardiovascular hospital admission in elderly patients with heart failure (SENIORS). | AIMS
Large randomized trials have shown that beta-blockers reduce mortality and hospital admissions in patients with heart failure. The effects of beta-blockers in elderly patients with a broad range of left ventricular ejection fraction are uncertain. The SENIORS study was performed to assess the effects of the beta-blocker nebivolol in patients ≥70 years, regardless of ejection fraction.
METHODS AND RESULTS
We randomly assigned 2128 patients aged ≥70 years with a history of heart failure (hospital admission for heart failure within the previous year or known ejection fraction ≤35%), 1067 to nebivolol (titrated from 1.25 mg once daily to 10 mg once daily), and 1061 to placebo. The primary outcome was a composite of all-cause mortality or cardiovascular hospital admission (time to first event). Analysis was by intention to treat. Mean duration of follow-up was 21 months. Mean age was 76 years (SD 4.7), 37% were female, mean ejection fraction was 36% (with 35% having ejection fraction >35%), and 68% had a prior history of coronary heart disease. The mean maintenance dose of nebivolol was 7.7 mg and of placebo 8.5 mg. The primary outcome occurred in 332 patients (31.1%) on nebivolol compared with 375 (35.3%) on placebo [hazard ratio (HR) 0.86, 95% CI 0.74-0.99; P=0.039]. There was no significant influence of age, gender, or ejection fraction on the effect of nebivolol on the primary outcome. Death (all causes) occurred in 169 (15.8%) on nebivolol and 192 (18.1%) on placebo (HR 0.88, 95% CI 0.71-1.08; P=0.21).
CONCLUSION
Nebivolol, a beta-blocker with vasodilating properties, is an effective and well-tolerated treatment for heart failure in the elderly. |
Learning Foreign Language through an Interactive Multimedia Program: An Experimental Study on the Effects of the Relevance Component of the ARCS Model | This experimental study investigated effects of intrinsic motivation and embedded relevance enhancement within a computer-based interactive multimedia (CBIM) lesson for English as a foreign language (EFL) learners. Subjects, categorized as having a higher or lower level of intrinsic motivation, were randomly assigned to learn concepts related to criticism using a CBIM program featuring English language text, videos, and exercises either with or without enhanced relevance components. Two dependent variables, comprehension, as measured by a posttest, and perceptions of motivation, as measured by the Modified Instructional Material Motivation Survey (MIMMS), were assessed after students completed the CBIM program. Two-way ANOVA was used to analyze the collected data. The findings indicated that (a) the use of relevance enhancement strategies facilitated students’ language learning regardless of learners’ level of intrinsic motivation, (b) more highly intrinsically motivated students performed better regardless of the specific treatments they received, (c) the effects of the two variables were additive; intrinsically motivated students who learned from the program with embedded instructional strategies performed the best overall, and (d) there was no significant interaction between the two variables. |
Fully Distributed Privacy Preserving Mini-batch Gradient Descent Learning | In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a light-weight protocol to quickly and securely compute the sum of the inputs of a subset of participants, assuming a semi-honest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed mini-batch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. In other words, speed and robustness take precedence over precision. We analyze the protocol theoretically as well as experimentally, based on churn statistics from a real smartphone trace. We derive a sufficient condition for preventing the leakage of an individual value, and we demonstrate the feasibility of the overhead of the protocol. |
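A minimal sketch of the secure-sum idea this abstract describes, using generic additive masking over a prime field under a semi-honest adversary; this is an illustrative construction, not the paper's exact protocol, and a real implementation would use a cryptographic RNG and handle churn.

```python
# Sketch: each participant splits its private value into additive shares, so
# the aggregate sum is exact while no single party sees any individual value.
# `random` is used for brevity; use the `secrets` module in practice.
import random

P = 2**61 - 1  # arithmetic modulo a large prime

def make_shares(value, n_peers):
    """Split `value` into n_peers random shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_peers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def secure_sum(private_values):
    n = len(private_values)
    received = [[] for _ in range(n)]
    for v in private_values:                      # each participant i...
        for j, s in enumerate(make_shares(v, n)): # ...sends one share to each peer j
            received[j].append(s)
    partials = [sum(r) % P for r in received]     # peers publish only share sums
    return sum(partials) % P                      # equals the true sum mod P

assert secure_sum([3, 10, 29]) == 42
```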
SOC test architecture design for efficient utilization of test bandwidth | This article deals with the design of on-chip architectures for testing large system chips (SOCs) for manufacturing defects in a modular fashion. These architectures consist of wrappers and test access mechanisms (TAMs). For an SOC with specified parameters of modules and their tests, we design an architecture that minimizes the required tester vector memory depth and test application time. In this article, we formulate the test architecture design problems for both modules with fixed- and flexible-length scan chains, assuming the relevant module parameters and a maximal SOC TAM width are given. Subsequently, we derive a formulation for an architecture-independent lower bound for the SOC test time. We analyze three types of TAM under-utilization that make the theoretical lower bound unachievable in most practical architecture instances. We present a novel architecture-independent heuristic algorithm that effectively optimizes the test architecture for a given SOC. The algorithm efficiently determines the number of TAMs and their widths, the assignment of modules to TAMs, and the wrapper design per module. We show how this algorithm can be used for optimizing both test bus and TestRail architectures with either serial or parallel test schedules. Experimental results for the ITC'02 SOC Test Benchmarks show that, compared to manual best-effort engineering approaches, we can save up to 75% in test times, while compared to previously published algorithms, we obtain comparable or better test times at negligible compute time. |
Bare-Bone Particle Swarm Optimisation for Simultaneously Discretising and Selecting Features for High-Dimensional Classification | Feature selection and discretisation have shown their effectiveness for data preprocessing, especially for high-dimensional data with many irrelevant features. While feature selection selects only relevant features, feature discretisation finds a discrete representation of data that contains enough information while ignoring minor fluctuations. These techniques are usually applied in two stages, discretisation and then selection, since many feature selection methods work only on discrete features. The most commonly used discretisation methods are univariate, in which each feature is discretised independently; therefore, the feature selection stage may not work efficiently, since information showing feature interaction is not considered in the discretisation process. In this study, we propose a new method called PSO-DFS, using bare-bone particle swarm optimisation (BBPSO) for discretisation and feature selection in a single stage. The results on ten high-dimensional datasets show that PSO-DFS obtains a substantial dimensionality reduction for all datasets. The classification performance is significantly improved or at least maintained on nine out of ten datasets by using the transformed “small” data obtained from PSO-DFS. Compared to the two-stage approach that uses PSO for feature selection on the discretised data, PSO-DFS achieves better performance on six datasets and similar performance on three datasets, with a much smaller number of features selected. |
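For reference, the bare-bones PSO position update that gives BBPSO its name, sketched below; treating each particle position as a vector of per-feature cut points is an assumption about the encoding, which the abstract does not spell out.

```python
# Sketch of the bare-bones PSO update: each dimension is resampled from a
# Gaussian centred between the personal best and global best, so no velocity
# or inertia parameters are needed.
import numpy as np

rng = np.random.default_rng(0)

def bbpso_step(pbest, gbest):
    """One bare-bones update: x ~ N((pbest + gbest) / 2, |pbest - gbest|)."""
    mean = (pbest + gbest) / 2.0
    std = np.abs(pbest - gbest)
    return rng.normal(mean, std)

# In a PSO-DFS-style setting, positions would encode per-feature cut points;
# fitness would combine classification accuracy with a feature-count penalty.
```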
Click “Like” to Change Your Behavior: A Mixed Methods Study of College Students’ Exposure to and Engagement With Facebook Content Designed for Weight Loss | BACKGROUND
Overweight or obesity is prevalent among college students and many gain weight during this time. Traditional face-to-face weight loss interventions have not worked well in this population. Facebook is an attractive tool for delivering weight loss interventions for college students because of its popularity, potential to deliver strategies found in successful weight loss interventions, and ability to support ongoing adaptation of intervention content.
OBJECTIVE
The objective of this study was to describe participant exposure to a Facebook page designed to deliver content to overweight/obese college students in a weight loss randomized controlled trial (N=404) and examine participant engagement with behavior change campaigns for weight loss delivered via Facebook.
METHODS
The intervention campaign model was based on 5 self-regulatory techniques: intention formation, action planning, feedback, goal review, and self-monitoring. Participants were encouraged to engage their existing social network to meet their weight loss goals. A health coach moderated the page and modified content based on usage patterns and user feedback. Quantitative analyses were conducted at the Facebook post and participant levels of analysis. Participant engagement was quantified by Facebook post type (eg, status update) and interaction (eg, like) and stratified by weight loss campaign (sequenced vs nonsequenced). A subset of participants were interviewed to evaluate the presence of passive online engagement or "lurking."
RESULTS
The health coach posted 1816 unique messages to the study's Facebook page over 21 months, averaging 3.45 posts per day (SD 1.96, range 1-13). In all, 72.96% (1325/1816) of the posts were interacted with at least once (eg, liked). Of these, approximately 24.75% (328/1325) had 1-2 interactions, 23.39% (310/1325) had 3-5 interactions, 25.13% (333/1325) had 6-8 interactions, and 41 posts had 20 or more interactions (3.09%, 41/1325). There was significant variability in quantifiable (ie, visible) engagement. Of 199 participants in the final intervention sample, 32 (16.1%) were highly active users and 62 (31.2%) never visibly engaged with the intervention on Facebook. Polls were the most popular type of post, followed by photos, with 97.5% (79/81) and 80.3% (386/481) interacted with at least once, respectively. Participants visibly engaged less with posts over time (partial r=-.33; P<.001). Approximately 40% of the participants interviewed (12/29, 41%) reported passively engaging with the Facebook posts by reading but not visibly interacting with them.
CONCLUSIONS
Facebook can be used to remotely deliver weight loss intervention content to college students with the help of a health coach who can iteratively tailor content and interact with participants. However, visible engagement with the study's Facebook page was highly variable and declined over time. Whether the level of observed engagement is meaningful in terms of influencing changes in weight behaviors and outcomes will be evaluated at the completion of the overall study. |
Calculation of the electronegativity of fluorine from thermochemical data | 1.
A method for calculating electronegativities from a combination of data on bond energies has been proposed.
2.
The electronegativity of fluorine is less than the generally assumed value, to wit, xF=3.5 instead of 4.0 on the Pauling scale. |
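For context, a hedged sketch of the thermochemical relation this kind of calculation typically rests on (the Pauling scheme); the abstract does not give the authors' exact formulation, so the averaging convention below is an assumption.

```latex
% Extra ionic resonance energy of the A-B bond over its homonuclear parts
% (Pauling's arithmetic-mean convention; bond energies E in eV):
\Delta_{AB} = E(A\mathrm{-}B) - \tfrac{1}{2}\bigl[E(A\mathrm{-}A) + E(B\mathrm{-}B)\bigr]
% Electronegativity difference on the Pauling scale:
|x_A - x_B| = \sqrt{\Delta_{AB}}
```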
Creative Interdisciplinary UAV Design | Interdisciplinary design is an important aspect of engineering work. Opportunities for collaboration between disciplines exist at the undergraduate level through engineering competitions and senior design courses. To be successful, the various groups must be aware of the needed synergy and must develop cross-disciplinary communication. This article describes the collaborative design process for an unmanned aerial vehicle (UAV) between an IEEE competition team and an aerospace engineering senior design team. |
WSN for forest monitoring to prevent illegal logging | Illegal logging is a widespread problem these days. In this paper, we propose a system based on the principles of wireless sensor networks (WSNs) for monitoring forests. The acoustic signal processing and evaluation system described in this paper deals with the detection of chainsaw sounds using an autocorrelation method. This work describes the first steps in building the integrated system. |
A Weighted Correlation Index for Rankings with Ties | Understanding the correlation between two different scores for the same set of items is a common problem in graph analysis and information retrieval. The most commonly used statistics that quantifies this correlation is Kendall's tau; however, the standard definition fails to capture that discordances between items with high rank are more important than those between items with low rank. Recently, a new measure of correlation based on average precision has been proposed to solve this problem, but like many alternative proposals in the literature it assumes that there are no ties in the scores. This is a major deficiency in a number of contexts, and in particular when comparing centrality scores on large graphs, as the obvious baseline, indegree, has a very large number of ties in social networks and web graphs. We propose to extend Kendall's definition in a natural way to take into account weights in the presence of ties. We prove a number of interesting mathematical properties of our generalization and describe an O(n log n) algorithm for its computation. We also validate the usefulness of our weighted measure of correlation using experimental data on social networks and web graphs. |
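A naive O(n^2) sketch of a weighted rank correlation with ties in the spirit of this abstract (the paper's algorithm runs in O(n log n)); the additive hyperbolic position weight and the normalization are one choice from the weighted-tau family, assumed here for illustration.

```python
# Sketch: weighted Kendall-style correlation where tied pairs contribute
# zero concordance and each pair is weighted by the items' positions.
import numpy as np

def weighted_tau(r, s, w=lambda i, j: 1.0 / (i + 1) + 1.0 / (j + 1)):
    """r, s: score arrays over the same items; returns a value in [-1, 1]."""
    n = len(r)
    num = nr = ns = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            wij = w(i, j)
            a, b = np.sign(r[i] - r[j]), np.sign(s[i] - s[j])
            num += wij * a * b       # tied pairs (sign 0) contribute nothing
            nr += wij * a * a        # weight of pairs not tied in r
            ns += wij * b * b        # weight of pairs not tied in s
    return num / np.sqrt(nr * ns)
```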
Sentence Pair Scoring: Towards Unified Framework for Text Comprehension | We review the task of Sentence Pair Scoring, popular in the literature in various forms — viewed as Answer Sentence Selection, Semantic Text Scoring, Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a component of Memory Networks. We argue that all such tasks are similar from the model perspective and propose new baselines by comparing the performance of common IR metrics and popular convolutional, recurrent and attention-based neural models across many Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating randomized models, propose a statistically grounded methodology, and attempt to improve comparisons by releasing new datasets that are much harder than some of the currently used well-explored benchmarks. We introduce a unified open source software framework with easily pluggable models and tasks, which enables us to experiment with multi-task reusability of trained sentence models. |
Real time face detection/monitor using raspberry pi and MATLAB | In this paper, we implement a facial monitoring system by combining the face detection and face tracking algorithms found in MATLAB with the GPIO pins of the Raspberry Pi B, using RasPi commands, so that an array of LEDs follows the facial movement. The face is detected using a Haar classifier, and its position is tracked within the assigned range using the eigenfeatures of the face, which are obtained from MATLAB's eigenvectors; face tracking is carried out by geometrical transformation so that the motion and gestures of the face can be followed. In doing so, we open up a new way of facial tracking on a live stream with the help of the Viola-Jones algorithm and an IR camera. |
Full-Wave Calibration of Time- and Frequency-Domain Ground-Penetrating Radar in Far-Field Conditions | Full-wave modeling of ground-penetrating radar (GPR) data using Green's functions for wave propagation in planar layered media and antenna characteristic global reflection and transmission functions for describing far-field antenna effects, including antenna-medium interactions, has shown great potential for nondestructive characterization of soils and materials. The accuracy of the parameters retrieved by GPR data inversion depends on the accuracy of the GPR external calibration. In this research, we studied the stability and repeatability of two different GPR systems, namely, frequency- and time-domain systems. A combination of a vector network analyzer and an 800-5200 MHz horn antenna was used as a frequency-domain GPR (FD-GPR), whereas a GSSI GPR system using a 900 MHz bowtie antenna was used as a time-domain GPR (TD-GPR). Both GPR systems, including their antennas, were calibrated several times using measurements with the antennas at different heights over a perfect electric conductor (PEC) in the laboratory, as well as over a water layer. In addition, measurements were performed over a thin water layer and a relatively thick sandy soil layer as validating media. The results showed that the FD-GPR is relatively stable, while the TD-GPR presents a significant drift, which can be accounted for using corrections based on the air direct-coupling waves (free-space measurements). Water- and PEC-based calibrations provided very similar results for the GPR calibration functions. Inversions for the water layer and the sandy soil layer provided reliable results and showed a high degree of repeatability for both radar systems. Calibration errors arising from inaccurate antenna heights over the PEC led to significant errors in the inversion results for the directive (horn) antenna, but smaller errors for the bowtie antenna. This analysis demonstrated the general validity of the proposed far-field radar modeling approach, not only with respect to frequency- and time-domain radars but also with respect to the calibrating medium. |
Bitwise Source Separation on Hashed Spectra: An Efficient Posterior Estimation Scheme Using Partial Rank Order Metrics | This paper proposes an efficient bitwise solution to the single-channel source separation task. Most dictionary-based source separation algorithms rely on iterative update rules during the run time, which becomes computationally costly especially when we employ an overcomplete dictionary and sparse encoding that tend to give better separation results. To avoid such cost we propose a bitwise scheme on hashed spectra that leads to an efficient posterior probability calculation. For each source, the algorithm uses a partial rank order metric to extract robust features that form a binarized dictionary of hashed spectra. Then, for a mixture spectrum, its hash code is compared with each source's hashed dictionary in one pass. This simple voting-based dictionary search allows a fast and iteration-free estimation of ratio masking at each bin of a signal spectrogram. We verify that the proposed BitWise Source Separation (BWSS) algorithm produces sensible source separation results for the single-channel speech denoising task, with 6–8 dB mean SDR. To our knowledge, this is the first dictionary-based algorithm for this task that is completely iteration-free in both training and testing. |
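As a sketch of a partial-rank-order hash of the kind this abstract names, here is a winner-take-all style code: each entry records which of K randomly chosen spectrum bins is largest, so the code depends only on partial rank order and is robust to monotonic distortions. The paper's exact metric may differ; this is an illustrative assumption.

```python
# Sketch: WTA-style hashing of magnitude spectra plus a simple code-matching
# similarity that could drive one-pass, voting-based dictionary search.
import numpy as np

def wta_hash(spectrum, n_codes=64, K=4, seed=0):
    """spectrum: (F,) magnitude spectrum -> (n_codes,) small-integer hash code."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(spectrum), size=(n_codes, K))
    return np.argmax(spectrum[idx], axis=1)   # position of the max in each group

def hash_similarity(code_a, code_b):
    """Fraction of matching code entries, usable as a voting score."""
    return np.mean(code_a == code_b)
```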
Using Deep Q-Learning to Control Optimization Hyperparameters | We present a novel definition of the reinforcement learning state, actions and reward function that allows a deep Q-network (DQN) to learn to control an optimization hyperparameter. Using Q-learning with experience replay, we train two DQNs to accept a state representation of an objective function as input and output the expected discounted return of rewards, or q-values, connected to the actions of either adjusting the learning rate or leaving it unchanged. The two DQNs learn a policy similar to a line search, but differ in the number of allowed actions. The trained DQNs in combination with a gradient-based update routine form the basis of the Q-gradient descent algorithms. To demonstrate the viability of this framework, we show that the DQN's q-values associated with the optimal action converge and that the Q-gradient descent algorithms outperform gradient descent with an Armijo or nonmonotone line search. Unlike traditional optimization methods, Q-gradient descent can incorporate any objective statistic and by varying the actions we gain insight into the type of learning rate adjustment strategies that are successful for neural network optimization. |
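A minimal sketch of the control loop described above: a trained DQN maps a state summarizing the objective to q-values over a small action set that rescales the learning rate or leaves it unchanged. The `q_network` stub and the concrete action multipliers are assumptions for illustration, not the paper's exact definitions.

```python
# Sketch of Q-gradient descent: the DQN picks a learning-rate action each
# step, then an ordinary gradient update is applied with the adjusted rate.
import numpy as np

ACTIONS = (0.5, 1.0, 2.0)   # multiply the learning rate, or keep it (assumed set)

def q_gradient_descent(x0, grad_fn, state_fn, q_network, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        s = state_fn(x)                      # objective statistics as the state
        a = int(np.argmax(q_network(s)))     # greedy action from the q-values
        lr *= ACTIONS[a]                     # adjust the hyperparameter
        x = x - lr * grad_fn(x)              # ordinary gradient update
    return x
```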
Investigating the use of "Grounded Theory" in information systems research | "Grounded Theory" has been employed quite widely in studies of information systems (IS) phenomena. A survey of IS literature reveals conflict in the understanding and use of "grounded theory". The term "grounded theory" is often used as a catch phrase to denote usage of a grounded theory approach to conducting research. A variety of grounded theory approaches have been employed in IS research. The purpose of this investigation was to establish the alternative approaches employed, and the extent to which each was used. To achieve this purpose, a comprehensive review of IS studies that employed "grounded theory" was carried out. Articles from the commonly ranked top 50 IS-centric journals were used as the frame of reference. These journals most closely represent the status quo in IS research. Articles for the period 1985 to 2007 were examined. The analysis revealed four main grounded theory approaches in IS research. These can be classified as the "Glaserian" grounded theory approach, the "Straussian" grounded theory approach, the use of "grounded theory" as part of a mixed methodology, and the simple application of grounded theory techniques, typically for data analysis purposes. The latter has been the most common application of "grounded theory" in IS research. The "Glaserian" approach was the least often employed, with many studies opting for the "Straussian" approach. These and other findings are discussed and implications drawn. |
Flow-Based Propagators for the SEQUENCE and Related Global Constraints | We propose new filtering algorithms for the SEQUENCE constraint and some extensions of the SEQUENCE constraint based on network flows. We enforce domain consistency on the SEQUENCE constraint in O(n) time down a branch of the search tree. This improves upon the best existing domain consistency algorithm by a factor of O(log n). The flows used in these algorithms are derived from a linear program. Some of them differ from the flows used to propagate global constraints like GCC since the domains of the variables are encoded as costs on the edges rather than capacities. Such flows are efficient for maintaining bounds consistency over large domains and may be useful for other global constraints. |
QR code location based reverse car-searching route recommendation model | To deal with the reverse car-searching issue in large buildings and parking lots, a Quick Response Code (QR code) based reverse car-searching route recommendation model is designed. By scanning the deployed QR codes, a smartphone can pinpoint the car owner's location and the parking location efficiently. Based on the submitted location information, the central control system then returns the recommended routes, helping the car owner reach the parking location effectively. In our model, the reverse car-searching route is divided into two parts: choosing the optimal exit (elevator) and computing the route with the shortest walking distance. Based on the optimal exit selection algorithm and a regional shortest path algorithm, our model can choose the preferred exits (elevators) effectively and then recommend the optimal walking route in buildings and parking lots. The simulation shows that this low-cost system can effectively solve the reverse car-searching problem in large buildings and parking lots, save the driver's car-searching time, and improve the utilization rate of parking facilities. |
Developing a Compensation Strategy | [Excerpt] The management of change remains the challenge of the 1990s. The objectives of this change are to foster better performance, control costs, and enhance flexibility--all necessary to successfully compete in fierce markets. All managers are challenged by the pace and magnitude of this change. Human resource managers are not excepted, being confronted daily with questions about how to manage employees to support changes in technology, changes in organization structures, and changes in business strategy. And employees themselves are changing: in their values and expectations, their demographic diversity, their education, and their willingness to accept change. |
Second-order Temporal Pooling for Action Recognition | Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated to video-level representations by computing statistics on these features. Typically, zeroth-order (max) or first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling, that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than their first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking Activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes, which, when combined with hand-crafted features (as is standard practice), achieve state-of-the-art accuracy. |
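A minimal sketch of second-order temporal pooling as described above: clip-level CNN features over time are aggregated into a correlation matrix rather than a max or average. The centring and normalisation details are assumptions, and the kernel-embedding extension is omitted.

```python
# Sketch: pool (T clips x D dims) clip features into a D x D second-order
# descriptor capturing co-activations across the temporal evolution.
import numpy as np

def temporal_correlation_pool(X):
    """X: (T, D) clip-level features -> (D, D) correlation descriptor."""
    Xc = X - X.mean(axis=0, keepdims=True)       # centre each feature over time
    C = Xc.T @ Xc / max(X.shape[0] - 1, 1)       # temporal covariance
    d = np.sqrt(np.clip(np.diag(C), 1e-12, None))
    return C / np.outer(d, d)                    # normalise to correlations
```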
Kannada, Telugu and Devanagari Handwritten Numeral Recognition with Probabilistic Neural Network: A Novel Approach | In this paper, a novel approach to Kannada, Telugu and Devanagari handwritten numeral recognition based on global and local structural features is proposed. A Probabilistic Neural Network (PNN) classifier is used to classify the Kannada, Telugu and Devanagari numerals separately. The algorithm is validated on Kannada, Telugu and Devanagari numeral datasets by setting various radial values of the PNN classifier under different experimental setups. The experimental results obtained are encouraging and comparable with other methods found in the literature. The novelty of the proposed method is that it is free from thinning and size normalization. |
A Comprehensive Memory Modeling Tool and Its Application to the Design and Analysis of Future Memory Hierarchies | In this paper we introduce CACTI-D, a significant enhancement of CACTI 5.0. CACTI-D adds support for modeling of commodity DRAM technology and support for main memory DRAM chip organization. CACTI-D enables modeling of the complete memory hierarchy with consistent models all the way from SRAM based L1 caches through main memory DRAMs on DIMMs. We illustrate the potential applicability of CACTI-D in the design and analysis of future memory hierarchies by carrying out a last level cache study for a multicore multithreaded architecture at the 32nm technology node. In this study we use CACTI-D to model all components of the memory hierarchy including L1, L2, last level SRAM, logic process based DRAM or commodity DRAM L3 caches, and main memory DRAM chips. We carry out architectural simulation using benchmarks with large data sets and present results of their execution time, breakdown of power in the memory hierarchy, and system energy-delay product for the different system configurations. We find that commodity DRAM technology is most attractive for stacked last level caches, with significantly lower energy-delay products. |
A survey of socially interactive robots | This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002]. © 2003 Elsevier Science B.V. All rights reserved. |
Pilot study evaluating the interaction between paclitaxel and protease inhibitors in patients with human immunodeficiency virus-associated Kaposi’s sarcoma: an Eastern Cooperative Oncology Group (ECOG) and AIDS Malignancy Consortium (AMC) trial | Paclitaxel, a cytotoxic agent metabolized by cytochrome P450 hepatic enzymes, is active for the treatment of human immunodeficiency virus (HIV)-associated Kaposi’s sarcoma. Protease inhibitors are commonly used to treat HIV infection and are known to inhibit cytochrome P450. We sought to determine whether protease inhibitors alter the pharmacokinetics of paclitaxel. Patients with advanced HIV-associated KS received paclitaxel (100 mg/m2) by intravenous infusion over 3 h, and plasma samples were collected to measure paclitaxel concentration. The area under the curve (AUC) was calculated using a combination of the log and linear trapezoidal rules, and clearance was calculated as the dose/AUC. Pharmacokinetics were compared with respect to antiretroviral therapy and toxicity. Thirty-four patients received paclitaxel, of whom 20 had no prior paclitaxel therapy and were assessable for response. Twenty-seven had pharmacokinetic studies performed. Paclitaxel exposure was higher in patients taking protease inhibitors compared to those who were not taking protease inhibitors. The increased exposure did not correlate with efficacy or toxicity. Of the 20 patients assessable for response, 6 (30%) had an objective response and median progression-free survival was 7.8 months (95% confidence interval, 5.6, 21.0 months). Despite higher exposure to paclitaxel, patients on protease inhibitors did not experience enhanced toxicity or efficacy. |
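For readers unfamiliar with the pharmacokinetic calculation, the sketch below shows one common way to combine the linear and log trapezoidal rules for AUC (linear while concentrations rise, log while they fall) and derive clearance as dose/AUC. The sampling times, concentrations, dose and body surface area are invented for illustration; this is not the study's actual analysis code.

```python
import numpy as np

def auc_lin_log_trapezoid(t, c):
    """AUC using the linear trapezoid while concentrations rise and the
    log trapezoid while they decline ('linear-up / log-down')."""
    auc = 0.0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        c1, c2 = c[i], c[i + 1]
        if c2 >= c1 or c1 <= 0 or c2 <= 0:
            auc += dt * (c1 + c2) / 2.0              # linear trapezoid
        else:
            auc += dt * (c1 - c2) / np.log(c1 / c2)  # log trapezoid
    return auc

# Hypothetical plasma samples: times (h) and concentrations (arbitrary units).
t = np.array([0.5, 1, 2, 3, 4, 6, 9, 12])
c = np.array([0.8, 1.9, 3.4, 4.1, 2.6, 1.2, 0.5, 0.2])
auc = auc_lin_log_trapezoid(t, c)
dose = 100 * 1.8          # 100 mg/m2 for an assumed 1.8 m2 body surface area
print(f"AUC = {auc:.2f}, clearance = dose/AUC = {dose / auc:.1f}")
```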
Identification of Move Method Refactoring Opportunities | Placement of attributes/methods within classes in an object-oriented system is usually guided by conceptual criteria and aided by appropriate metrics. Moving state and behavior between classes can help reduce coupling and increase cohesion, but it is nontrivial to identify where such refactorings should be applied. In this paper, we propose a methodology for the identification of Move Method refactoring opportunities that constitute a way for solving many common feature envy bad smells. An algorithm that employs the notion of distance between system entities (attributes/methods) and classes extracts a list of behavior-preserving refactorings based on the examination of a set of preconditions. In practice, a software system may exhibit such problems in many different places. Therefore, our approach measures the effect of all refactoring suggestions based on a novel entity placement metric that quantifies how well entities have been placed in system classes. The proposed methodology can be regarded as a semi-automatic approach since the designer will eventually decide whether a suggested refactoring should be applied or not based on conceptual or other design quality criteria. The evaluation of the proposed approach has been performed considering qualitative, metric, conceptual, and efficiency aspects of the suggested refactorings in a number of open-source projects. |
Pattern Reconfigurable Antenna Based on Morphing Bistable Composite Laminates | In this paper, a novel pattern reconfigurable antenna based on morphing bistable composite laminates is presented. The bistable asymmetric glass-fiber reinforced polymer (GFRP) composite laminates have two stable configurations with curvatures of opposite signs. The antenna pattern is reconfigured by transforming the configuration of the bistable GFRP laminate which acts as the substrate of the antenna. The coplanar waveguide transmission lines feeding technique is used for the microstrip quasi-Yagi antenna. A prototype of the proposed antenna is fabricated using a semi-automatic screen printer and an autoclave. The transformation between the two stable states of the proposed antenna using Ni/Ti shape memory alloy springs is investigated experimentally. The out-of-plane displacements, reflection coefficients and radiation patterns for the two stable configurations of the antenna are measured, which agree well with the simulated results. The main beam direction is 89° and 59° for the two stable configurations, respectively. In addition, the influences of various bending radii on the radiation patterns are investigated to gain a thorough understanding of the reconfigurable mechanism of the proposed antenna. Finally, a two-element array of such an antenna is presented and measured. The proposed antenna provides a potential application in multifunctional, conformal, morphing, and integrated structures. |
Reinforcement Learning for Predictive Analytics in Smart Cities | The digitization of our lives causes a shift in data production as well as in the required data management. Numerous nodes are capable of producing huge volumes of data in our everyday activities. Sensors, personal smart devices and the Internet of Things (IoT) paradigm lead to a vast infrastructure that covers all aspects of activities in modern societies. In most cases, the critical issue for public authorities (usually local ones, such as municipalities) is the efficient management of data towards the support of novel services. The reason is that analytics provided on top of the collected data could help in the delivery of new applications that will facilitate citizens’ lives. However, the provision of analytics demands intelligent techniques for the underlying data management. The best-known technique is the separation of huge volumes of data into a number of parts and their parallel management to limit the time required for the delivery of analytics. Afterwards, analytics requests in the form of queries can be realized to derive the necessary knowledge for supporting intelligent applications. In this paper, we define the concept of a Query Controller (QC) that receives queries for analytics and assigns each of them to a processor placed in front of each data partition. We discuss an intelligent process for query assignments that adopts Machine Learning (ML). We adopt two learning schemes, i.e., Reinforcement Learning (RL) and clustering. We report on the comparison of the two schemes and elaborate on their combination. Our aim is to provide an efficient framework to support the decision making of the QC, which should swiftly select the appropriate processor for each query. We provide mathematical formulations for the discussed problem and present simulation results. Through a comprehensive experimental evaluation, we reveal the advantages of the proposed models and describe the outcomes while comparing them with a deterministic framework. |
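As a rough illustration of the RL scheme, here is a minimal epsilon-greedy Q-learning loop in which a QC picks a processor per query and is rewarded with the negative of the observed latency. The load states, latency model and constants are all invented; the paper's formulations are far more detailed.

```python
import random

# Minimal epsilon-greedy Q-learning sketch for a Query Controller (QC)
# that assigns each incoming query to one of several processors.
N_PROC, EPS, ALPHA, GAMMA = 4, 0.1, 0.2, 0.9
Q = [[0.0] * N_PROC for _ in range(2)]         # states: 0 = light load, 1 = heavy load
random.seed(0)

def latency(state, proc):
    base = 1.0 + 0.3 * proc                    # processors differ in speed (assumed)
    return base * (2.0 if state else 1.0) * random.uniform(0.8, 1.2)

state = 0
for _ in range(5000):
    if random.random() < EPS:
        action = random.randrange(N_PROC)      # explore
    else:
        action = Q[state].index(max(Q[state])) # exploit current estimate
    reward = -latency(state, action)           # faster answer -> higher reward
    next_state = random.choice([0, 1])         # load fluctuates independently here
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print("learned preference (light load): processor", Q[0].index(max(Q[0])))
```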
BzTree: A High-Performance Latch-free Range Index for Non-Volatile Memory | Storing a database (rows and indexes) entirely in non-volatile memory (NVM) potentially enables both high performance and fast recovery. To fully exploit parallelism on modern CPUs, modern main-memory databases use latch-free (lock-free) index structures, e.g. the Bw-tree or skip lists. To achieve high performance, NVM-resident indexes also need to be latch-free. This paper describes the design of the BzTree, a latch-free B-tree index designed for NVM. The BzTree uses a persistent multi-word compare-and-swap operation (PMwCAS) as a core building block, enabling an index design that has several important advantages compared with competing index structures such as the Bw-tree. First, the BzTree is latch-free yet simple to implement. Second, the BzTree is fast, showing up to 2x higher throughput than the Bw-tree in our experiments. Third, the BzTree does not require any special-purpose recovery code. Recovery is near-instantaneous and only involves rolling back (or forward) any PMwCAS operations that were in-flight during failure. Our end-to-end recovery experiments of the BzTree report an average recovery time of 145 μs. Finally, the same BzTree implementation runs seamlessly on both volatile RAM and NVM, which greatly reduces the cost of code maintenance. |
Experience with SCRAM, a SCenario Requirements Analysis Method | A method of scenario based requirements engineering is described that uses a combination of early prototypes, scenario scripts and design rationale to elicit and validate user requirements. Experience in using the method on an EU project, Multimedia Broker, is reported. Quantitative data on requirements sessions is analysed to assess user participation and quality of requirements captured. The method worked well but there were problems in the use of design rationale and control of turn taking in RE sessions. Lessons learned in using the method are summarised and future improvements for the method are discussed. |
Do effects of price discounts and nutrition education on food purchases vary by ethnicity, income and education? Results from a randomised, controlled trial. | BACKGROUND
Reducing health inequalities requires interventions that work as well, if not better, among disadvantaged populations. The aim of this study was to determine if the effects of price discounts and tailored nutrition education on supermarket food purchases (percentage energy from saturated fat and healthy foods purchased) vary by ethnicity, household income and education.
METHOD
A 2×2 factorial trial of 1104 New Zealand shoppers randomised to receive a 12.5% discount on healthier foods and/or tailored nutrition education (or no intervention) for 6 months.
RESULTS
There was no overall association of price discounts or nutrition education with percentage energy from saturated fat, or nutrition education with healthy food purchasing. There was an association of price discounts with healthy food purchasing (0.79 kg/week increase; 95% CI 0.43 to 1.16) that varied by ethnicity (p=0.04): European/other 1.02 kg/week (n=755; 95% CI 0.60 to 1.43); Pacific 1.20 kg/week (n=101; 95% CI 0.06 to 2.34); Māori -0.15 kg/week (n=248; 95% CI -1.10 to 0.80). This association of price discounts with healthy food purchasing did not vary by household income or education.
CONCLUSIONS
While a statistically significant variation by ethnicity in the effect of price discounts on food purchasing was found, the authors caution against a causal interpretation due to likely biases (eg, attrition) that differentially affected Māori and Pacific people. The study highlights the challenges in generating valid evidence by social groups for public health interventions. The null findings for tailored nutritional education across all social groups suggest that structural interventions (such as price) may be more effective. |
Making isovists syntactic: isovist integration analysis | Isovists and isovist fields are of interest to space syntax in that they offer a way of addressing the relationship between the viewer and their immediate spatial environment; however, in the form described by Benedikt (1979), they are essentially non-syntactic. All the measures he proposes and then plots as fields are locally defined and are independent of the state of the field in other locations. This paper presents a method for integrating isovists, based on the connectivities of a set of isovists represented as a graph, which allows global relational measures to be developed that are attributable to each viewer location but are essentially relational, and so syntactic, in their definition. We make a comparison of axial line and convex space integration analysis with isovist integration analysis using case studies of both building and urban configurations. We show that isovist integration displays an excellent correlation with observed people movement, including a more detailed illustration of space usage than conventional space syntax analysis. |
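The integration idea can be sketched in a few lines: treat isovists as nodes of a visibility graph and compute each location's mean shortest-path depth to all others, from which syntactic integration measures are derived. The toy graph below is invented, and the paper's actual measures are normalized variants of this depth calculation.

```python
from collections import deque

# Sketch: isovists sampled at locations form a graph (two locations connect
# when mutually visible); integration derives from mean shortest-path depth.
def mean_depth(adj, start):
    dist = {start: 0}
    queue = deque([start])
    while queue:                                # breadth-first search
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    others = [d for node, d in dist.items() if node != start]
    return sum(others) / len(others)

# Toy visibility graph: node -> set of mutually visible nodes (invented).
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}
for node in adj:
    # Lower mean depth = more integrated (more "syntactically central").
    print(node, round(mean_depth(adj, node), 2))
```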
Social Theory and the Sociological Discipline(s) - European Sociological Association's Social Theory Conference | One cost of sociology's growth and its institutional success is fragmentation and specialization. However, the continual splicing off into new themes and subfields, and the frequent cutting off from traditional links with the classical founders, discipline-wide issues, and subfield-transcending questions, are often criticized at ESA and ISA meetings. This ultimately contradicts sociology's self-understanding in two important ways: first, its widespread post-Kuhnian philosophical foundation and, second, its public role in society. Consequently, it is increasingly important to remind ourselves what the identity of sociology is and look for unifying links that inspire the breadth of sociological studies, namely, for social theory: How does social theory keep sociology and the social sciences together? And, in particular, how does it do that in practice? |
Enhanced CSI Acquisition for FDD Multi-User Massive MIMO Systems | Massive multiple-input multiple-output (MIMO) is envisioned to meet the growing spectral efficiency (SE) requirement in next generation cellular systems. However, it has been recognized that the unaffordable channel state information reference signal (CSI-RS) overhead required for acquiring CSI is a major challenge in frequency division duplexing (FDD) massive MIMO systems. Certain attempts to tackle this CSI-RS overhead limitation, designed for scenarios with dispersive user locations and sparse channels, suffer from inaccurate channel estimation when the channels of different users become highly correlated and rich. In this paper, we propose an enhanced CSI acquisition scheme for FDD multi-user massive MIMO systems, which exploits a beamformed CSI-RS transmission mechanism to reduce the CSI-RS overhead. We present two limited feedback algorithms which provide different trade-offs between system performance and CSI feedback overhead, where a multistage CSI acquisition strategy is adopted to avoid prohibitive computational complexity and feedback requirements. The simulation results show that our CSI acquisition scheme, without additional CSI-RS overhead or enormous CSI feedback overhead, achieves higher SE performance compared to the conventional scheme. |
An Improved Automatic Algorithm for Global Eddy Tracking Using Satellite Altimeter Data | In this paper, we propose a new hybrid mesoscale eddy tracking method to enhance eddy tracking accuracy from global satellite altimeter data. This method integrates both physical and geometric eddy properties (including the distance between eddies, the area and amplitude of the eddy, and the shape of the eddy edge) via the output of detection and the calculation of the Hausdorff distance, which describes the similarity between eddy boundaries. We applied the proposed hybrid method to several previously reported eddies and compared the results with those from two traditional tracking methods. A quantitative comparison indicates that the hybrid algorithm can better reveal eddy signals in terms of their spatial scale, amplitude, lifespan, and splitting. The hybrid method was then applied to global mesoscale eddy tracking from 1993 to 2015. Global distributions of net eddy numbers revealed that the sources of eddies are located along the eastern boundaries of the world oceans, while the sinks of eddies are located along the western boundaries. The lifespan distribution of eddies exhibited steep growth from high and low latitudes to middle latitudes. A clear divergent pathway demonstrates that cyclonic/anticyclonic eddies tend to travel poleward/equatorward in the world oceans. |
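The boundary-similarity ingredient of the hybrid tracker can be illustrated with a direct NumPy computation of the symmetric Hausdorff distance between two sampled eddy edges. The two edges below are synthetic; thresholds and the integration with the other eddy properties are omitted.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two eddy boundaries given as
    (N, 2) arrays of points; scores boundary-shape similarity."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())       # both directions

# Two invented, roughly circular eddy edges sampled at 100 points each.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
edge1 = np.c_[np.cos(theta), np.sin(theta)]
edge2 = np.c_[1.1 * np.cos(theta) + 0.05, 0.9 * np.sin(theta)]
print(f"Hausdorff distance: {hausdorff(edge1, edge2):.3f}")
```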
Neuromuscular deficit in chronic ankle instability | The peroneal reaction time (PRT) is used in the assessment of neuromuscular deficits in chronic functional ankle instability. The present study was conducted to determine the PRT in a large collective of patients with chronic ankle instability, because it is unclear whether this parameter of neuromuscular deficit is generally prolonged in these patients. A total of 186 patients underwent a diagnostic algorithm consisting of anamnesis, clinical examination, X-ray diagnostics and determination of the PRT on a tilting platform. A prolonged PRT as a manifestation of a neuromuscular deficit could be detected in the majority of the patients (n = 143; 77%). Comparing the affected and healthy legs, 77 patients (41%) showed a significant side-to-side difference in talar shift (p = 0.002) and talar tilt (p = 0.04) in the radiological stress views. Of these 77 patients, only 15 (8%) showed radiological evidence of a purely mechanical problem. As a consequence of recurring ankle sprains, a post-traumatic deficit in proprioception has to be expected in most cases. Consequently, a conservative therapy approach should be followed, including specific training to address the neuromuscular and proprioceptive deficits. |
Mapping eye movements to cognitive processes | Eye movements provide a rich and informative window into a person's thoughts and intentions. In recent years researchers have increasingly employed eye movements to study cognition in psychological experiments, to understand behavior in user interfaces, and even to control computers through eye-based input devices. Unfortunately, like speech and handwriting, eye movements generate vast amounts of data with significant individual variability and equipment noise. Thus, the analysis of eye-movement data (that is, determining what people are thinking based on where they are looking) can be extremely tedious and time-consuming. Typical eye-movement data sets are simply too large and complex to be analyzed by hand or by naive automated methods. This thesis formalizes a new class of algorithms that provide fast and robust analysis of eye-movement data. Specifically, the thesis describes three novel algorithms for tracing eye movements: mapping eye-movement protocols to the sequential predictions of a cognitive process model. Two algorithms, fixation tracing and point tracing, employ hidden Markov models to determine the best probabilistic interpretation of the data given the model. The third algorithm, target tracing, extends an existing tracing algorithm based on sequence matching to eye movements. The thesis also formalizes several algorithms for identifying fixations in raw eye-movement protocols and provides a working system, EyeTracer, that embodies the proposed tracing and fixation-identification algorithms. To demonstrate the power of the proposed algorithms, the thesis applies them in three real-world domains: equation solving, reading, and eye typing. The equation-solving studies show how the algorithms can code, or interpret, eye-movement protocols as accurately as expert human coders in significantly less time. The studies also illustrate how the algorithms facilitate the prototyping and refinement of cognitive models. The reading study demonstrates how the algorithms help to evaluate and compare two existing computational models of reading and clear up temporal aspects of reading data using sequential aspects of the data. The eye-typing study shows how the algorithms can interpret eye movements in real time and help eliminate usability restrictions imposed by existing eye-based interfaces. |
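As a flavor of the fixation-identification step, here is a generic dispersion-threshold (I-DT) identifier, one standard algorithm of the kind the thesis formalizes. It is not EyeTracer's code, and the thresholds and gaze samples are invented.

```python
def idt_fixations(points, dispersion_max=1.0, min_len=4):
    """Dispersion-threshold (I-DT) fixation identification: grow a window
    while its x/y dispersion stays small; emit the window's centroid."""
    def disp(win):
        xs, ys = zip(*win)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i + min_len <= len(points):
        j = i + min_len
        if disp(points[i:j]) <= dispersion_max:
            while j < len(points) and disp(points[i:j + 1]) <= dispersion_max:
                j += 1                              # extend while compact
            xs, ys = zip(*points[i:j])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1                                  # slide past noisy sample
    return fixations

# Noisy gaze samples: a fixation near (0,0), a saccade, a fixation near (5,5).
pts = [(0.1, 0.0), (0.0, 0.2), (0.15, 0.1), (0.05, 0.05),
       (2.5, 2.5), (5.0, 5.1), (5.1, 4.9), (4.95, 5.0), (5.05, 5.05)]
print(idt_fixations(pts))   # two centroids, one per fixation cluster
```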
Streaming and Exploration of Dynamically Changing Dense 3D Reconstructions in Immersive Virtual Reality | We introduce a novel framework that enables large-scale dense 3D scene reconstruction, data streaming over the network and immersive exploration of the reconstructed environment using virtual reality. The system is operated by two remote entities, where one entity – for instance an autonomous aerial vehicle – captures and reconstructs the environment as well as transmits the data to another entity – such as a human observer – that can immersively explore the 3D scene, decoupled from the view of the capturing entity. The performance evaluation revealed the framework’s capability to perform RGB-D data capturing, dense 3D reconstruction, streaming and dynamic scene updating in real time for indoor environments up to a size of 100 m2, using either a state-of-the-art mobile computer or a workstation. Thereby, our work provides a foundation for enabling immersive exploration of remotely captured and incrementally reconstructed dense 3D scenes, which has not been shown before and opens up new research directions for the future. |
Estimation of Composite Load Model Parameters Using an Improved Particle Swarm Optimization Method | Power system loads are one of the crucial elements of modern power systems and, as such, must be properly modelled in stability studies. However, the static and dynamic characteristics of a load are commonly unknown, extremely nonlinear, and usually time varying. Consequently, a measurement-based approach for determining the load characteristics would offer a significant advantage since it could update the parameters of load models directly from the available system measurements. For this purpose, and in order to accurately determine load model parameters, a suitable parameter estimation method must be applied. The conventional approach to this problem favors the use of standard nonlinear estimators or artificial intelligence (AI)-based methods. In this paper, a new solution for determining the unknown load model parameters is proposed: an improved particle swarm optimization (IPSO) method. The proposed method is an AI-type technique similar to the commonly used genetic algorithms (GAs) and is shown to provide a promising alternative. This paper presents a performance comparison of IPSO and GA using computer simulations and measured data obtained from realistic laboratory experiments. |
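A bare-bones PSO loop already conveys the measurement-based idea: candidate parameter vectors are scored by how well the model reproduces measured responses, and the swarm converges on the best fit. The load model, data and constants below are invented, and the paper's specific improvements to PSO are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(params, t):
    a, b = params
    return a * np.exp(-b * t)                       # stand-in load response

# Synthetic "measurements" from true parameters (2.0, 0.7) plus noise.
t = np.linspace(0, 5, 50)
measured = model((2.0, 0.7), t) + rng.normal(0, 0.02, t.size)
cost = lambda p: np.sum((model(p, t) - measured) ** 2)

n, dims, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5           # standard PSO constants
pos = rng.uniform(0.1, 5.0, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 10.0)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("estimated (a, b):", np.round(gbest, 3))      # close to (2.0, 0.7)
```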
Graphene photonics and optoelectronics | Electrons propagating through the bidimensional structure of graphene have a linear relation between energy and momentum, and thus behave as massless Dirac fermions1–3. Consequently, graphene exhibits electronic properties for a two-dimensional (2D) gas of charged particles described by the relativistic Dirac equation, rather than the non-relativistic Schrödinger equation with an effective mass1,2, with carriers mimicking particles with zero mass and an effective ‘speed of light’ of around 10^6 m s^-1. Graphene exhibits a variety of transport phenomena that are characteristic of 2D Dirac fermions, such as specific integer and fractional quantum Hall effects4,5, a ‘minimum’ conductivity of ~4e^2/h even when the carrier concentration tends to zero1, and Shubnikov–de Haas oscillations with a π phase shift due to Berry’s phase1. Mobilities (μ) of up to 10^6 cm^2 V^-1 s^-1 are observed in suspended samples. This, combined with near-ballistic transport at room temperature, makes graphene a potential material for nanoelectronics6,7, particularly for high-frequency applications8. Graphene also shows remarkable optical properties. For example, it can be optically visualized, despite being only a single atom thick9,10. Its transmittance (T) can be expressed in terms of the fine-structure constant11. The linear dispersion of the Dirac electrons makes broadband applications possible. Saturable absorption is observed as a consequence of Pauli blocking12,13, and nonequilibrium carriers result in hot luminescence14–17. Chemical and physical treatments can also lead to luminescence18–21. These properties make it an ideal photonic and optoelectronic material. |
Nurse moral distress and cancer pain management: an ethnography of oncology nurses in India. | BACKGROUND
The majority of cancer patients in low- and middle-income countries (LMICs) present with late-stage, incurable disease, and basic tools to alleviate patient suffering, such as morphine, are often absent. Oncology nurses must cope with many challenges and may experience moral distress (MD), yet little research has examined this experience in LMICs.
OBJECTIVE
This ethnographic study explored the experience of MD with oncology nurses (n = 37) and other providers (n = 22) in India and its potential relationship to opioid availability.
METHODS
Data (semistructured interviews and field observations) were collected at a 300-bed government cancer hospital in urban South India over 9 months. Dedoose v.4.5.91 supported analysis of transcripts using a coding schema that mapped to an Integrated Model of Nurse Moral Distress and concepts that emerged from field notes.
RESULTS
Primary themes included "We feel bad," "We are alone and afraid," "We are helpless," and "We leave it." A weak link between MD and opioid availability was found.
CONCLUSIONS
Participants described significant work-related distress, but the moral dimension to this distress was less clear as some key aspects of the Integrated Model of Nurse Moral Distress were not supported. The concept of MD may have limited applicability in settings where alternative courses of action are unknown, or not feasible, and where differing social and cultural norms influence moral sensitivity.
IMPLICATIONS FOR PRACTICE
Improving job-related conditions is prerequisite to creating an environment where MD can manifest. Educational initiatives in LMICs must account for the role of the oncology nurse and their contextual moral and professional obligations. |
Is Sentiment Analysis Domain-Dependent? | The purpose of this research, carried out within applied linguistics, is to consider the dependency of the sentiment lexicon and other sentiment analysis tools on the domain under study. For the experiment, we used the REGEX algorithm, including the sentiment lexicon and formal grammar rules applied with certain priorities. These rules and the corresponding syntactic models are similar to regular expressions which detect certain text elements, simplify each sentence and present the text as a formal model. Reviews in Russian from three domains (Bank Service Quality, Hotel Service Quality and Sightseeing) are analyzed; the F1 measure is used as the efficiency criterion. The experiment does not reveal any domain-dependency of the algorithm applied. It is determined that the system generally detects positive reviews better than negative ones. When negative opinions are expressed, there is a tendency to use non-standard vocabulary and syntax. |
Clinical outcomes of severe malnutrition in a high tuberculosis and HIV setting. | OBJECTIVE
Case death rates for severe childhood malnutrition remain stubbornly elevated in high HIV prevalence settings, despite the implementation of WHO guidelines. This study examined case death and other clinical outcomes in malnourished children with and without HIV infection.
METHODS
A prospective, observational study was undertaken at three tertiary hospitals in Johannesburg, South Africa. All severely malnourished children had their HIV status established, and anthropometric, clinical and diagnostic findings and admission outcomes were analysed.
FINDINGS
Just over half (51%) of the 113 severely malnourished children were HIV infected, but 31/58 (54%) of these children had their positive status diagnosed only after admission. Marasmic children were significantly more likely to be HIV infected (OR 9.7, 95% CI 3.5 to 29.1). Tuberculosis (TB) was strongly suspected and treated in 27 children (24%) although confirmed in only five (4%). The overall case death rate was 11.5%. HIV infection, pallor and shock were significant predictors of death. HIV-infected children were six times more likely to die compared with HIV-negative children (19% vs 3.6%, OR 6.2, 95% CI 1.2 to 59). HIV-'affected' children (HIV negative but exposed) and HIV-negative children had similar outcomes.
CONCLUSION
HIV infection significantly increases severe malnutrition case death. WHO guidelines for the management of severe malnutrition in high HIV prevalence settings need to be modified to include routine HIV and TB testing and offer guidance on the criteria and timing of TB treatment and highly active antiretroviral therapy initiation. |
E-RECRUITMENT: A ROADMAP TOWARDS E-HUMAN RESOURCE MANAGEMENT | The only vital value for an enterprise is the experience, skills, innovativeness and insights of its people. Human resources are the key components in every organization. They represent the total knowledge, talent, attitude, creative ability, aptitude and beliefs of the individuals involved in the affairs of an organization. Management of human resources is an integral part of every concern. It is associated with the people at work and their relationships within and outside the enterprise. Recruitment of efficient staff is one of the important activities, as it generates the human capital for the concern. In recent years, the field of human resource management has undergone numerous technological advancements. The Internet has made an impact on the overall functioning of the human resource department. HR processes and procedures have been supported by everything from complicated file-folder systems to automation, moving from the use of multiple systems and databases to a single version of the whole system. The field has progressed through successive innovations such as Human Resource Information Systems, Virtual Human Resources and Electronic Human Resource Management (E-HRM). E-HRM means conducting business transactions by using the internet along with other technologies. In other words, E-HRM is a way of implementing HRM strategies, policies and practices in an organization through the directed support of web technology based channels. It influences every area of human resource management. E-Recruitment refers to posting vacancies on the corporate website or on an online recruitment vendor's website. It allows applicants to send their resumes electronically through an email or in some other electronic format. E-recruitment methods and systems have helped to reduce much of the routine administrative work involved in recruitment. The study tries to clarify the overall concept of e-recruitment and collects information regarding its methods, such as e-mails, corporate websites and commercial job boards. It also covers the general advantages and disadvantages of e-recruitment. |
Cloud radio access network: Virtualizing wireless access for dense heterogeneous systems | Cloud radio access network (C-RAN) refers to the virtualization of base station functionalities by means of cloud computing. This results in a novel cellular architecture in which low-cost wireless access points, known as radio units or remote radio heads, are centrally managed by a reconfigurable centralized "cloud", or central, unit. C-RAN allows operators to reduce the capital and operating expenses needed to deploy and maintain dense heterogeneous networks. This critical advantage, along with spectral efficiency, statistical multiplexing and load balancing gains, make C-RAN well positioned to be one of the key technologies in the development of 5G systems. In this paper, a succinct overview is presented regarding the state of the art on the research on C-RAN with emphasis on fronthaul compression, baseband processing, medium access control, resource allocation, system-level considerations and standardization efforts. |
Media Freedom, Political Knowledge, and Participation | Alexis de Tocqueville ([1835–1840] 1988, p. 517) once remarked, “Only a newspaper can put the same thought at the same time before a thousand readers.” In the twenty-first century, a similar claim holds true for television, radio, and the Internet, which provide information to millions of viewers and listeners across the globe. Given the importance of the media, governments may seek to control or influence the flow of media-provided information reaching their citizens. This control can be direct, such as when states monopolize media ownership in their nations, or indirect, such as when they exert financial pressure on private media outlets to cover news in a certain way (Leeson and Coyne, 2005). This paper examines the relationship between media freedom from government control and citizens’ political knowledge, political participation, and voter turnout. To explore these connections, I examine media freedom and citizens’ political knowledge in 13 central and eastern European countries with data from Freedom House’s Freedom of the Press report and the European Commission’s Candidate Countries Eurobarometer survey. Next, I consider media freedom and citizens’ political participation in 60 countries using data from the World Values Survey. Finally, I investigate media freedom and voter turnout in these same 60 or so countries with data from the International Institute for Democracy and Electoral Assistance. I find that where government owns a larger share of media outlets and infrastructure, regulates the media industry more, and does more to control the content of news, citizens are more politically ignorant and apathetic. Where the media is less regulated and there is greater private ownership in the media industry, citizens are more politically knowledgeable and active. |
Structured Bayesian Pruning via Log-Normal Multiplicative Noise | Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude into different parts of the neural network during training. It was recently shown that the Bayesian dropout procedure not only improves generalization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In this paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g. removes neurons and/or convolutional channels in CNNs. To do this, we inject noise into the neurons' outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and a truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is computed in closed form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer. |
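The pruning criterion can be sketched independently of the variational machinery: multiply each neuron's output by log-normal noise and drop neurons whose signal-to-noise ratio is low. The code below uses the closed-form SNR of a log-normal variable; the truncated priors, KL term and training loop from the paper are omitted, and all numbers are illustrative.

```python
import numpy as np

# Sketch: multiply each neuron's output by log-normal noise
# theta = exp(mu + sigma * eps), then prune neurons whose
# SNR = E[theta] / std[theta] falls below a threshold.
rng = np.random.default_rng(3)
n_neurons = 8
mu = rng.normal(0.0, 0.1, n_neurons)            # per-neuron noise means (toy)
sigma = rng.uniform(0.1, 1.5, n_neurons)        # per-neuron noise scales (toy)

def noisy_forward(h):
    eps = rng.standard_normal(h.shape[-1])
    theta = np.exp(mu + sigma * eps)            # log-normal multiplicative noise
    return h * theta

# SNR of a log-normal variable is 1/sqrt(exp(sigma^2) - 1), independent of mu.
snr = 1.0 / np.sqrt(np.exp(sigma ** 2) - 1.0)
keep = snr > 1.0                                # structured mask over whole neurons
print("kept neurons:", np.flatnonzero(keep))

h = rng.standard_normal((1, n_neurons))
print(noisy_forward(h)[:, keep])                # pruned, noise-injected activations
```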
Review of Literature on Heat Transfer Enhancement in Compact Heat Exchangers | This paper features a broad discussion on the application of enhanced heat transfer surfaces to compact heat exchangers. The motivation for heat transfer enhancement is discussed, and the principles behind compact heat exchangers are summarized. Next, various methods for evaluating and comparing different types of heat transfer enhancement devices using first and/or second law analysis are presented. Finally, the following plate-fin enhancement geometries are discussed: rectangular and triangular plain fins, offset strip fins, louvered fins, and vortex generators. MOTIVATION FOR HEAT TRANSFER ENHANCEMENT For well over a century, efforts have been made to produce more efficient heat exchangers by employing various methods of heat transfer enhancement. The study of enhanced heat transfer has gained serious momentum during recent years, however, due to increased demands by industry for heat exchange equipment that is less expensive to build and operate than standard heat exchange devices. Savings in materials and energy use also provide strong motivation for the development of improved methods of enhancement. When designing cooling systems for automobiles and spacecraft, it is imperative that the heat exchangers are especially compact and lightweight. Also, enhancement devices are necessary for the high heat duty exchangers found in power plants (i.e. air-cooled condensers, nuclear fuel rods). These applications, as well as numerous others, have led to the development of various enhanced heat transfer surfaces. In general, enhanced heat transfer surfaces can be used for three purposes: (1) to make heat exchangers more compact in order to reduce their overall volume, and possibly their cost, (2) to reduce the pumping power required for a given heat transfer process, or (3) to increase the overall UA value of the heat exchanger. A higher UA value can be exploited in either of two ways: (1) to obtain an increased heat exchange rate for fixed fluid inlet temperatures, or (2) to reduce the mean temperature difference for the heat exchange; this increases the thermodynamic process efficiency, which can result in a saving of operating costs. Enhancement techniques can be separated into two categories: passive and active. Passive methods require no direct application of external power. Instead, passive techniques employ special surface geometries or fluid additives which cause heat transfer enhancement. On the other hand, active schemes such as electromagnetic fields and surface vibration do require external power for operation [1]. The majority of commercially interesting enhancement techniques are passive ones. Active techniques have attracted little commercial interest because of the costs involved, and the problems that are associated with vibration or acoustic noise [2]. This paper deals only with gas-side heat transfer enhancement using special surface geometries. |
Overview of Grounding and Configuration Options for Meshed HVDC Grids | This paper provides an overview and comparison of the possible grounding and configuration options for meshed HVDC grids. HVDC grids are expected to play a key role in the development of future power systems. Nevertheless, the type of grounding and the base configuration for the grid have not yet been determined. Various studies related to multiterminal HVDC or meshed HVDC grids often assume one specific configuration and grounding scheme and take it for granted. However, since a large number of options exist, an overview is needed to balance the pros and cons. In this paper, the influence of the different grounding options on fault behavior is investigated for point-to-point connections. Furthermore, the impact of the grounding type on the system fault behavior is investigated with electromagnetic transient simulations. Next, the suitability of a configuration to serve as a base configuration in a meshed dc grid is investigated and compared in terms of extensibility and flexibility. In this evaluation, the grounding type, the number, and location of grounding points in a grid are considered as well. Finally, an overview of the most important conclusions is given in a summarizing table. |
Firefly Algorithms for Multimodal Optimization | Nature-inspired algorithms are among the most powerful algorithms for optimization. This paper intends to provide a detailed description of a new Firefly Algorithm (FA) for multimodal optimization applications. We will compare the proposed firefly algorithm with other metaheuristic algorithms such as particle swarm optimization (PSO). Simulations and results indicate that the proposed firefly algorithm is superior to existing metaheuristic algorithms. Finally we will discuss its applications and implications for further research. |
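For concreteness, a compact firefly loop is sketched below: each firefly moves toward every brighter one with attractiveness beta0 * exp(-gamma * r^2), plus a small random step. The objective and constants are illustrative only, not the paper's benchmark setup.

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sum(x ** 2)                   # toy objective (minimize)

n, dims, beta0, gamma, alpha = 20, 2, 1.0, 1.0, 0.05
X = rng.uniform(-5, 5, (n, dims))              # initial firefly positions

for _ in range(200):
    I = np.array([f(x) for x in X])            # brightness: lower cost = brighter
    for i in range(n):
        for j in range(n):
            if I[j] < I[i]:                    # j is brighter, so i moves toward j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # distance-decayed attraction
                X[i] = X[i] + beta * (X[j] - X[i]) \
                       + alpha * rng.uniform(-0.5, 0.5, dims)  # random walk

best = min(X, key=f)
print("best found:", np.round(best, 4), "cost:", round(f(best), 6))
```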
Major bleeding risk among non‐valvular atrial fibrillation patients initiated on apixaban, dabigatran, rivaroxaban or warfarin: a “real‐world” observational study in the United States | BACKGROUND
Limited data are available about the real-world safety of non-vitamin K antagonist oral anticoagulants (NOACs).
OBJECTIVES
To compare the major bleeding risk among newly anticoagulated non-valvular atrial fibrillation (NVAF) patients initiating apixaban, warfarin, dabigatran or rivaroxaban in the United States.
METHODS AND RESULTS
A retrospective cohort study was conducted to compare the major bleeding risk among newly anticoagulated NVAF patients initiating warfarin, apixaban, dabigatran or rivaroxaban. The study used the Truven MarketScan® Commercial & Medicare Supplemental US database from 1 January 2013 through 31 December 2013. Major bleeding was defined as bleeding requiring hospitalisation. Cox model estimated hazard ratios (HRs) of major bleeding were adjusted for age, gender, baseline comorbidities and co-medications. Among 29 338 newly anticoagulated NVAF patients, 2402 (8.19%) were on apixaban; 4173 (14.22%) on dabigatran; 10 050 (34.26%) on rivaroxaban; and 12 713 (43.33%) on warfarin. After adjusting for baseline characteristics, initiation on warfarin [adjusted HR (aHR): 1.93, 95% confidence interval (CI): 1.12-3.33, P=.018] or rivaroxaban (aHR: 2.19, 95% CI: 1.26-3.79, P=.005) had significantly greater risk of major bleeding vs apixaban. Dabigatran initiation (aHR: 1.71, 95% CI: 0.94-3.10, P=.079) had a non-significant major bleeding risk vs apixaban. When compared with warfarin, apixaban (aHR: 0.52, 95% CI: 0.30-0.89, P=.018) had significantly lower major bleeding risk. Patients initiating rivaroxaban (aHR: 1.13, 95% CI: 0.91-1.41, P=.262) or dabigatran (aHR: 0.88, 95% CI: 0.64-1.21, P=.446) had a non-significant major bleeding risk vs warfarin.
CONCLUSION
Among newly anticoagulated NVAF patients in the real-world setting, initiation with rivaroxaban or warfarin was associated with a significantly greater risk of major bleeding compared with initiation on apixaban. When compared with warfarin, initiation with apixaban was associated with significantly lower risk of major bleeding. Additional observational studies are required to confirm these findings. |
Efficient discovery of overlapping communities in massive networks. | Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks. |
The State-of-the-Art in Predictive Visual Analytics | Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics. |
Roger Griffin, Modernism and Fascism. The Sense of a Beginning under Mussolini and Hitler | With Modernism and Fascism, Roger Griffin, already the author of several important essays and books on the international phenomenon of fascism, proposes a reading of fascist culture in a "modernist" key, tracing the numerous "modernist" expressions of fascism and Nazism. The book is divided into two parts, the first rather methodological in nature, the second more concrete and "applied" (on the presence of modernism in Italian fascism and German Nazism). |
GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework | Although deep neural networks (DNNs) are a revolutionary power opening up the AI era, their notoriously huge hardware overhead has challenged their applications. Recently, several binary and ternary networks, in which the costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, have made the on-chip training of DNNs quite promising. Therefore, there is a pressing need for an architecture that subsumes these networks under a unified framework that achieves both higher performance and less overhead. To this end, two fundamental issues are yet to be addressed. The first is how to implement back propagation when neuronal activations are discrete. The second is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory/computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable implementing the back-propagation algorithm on discrete DNNs. For the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes binary and ternary networks as special cases, and under which a heuristic algorithm is provided at the website https://github.com/AcrossV/Gated-XNOR. More particularly, we find that when both the weights and activations become ternary values, the DNNs can be reduced to sparse binary networks, termed gated XNOR networks (GXNOR-Nets), since only the event of a non-zero weight and a non-zero activation enables the control gate to start the XNOR logic operations in the original binary networks. This promises event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity and the number of states in the discrete space can be flexibly modified to make it suitable for various hardware platforms. |
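The gating idea can be seen in a few lines: with weights and activations ternarized to {-1, 0, +1}, a computation "fires" only when both operands are non-zero, and the surviving products are XNOR-style operations on {-1, +1}. The threshold below is invented, and the paper's discrete state transition training is not reproduced.

```python
import numpy as np

def ternarize(x, delta=0.3):
    """Map values to {-1, 0, +1}: zero inside the dead zone |x| <= delta."""
    return np.sign(x) * (np.abs(x) > delta)

rng = np.random.default_rng(5)
w = ternarize(rng.normal(0, 1, (4, 8)))        # ternary weights (4 neurons, 8 inputs)
a = ternarize(rng.normal(0, 1, 8))             # ternary activations

gate = (w != 0) & (a != 0)                     # events that trigger XNOR logic
# Where gated, the product of two {-1, +1} values is an XNOR-style operation;
# everywhere else no computation needs to happen at all (event-driven sparsity).
out = np.where(gate, w * a, 0).sum(axis=1)
print("active ops:", gate.sum(), "of", gate.size, "| output:", out)
```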
Evaluating GPGPU Memory Performance Through the C-AMAT Model | General Purpose Graphics Processing Units (GPGPUs) have become a popular platform to accelerate high performance applications. Although they provide exceptional computing power, GPGPUs impose significant pressure on the off-chip memory system. Evaluating, understanding, and improving GPGPU data access delay has become an important research topic in high-performance computing. In this study, we utilize the newly proposed GPGPU/C-AMAT (Concurrent Average Memory Access Time) model to quantitatively evaluate GPGPU memory performance. Specifically, we extend the current C-AMAT model to include a GPGPU-specific modeling component and then provide its evaluation results. |
Problem Based Learning: An Instructional Model and Its Constructivist Framework. | It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism, identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment: Problem Based Learning, as described by Barrows (1985, 1986, 1992). |
A Survey on Facial Expression Recognition Techniques | Human facial expressions convey a lot of information visually rather than articulately. Facial expression recognition plays a crucial role in the area of human-machine interaction. Recognition of facial expressions by computers with a high recognition rate is still a challenging task. Facial expression recognition is usually performed in three stages consisting of face detection, feature extraction, and expression classification. This paper presents a survey of the current work done in the field of facial expression recognition techniques, covering the various face detection, feature extraction and classification methods used and their performance. |
Generalization in Reinforcement Learning: Safely Approximating the Value Function | A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust and, in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization. |
Thermal-infrared based drivable region detection | Drivable region detection is challenging since various types of road, occlusion or poor illumination conditions have to be considered in an outdoor environment, particularly at night. In the past decade, many efforts have been made to solve these problems; however, most existing methods are designed for visible-light cameras, which are inherently inefficient under low-light conditions. In this paper, we present a drivable region detection algorithm designed for thermal-infrared cameras in order to overcome the aforementioned problems. The novelty of the proposed method lies in the utilization of on-line road initialization with a highly scene-adaptive sampling mask. Furthermore, our prior road information extraction is tailored to enforce temporal consistency among a series of images. In this paper, we also present a large number of experiments in various scenarios (on-road, off-road and cluttered road). A total of about 6000 manually annotated images are made available on our website for the research community. Using this dataset, we compared our method against multiple state-of-the-art approaches, including convolutional neural network (CNN) based methods, to emphasize the robustness of our approach under challenging situations. |
ISO-TimeML Event Extraction in Persian Text | Recognizing TimeML events and identifying their attributes are important tasks in natural language processing (NLP). Several NLP applications like question answering, information retrieval, summarization, and temporal information extraction need to have some knowledge about the events in the input documents. Existing methods developed for this task are restricted to a limited number of languages, and for many other languages, including Persian, there has not been any effort yet. In this paper, we introduce two different approaches for automatic event recognition and classification in Persian. For this purpose, a corpus of events has been built based on a specific version of ISO-TimeML for Persian. We present the specification of this corpus together with the results of applying the mentioned approaches to the corpus. Considering that these methods are the first effort towards Persian event extraction, the results are comparable to those of successful methods in English. |
Effect of selected ions from lyotropic series on lipid oxidation rate | Organic or inorganic salts, commonly present in foods as natural components or ingredients, can affect the hydrophobic/hydrophilic interactions among food components. In particular, by modifying the physicochemical equilibrium of the medium, the ionic species forming salts can affect the kinetics of chemical reactions occurring in foods. The aim of the present research was to study the influence of different ionic species from the lyotropic series on the kinetics of lipid oxidation. For this purpose, salts containing antichaotropic (carbonate and acetate), neutral (Na+, K+) and chaotropic (Cl−) ions were added to soybean oil. Results indicate that potassium carbonate and potassium acetate exhibit a strong antioxidant capacity, whereas no effect was detected for NaCl and KCl. The antioxidant activity of the salts was attributed mainly to the antichaotropic anionic species present in the medium, which can interact with hydroperoxides by virtue of their ability to form hydrogen bonds. These results appear to be of considerable interest for controlling the development of rancidity in foods.
Fitts' law as an explicit time/error trade-off | The widely-held view that Fitts' law expresses a speed/accuracy trade-off is presumably correct, but it is vague. We outline a simple resource-allocation theory of Fitts' law in which movement time and error trade for each other. The theory accounts quite accurately for the data of Fitts' (1954) seminal study, as well as some fresh data of our own. In both data sets we found the time/error trade-off to obey a power law. Our data, which we could analyze more thoroughly than Fitts', are consistent with a square-root function with a single adjustable constant. We suggest that the resource-allocation framework should help combine information and energy considerations to allow a more complete account of Fitts' law.
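For reference, Fitts' law predicts movement time $MT$ from target distance $D$ and width $W$ in its classic form

\[ MT = a + b\,\log_2\!\left(\frac{2D}{W}\right), \]

and the time/error trade-off the abstract reports can be written (a hedged reading, not the authors' exact notation) as a power law $\varepsilon(T) \propto T^{-\alpha}$; the square-root function with a single adjustable constant corresponds to $\alpha = \tfrac{1}{2}$, i.e. $\varepsilon(T) = k/\sqrt{T}$.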
LOF: Identifying Density-Based Local Outliers | For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful but cannot otherwise be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms that our approach to finding local outliers can be practical.
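A from-scratch sketch of the LOF definitions (k-distance, reachability distance, local reachability density, and their ratio); the toy data and the choice of k are illustrative, not from the paper.

```python
import numpy as np

def lof(X, k=3):
    # Pairwise distances, with self-distances excluded.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]           # indices of the k nearest neighbours
    kdist = D[np.arange(len(X)), knn[:, -1]]     # k-distance of each point
    # reach_dist(p, o) = max(k-distance(o), d(p, o))
    reach = np.maximum(kdist[knn], D[np.arange(len(X))[:, None], knn])
    lrd = k / reach.sum(axis=1)                  # local reachability density
    return lrd[knn].mean(axis=1) / lrd           # LOF: neighbours' lrd vs. own lrd

X = np.array([[0., 0], [0, 1], [1, 0], [1, 1], [5, 5]])  # last point is isolated
print(lof(X).round(2))  # points in the cluster score near 1; the outlier scores >> 1
```

The key property the abstract emphasizes falls out directly: LOF is a ratio of densities, so it flags points that are sparse relative to *their own* neighbourhood rather than relative to a global threshold.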
Motion signal processing | Techniques from the image and signal processing domain can be successfully applied to designing, modifying, and adapting animated motion. For this purpose, we introduce multiresolution motion filtering, multitarget motion interpolation with dynamic timewarping, waveshaping and motion displacement mapping. The techniques are well-suited for reuse and adaptation of existing motion data such as joint angles, joint coordinates or higher level motion parameters of articulated figures with many degrees of freedom. Existing motions can be modified and combined interactively and at a higher level of abstraction than conventional systems support. This general approach is thus complementary to keyframing, motion capture, and procedural animation. |
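A minimal sketch of one of the listed techniques, motion displacement mapping, as the abstract describes it: an existing joint-angle track is edited by adding a smooth displacement that vanishes outside the edited interval, preserving the captured motion's detail. The bump shape and magnitudes are illustrative assumptions.

```python
import numpy as np

t = np.linspace(0, 1, 200)
angle = 30 * np.sin(2 * np.pi * 2 * t)        # a captured joint-angle track (degrees)

# Raised-cosine displacement: pushes the pose +15 degrees around t = 0.5 and is
# exactly zero outside |t - 0.5| < 0.2, so the rest of the motion is untouched.
bump = np.where(np.abs(t - 0.5) < 0.2,
                15 * 0.5 * (1 + np.cos(np.pi * (t - 0.5) / 0.2)), 0.0)
edited = angle + bump                          # displacement-mapped motion
```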
Smart health: A context-aware health paradigm within smart cities | The new era of mobile health ushered in by the wide adoption of ubiquitous computing and mobile communications has brought opportunities for governments and companies to rethink their concept of healthcare. Simultaneously, the worldwide urbanization process represents a formidable challenge and attracts attention toward cities that are expected to gather higher populations and provide citizens with services in an efficient and human manner. These two trends have led to the appearance of mobile health and smart cities. In this article we introduce the new concept of smart health, which is the context-aware complement of mobile health within smart cities. We provide an overview of the main fields of knowledge that are involved in the process of building this new concept. Additionally, we discuss the main challenges and opportunities that s-Health would imply and provide a common ground for further research. |
The benefits of constructing leads from fragment hits. | 'Fragments' refer to particularly small molecular starting points in medicinal chemistry. The small size of fragments requires adapted techniques for their screening and subsequent elaboration. The detection of the weak binding affinity of fragments for their target, and associated screening issues, have been debated at length. Since it is now clear that fragments can be developed into clinical candidates, the discussion is shifting to the design of good-quality lead compounds from fragment hits. The increasing ability to control and tailor this construction process highlights the potential benefits of fragment-based drug discovery. |
Citrus fruit and cancer risk in a network of case–control studies | Citrus fruit has shown a favorable effect against various cancers. To better understand their role in cancer risk, we analyzed data from a series of case–control studies conducted in Italy and Switzerland. The studies included 955 patients with oral and pharyngeal cancer, 395 with esophageal, 999 with stomach, 3,634 with large bowel, 527 with laryngeal, 2,900 with breast, 454 with endometrial, 1,031 with ovarian, 1,294 with prostate, and 767 with renal cell cancer. All cancers were incident and histologically confirmed. Controls were admitted to the same network of hospitals for acute, nonneoplastic conditions. Odds ratios (OR) were estimated by multiple logistic regression models, including terms for major identified confounding factors for each cancer site, and energy intake. The ORs for the highest versus lowest category of citrus fruit consumption were 0.47 (95% confidence interval, CI, 0.36–0.61) for oral and pharyngeal, 0.42 (95% CI, 0.25–0.70) for esophageal, 0.69 (95% CI, 0.52–0.92) for stomach, 0.82 (95% CI, 0.72–0.93) for colorectal, and 0.55 (95% CI, 0.37–0.83) for laryngeal cancer. No consistent association was found with breast, endometrial, ovarian, prostate, and renal cell cancer. Our findings indicate that citrus fruit has a protective role against cancers of the digestive and upper respiratory tract. |
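As background on the machinery such case–control analyses rest on (the exact model specification below is standard epidemiological practice, assumed rather than quoted from the paper): with citrus-consumption category $x$ and confounders $\mathbf{z}$ (including energy intake), multiple logistic regression estimates

\[ \log\frac{P(\text{case})}{1-P(\text{case})} = \beta_0 + \beta_1 x + \boldsymbol{\gamma}^{\top}\mathbf{z}, \qquad \widehat{\mathrm{OR}} = e^{\hat{\beta}_1}, \]

so the reported ORs for the highest versus lowest consumption category are exponentiated regression coefficients, adjusted for the listed confounders.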
Haemodialysis: effects of acute decrease in preload on tissue Doppler imaging indices of systolic and diastolic function of the left and right ventricles. | AIMS
Conventional echocardiographic (ECHO) parameters of left ventricular (LV) and right ventricular (RV) systolic and diastolic function have been shown to be load-dependent; however, the impact of preload reduction on tissue Doppler (TD) parameters of LV and RV function is incompletely understood. The aim of this study was to examine the effect of acute preload reduction by haemodialysis (HD) on conventional (ECHO) and TD imaging (TDI) indices of systolic and diastolic function of the left and right ventricles.
METHODS AND RESULTS
Seventeen chronically uremic patients (age 31 +/- 10 years) without overt heart disease underwent conventional 2D and Doppler ECHO together with measurement of longitudinal mitral and tricuspid annular motion velocities. Fluid volume removed by HD was 2706 +/- 1047 cm³. Haemodialysis led to reductions in LV end-diastolic volume (P < 0.0001), end-systolic volume (P < 0.001), peak early (E wave) transmitral flow velocity (P = 0.0001), and the ratio of early to late Doppler velocities of diastolic mitral inflow (P = 0.021). For the LV, early diastolic (E′) TDI velocities and the ratio of early to late TDI diastolic velocities (E′/A′) decreased significantly after HD only on the septal side of the mitral annulus (P = 0.0001 and P = 0.009, respectively). In a subgroup of seven patients who sustained significantly larger fluid volume losses following HD, E′ and the E′/A′ ratio at the lateral side of the mitral annulus also decreased, suggesting a greater resistance of the lateral annulus to preload changes. Systolic velocities decreased after HD on both sides of the mitral annulus (septal 6.90 +/- 1.10 vs. 5.97 +/- 1.48 cm/s, P = 0.006; lateral 8.68 +/- 2.67 vs. 6.94 +/- 1.52 cm/s, P = 0.011). For the RV, systolic tricuspid annular velocities decreased (13.45 +/- 1.47 vs. 11.73 +/- 1.90 cm/s, P = 0.002) together with early diastolic velocities after HD (13.95 +/- 2.90 vs. 10.62 +/- 2.45 cm/s, P = 0.0001). Both systolic and early diastolic tricuspid annular velocities correlated directly with fluid removal (P < 0.01).
CONCLUSION
This study shows that both systolic and diastolic TDI velocities of the LV and RV are preload-dependent. However, the lateral mitral annulus is more resistant to preload changes than either the septal mitral annulus or the lateral tricuspid annulus. |
The image of India as a Travel Destination and the attitude of viewers towards Indian TV Dramas | For a few decades now, various television stations in Indonesia have been broadcasting foreign drama series, including those from a range of Asian countries such as Korea, India, Turkey, Thailand and the Philippines. This study aims to explore attitudes towards Asian dramas and the destination image of the country from which a drama emanates, as perceived by audiences. The study applied a mixed-methodology approach in order to explore, in particular, attitudes towards foreign television drama productions. There is a paucity of studies exploring the attitudes of audiences towards Indian television dramas, and only limited work focussing on the image of India as a preferred travel destination. Data were collected using an online instrument and participants were selected as a convenience sample. Attitude towards foreign television dramas was measured using items adapted from the qualitative study results, whereas destination image was measured with an existing scale. This study found that the attitudes of audiences towards Indian dramas and their image of India were each unidimensional (one factor). The study also found that attitude towards Indian dramas had a significant impact on the image of India as a travel destination, and vice versa. Recommendations for future study and tourism marketing are discussed.
Continuous and Cuffless Blood Pressure Monitoring Based on ECG and SpO2 Signals by Using Microsoft Visual C Sharp | BACKGROUND
One of the main problems, especially with operating-room and monitoring devices, is the measurement of blood pressure (BP) by sphygmomanometer cuff.
OBJECTIVE
In this study we designed a new method to measure BP changes continuously, recovering information between cuff inflation times by using the vital signals available from monitoring devices. This is achieved by extracting the time difference between each cardiac cycle and the corresponding pulse wave.
METHODS
Finger pulse and ECG signals in lead I were recorded by a monitoring device. The output of the monitoring device was fed into a computer over a serial network connection. A software interface (Microsoft Visual C#.NET) was used to display and process the signals on the computer. The time difference between each cardiac cycle and the pulse signal was calculated by the software through R-wave detection in the ECG and peak detection in the pulse signal. The relation between the time difference of the two waves and BP was determined, and the coefficients of the equation were obtained in different physical situations. The BP estimates were compared with the results of the sphygmomanometer method and the error rate was calculated.
RESULTS
In this study, 25 subjects participated, of whom 15 were male and 10 were female. The results showed that BP was linearly related to the time difference. The average correlation coefficient was 0.9±0.03 for systolic and 0.82±0.04 for diastolic blood pressure. The highest error percentage was 8% for the male group and 11% for the female group. Significant differences were observed between the different physical situations and arm-movement changes. The relationship between time difference and age was estimated as linear, with a correlation coefficient of 0.76.
CONCLUSION
By determining the linear-relation coefficients with high accuracy, BP can be measured with insignificant error. It can therefore be suggested as a new method for measuring blood pressure continuously.
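A hedged sketch of the abstract's core idea: measure the delay between each ECG R wave and the following pulse peak, then fit a per-subject linear model against cuff readings. The threshold-based peak picking, signal names, and calibration numbers below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

fs = 500  # sampling rate (Hz), assumed

def peak_indices(x, thresh):
    # Local maxima above a threshold -- a crude stand-in for real QRS/pulse detectors.
    return np.where((x[1:-1] > thresh) & (x[1:-1] >= x[:-2]) & (x[1:-1] > x[2:]))[0] + 1

def pulse_transit_times(ecg, pulse):
    r = peak_indices(ecg, ecg.mean() + 2 * ecg.std())
    p = peak_indices(pulse, pulse.mean())
    # Pair each R wave with the first pulse peak that follows it.
    return np.array([(p[p > ri][0] - ri) / fs for ri in r if (p > ri).any()])

# Calibration step: fit BP = a * PTT + b against simultaneous cuff measurements.
ptt = np.array([0.21, 0.23, 0.26, 0.28])   # illustrative transit times (s)
sbp = np.array([128., 124., 118., 114.])   # illustrative cuff systolic readings (mmHg)
a, b = np.polyfit(ptt, sbp, 1)
print(f"SBP ≈ {a:.1f} * PTT + {b:.1f}")
```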
A Validation Study of a Smartphone-Based Finger Tapping Application for Quantitative Assessment of Bradykinesia in Parkinson’s Disease | BACKGROUND
Most studies of smartphone-based assessment of motor symptoms in Parkinson's disease (PD) have focused on gait, tremor or speech. Studies evaluating bradykinesia using wearable sensors have been limited by small cohort sizes and study design. We developed an application named smartphone tapper (SmT) to determine its applicability for clinical purposes and compared SmT parameters to current standard methods in a larger cohort.
METHODS
A total of 57 PD patients and 87 controls examined with motor UPDRS underwent timed tapping tests (TT) using SmT and mechanical tappers (MeT) according to CAPSIT-PD. Subjects were asked to alternately tap each side of two rectangles with an index finger at maximum speed for ten seconds. Kinematic measurements were compared between the two groups.
RESULTS
The mean number of correct tapping (MCoT), mean total distance of finger movement (T-Dist), mean inter-tap distance, and mean inter-tap dwelling time (IT-DwT) were significantly different between PD patients and controls. MCoT, as assessed using SmT, significantly correlated with motor UPDRS scores, bradykinesia subscores and MCoT using MeT. Multivariate analysis using the SmT parameters, such as T-Dist or IT-DwT, as predictive variables and age and gender as covariates demonstrated that PD patients were discriminated from controls. ROC curve analysis of a regression model demonstrated that the AUC for T-Dist was 0.92 (95% CI 0.88-0.96).
CONCLUSION
Our results suggest that a smartphone tapping application is comparable to conventional methods for the assessment of motor dysfunction in PD and may be useful in clinical practice. |
Community Aware Random Walk for Network Embedding | Social network analysis provides meaningful information about the behavior of network members that can be used for diverse applications such as classification and link prediction. However, network analysis is computationally expensive because features must be learned anew for each application. In recent years, much research has focused on feature-learning methods for social networks. Network embedding represents the network in a lower-dimensional space while preserving its properties, yielding a compressed representation of the network. In this paper, we introduce a novel embedding algorithm named "CARE" that can be used for different types of networks, including weighted, directed and complex ones. Current methods try to preserve the local neighborhood information of nodes, whereas the proposed method utilizes both the local neighborhood and the community information of network nodes to cover both the local and global structure of social networks. CARE builds customized paths, which combine the local and global structure around network nodes, as a basis for network embedding, and uses the Skip-gram model to learn representation vectors of nodes. Subsequently, stochastic gradient descent is applied to optimize our objective function and learn the final representation of nodes. Our method scales when new nodes are appended to the network, without information loss. Parallelized generation of the customized random walks is also used to speed up CARE. We evaluate the performance of CARE on multi-label classification and link prediction tasks. Experimental results on various networks indicate that the proposed method outperforms others in both Micro- and Macro-F1 measures for different sizes of training data.
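A sketch of the idea as the abstract describes it, not CARE's exact procedure: random walks that occasionally jump to another member of the walker's community (mixing local neighbourhood steps with global community structure), fed to Skip-gram. The jump probability `p_comm`, walk lengths, and the community detector are illustrative assumptions.

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from gensim.models import Word2Vec

G = nx.karate_club_graph()
comm_of = {v: i for i, c in enumerate(greedy_modularity_communities(G)) for v in c}
members = {}
for v, c in comm_of.items():
    members.setdefault(c, []).append(v)

def walk(G, start, length=20, p_comm=0.2):
    path = [start]
    while len(path) < length:
        cur = path[-1]
        if random.random() < p_comm:              # community-aware jump (global structure)
            path.append(random.choice(members[comm_of[cur]]))
        else:                                     # ordinary neighbourhood step (local structure)
            path.append(random.choice(list(G.neighbors(cur))))
    return [str(v) for v in path]

walks = [walk(G, v) for v in G for _ in range(10)]
model = Word2Vec(walks, vector_size=32, window=5, sg=1, min_count=0, epochs=5)
print(model.wv[str(0)][:5])                       # learned representation of node 0
```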
Online Multi-Task Learning Using Active Sampling | One of the long-standing challenges in Artificial Intelligence for goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential tasks has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multitask learning problem, they require supervision from large task-specific (expert) networks which require extensive training. We propose a simple yet efficient multi-task learning framework which solves multiple goal-directed tasks in an online or active learning setup without the need for expert supervision. |
The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis | Norden E. Huang, Zheng Shen, Steven R. Long, Manli C. Wu, Hsing H. Shih, Quanan Zheng, Nai-Chyuan Yen, Chi Chao Tung and Henry H. Liu, Proc. R. Soc. Lond. A 454 (1998), doi: 10.1098/rspa.1998.0193.
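No abstract survived extraction for this entry, so as orientation here is a compact sketch of one round of EMD "sifting": interpolate the signal's maxima and minima with cubic splines and subtract the mean envelope, iterating until the result approximates an intrinsic mode function. Stopping criteria and end-point handling are simplified assumptions relative to the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=10):
    h = x.copy()
    for _ in range(n_iter):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 2 or len(mn) < 2:
            break                                # too few extrema to build envelopes
        upper = CubicSpline(t[mx], h[mx])(t)     # upper envelope through maxima
        lower = CubicSpline(t[mn], h[mn])(t)     # lower envelope through minima
        h = h - (upper + lower) / 2              # remove the local mean envelope
    return h                                     # candidate intrinsic mode function

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imf1 = sift(x, t)   # should approximately recover the fast 40 Hz component
```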
Tablet formulation containing meloxicam and β-cyclodextrin: Mechanical characterization and bioavailability evaluation | The purpose of this research was to evaluate β-cyclodextrin (β-CD) as a vehicle, either singly or in blends with lactose (spray-dried or monohydrate), for preparing a meloxicam tablet. The aqueous solubility of meloxicam in the presence of β-CD was investigated. The tablets were prepared by direct compression and wet granulation techniques. The powder blends and the granules were evaluated for angle of repose, bulk density, compressibility index, total porosity, and drug content. The tablets were subjected to thickness, diameter, weight variation, drug content, hardness, friability, disintegration time, and in vitro dissolution studies. The effect of β-CD on the bioavailability of meloxicam was also investigated in human volunteers using a balanced 2-way crossover study. Phase-solubility studies indicated an AL-type diagram with an inclusion complex of 1:1 molar ratio. The powder blends and granules of all formulations showed satisfactory flow properties, compressibility, and drug content. All tablet formulations prepared by direct compression or wet granulation showed acceptable mechanical properties. The dissolution rate of meloxicam was significantly enhanced, by up to 30%, by inclusion of β-CD in the formulations. The mean pharmacokinetic parameters (Cmax, Ke, and area under the curve [AUC]0−∞) were significantly increased in the presence of β-CD. These results suggest that β-CD facilitates the preparation of meloxicam tablets with acceptable mechanical properties using the direct compression technique, as there was no meaningful difference between tablets prepared by direct compression and those prepared by wet granulation. Also, β-CD is particularly useful for improving the oral bioavailability of meloxicam.
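For context, an AL-type phase-solubility diagram is conventionally summarised by the apparent 1:1 stability constant (Higuchi–Connors); the abstract does not state this formula, so it is given here as standard background:

\[ K_{1:1} = \frac{\text{slope}}{S_0\,(1 - \text{slope})}, \]

where the slope comes from the linear plot of drug solubility against β-CD concentration and $S_0$ is the drug's intrinsic solubility (the intercept).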
DAS-driven therapy versus routine care in patients with recent-onset active rheumatoid arthritis. | OBJECTIVES
To compare the efficacy of Disease Activity Score (DAS)-driven therapy and routine care in patients with recent-onset rheumatoid arthritis.
METHODS
Patients with recent-onset rheumatoid arthritis receiving traditional antirheumatic therapy from either the BeSt study, a randomised controlled trial comparing different treatment strategies (group A), or two Early Arthritis Clinics (group B) were included. In group A, systematic DAS-driven treatment adjustments aimed to achieve low disease activity (DAS ≤ 2.4). In group B, treatment was left to the discretion of the treating doctor. Functional ability (Health Assessment Questionnaire, HAQ), Disease Activity Score in 28 joints (DAS28) and Sharp/van der Heijde radiographic score (SHS) were evaluated.
RESULTS
At baseline, patients in group A (n = 234) and group B (n = 201) had comparable demographic characteristics and a mean HAQ of 1.4. Group A had a longer median disease duration than group B (0.5 vs 0.4 years, p = 0.016), a higher mean DAS28 (6.1 vs 5.7, p<0.001), more rheumatoid factor-positive patients (66% vs 42%, p<0.001) and more patients with erosions (71% vs 53%, p<0.001). After 1 year, the HAQ improvement was 0.7 vs 0.5 (p = 0.029), and the percentage in remission (DAS28 <2.6) 31% vs 18% (p<0.005) in groups A and B, respectively. In group A, the median SHS progression was 2.0 (expected progression 7.0), in group B, the SHS progression was 1.0 (expected progression 4.4).
CONCLUSIONS
In patients with recent-onset rheumatoid arthritis receiving traditional treatment, systematic DAS-driven therapy results in significantly better clinical improvement and possibly improves the suppression of joint damage progression. |
Fast Edge-Based Detection and Localization of Transport Boxes and Pallets in RGB-D Images for Mobile Robot Bin Picking | Mobile manipulation tasks in shopfloor logistics require robots to grasp objects from various transport containers such as boxes and pallets. In this paper, we present an efficient processing pipeline that detects and localizes boxes and pallets in RGB-D images. Our method is based on edges in both the color image and the depth image and uses a RANSAC approach for reliably localizing the detected containers. Experiments show that the proposed method reliably detects and localizes both container types while guaranteeing low processing times. |
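A minimal sketch of the RANSAC step such a pipeline might use: robustly fit a plane (e.g. a pallet's top face) to 3D points back-projected from the depth image. This is generic RANSAC plane fitting under assumed thresholds and iteration counts, not the authors' edge-based localizer.

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.01, rng=np.random.default_rng(0)):
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:
            continue                              # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        dist = np.abs(pts @ n + d)                # point-to-plane distances
        inliers = np.where(dist < tol)[0]
        if len(inliers) > len(best_inliers):      # keep the hypothesis with most support
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Noisy points near the z = 0 plane as a stand-in for back-projected depth pixels.
pts = np.c_[np.random.rand(500, 2), 0.002 * np.random.randn(500)]
model, inliers = ransac_plane(pts)
```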
THE STRUCTURED EMPLOYMENT INTERVIEW: NARRATIVE AND QUANTITATIVE REVIEW OF THE RESEARCH LITERATURE | In the 20 years since frameworks of employment interview structure were developed, a considerable body of empirical research has accumulated. We summarize and critically examine this literature by focusing on the 8 main topics that have received attention: (a) the definition of structure; (b) reducing bias through structure; (c) impression management in structured interviews; (d) measuring personality via structured interviews; (e) comparing situational versus past-behavior questions; (f) developing rating scales; (g) probing, follow-up, prompting, and elaboration on questions; and (h) reactions to structure. For each topic, we review and critique research and identify promising directions for future research. Where possible, we augment the traditional narrative review with meta-analytic review and content analysis. We conclude that much is known about structured interviews, but there are still many unanswered questions. We provide 12 propositions and 19 research questions to stimulate further research on this important topic.
Hypothesis generation guided by co-word clustering | Co-word analysis was applied to keywords assigned to MEDLINE documents contained in sets of complementary but disjoint literatures. In strategical diagrams of disjoint literatures, based on the internal density and external centrality of keyword-containing clusters, intermediate terms (linking the disjoint partners) were found in regions of below-median centrality and density. Terms representing the disjoint literature themes were found in close vicinity in strategical diagrams of intermediate literatures. Based on centrality-density ratios, characteristic values were found which allow rapid identification of clusters containing possible intermediate and disjoint partner terms. Applied to the already investigated disjoint pairs Raynaud's Disease - Fish Oil and Migraine - Magnesium, the method readily detected known and unknown (but relevant) intermediate and disjoint partner terms. Application of the method to the literature on Prions identified Manganese as a possible disjoint partner term. It is concluded that co-word clustering is a powerful method for literature-based hypothesis generation and knowledge discovery.
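A sketch of the co-word bookkeeping the abstract relies on: build a keyword co-occurrence graph, cluster it, and score each cluster by internal density and external centrality, the two axes of a strategical diagram. The clustering algorithm and toy documents below are illustrative assumptions, not the paper's exact procedure.

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy MEDLINE-style documents, each reduced to its assigned keywords.
docs = [{"migraine", "magnesium", "spreading depression"},
        {"magnesium", "calcium channel", "spreading depression"},
        {"migraine", "serotonin"}]

G = nx.Graph()
for kw in docs:
    for a, b in itertools.combinations(sorted(kw), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)            # co-occurrence counts as edge weights

for cluster in greedy_modularity_communities(G, weight="weight"):
    inside = G.subgraph(cluster)
    density = sum(d["weight"] for *_, d in inside.edges(data=True))   # internal links
    centrality = sum(d["weight"] for u, v, d in G.edges(cluster, data=True)
                     if not (u in cluster and v in cluster))           # external links
    print(sorted(cluster), density, centrality)
```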