Evidence for corticofugal modulation of peripheral auditory activity in humans.
Active cochlear micromechanisms, involved in auditory sensitivity, are modulated by the medial olivocochlear efferent system, which projects directly onto the organ of Corti. Both processes can be assessed non-invasively by means of evoked otoacoustic emissions. Animal experiments have revealed top-down control from the auditory cortex to the peripheral auditory receptor, supported by anatomical descriptions of descending auditory pathways from auditory areas to the medial olivocochlear efferent system and organ of Corti. By recording evoked otoacoustic emissions during presurgical functional brain mapping for refractory epilepsy, we showed that corticofugal modulation of peripheral auditory activity also exists in humans. In 10 epileptic patients, electrical stimulation of the contralateral auditory cortex led to a significant decrease in evoked otoacoustic emission amplitude, whereas no change occurred under stimulation of non-auditory contralateral areas. These findings provide evidence of a cortico-olivocochlear pathway in humans, originating in the auditory cortex and modulating contralateral active cochlear micromechanisms via the medial olivocochlear efferent system.
Learning Extractors from Unlabeled Text using Relevant Databases
Supervised machine learning algorithms for information extraction generally require large amounts of training data. In many cases where labeling training data is burdensome, there may, however, already exist an incomplete database relevant to the task at hand. Records from this database can be used to label text strings that express the same information. For tasks where text strings do not follow the same format or layout, and additionally may contain extra information, labeling the strings completely may be problematic. This paper presents a method for training extractors which fill in missing labels of a text sequence that is partially labeled using simple high-precision heuristics. Furthermore, we improve the algorithm by utilizing labeled fields from the database. In experiments with BibTeX records and research paper citation strings, we show a significant improvement in extraction accuracy over a baseline that only relies on the database for training data.
A reduced model for interactive hairs
Realistic hair animation is a crucial component in depicting virtual characters in interactive applications. While much progress has been made in high-quality hair simulation, the overwhelming computation cost hinders similar fidelity in real-time simulations. To bridge this gap, we propose a data-driven solution. Building upon precomputed simulation data, our approach constructs a reduced model that optimally represents hair motion characteristics with a small number of guide hairs and the corresponding interpolation relationships. At runtime, using this reduced model, we simulate only the guide hairs, which capture the general hair motion, and interpolate all remaining strands. We further propose a hair correction method that corrects the resulting hair motion with a position-based model to resolve hair collisions and thus capture motion details. Our hair simulation method enables simulation of a full head of hair with over 150K strands in real time. We demonstrate the efficacy and robustness of our method with various hairstyles and driving motions (e.g., head movement and wind force), and compare against full simulation results that do not appear in the training data.
Fish oil for the reduction of atrial fibrillation recurrence, inflammation, and oxidative stress.
BACKGROUND Recent trials of fish oil for the prevention of atrial fibrillation (AF) recurrence have provided mixed results. Notable uncertainties in the existing evidence base include the roles of high-dose fish oil, inflammation, and oxidative stress in patients with paroxysmal or persistent AF not receiving conventional antiarrhythmic (AA) therapy. OBJECTIVES The aim of this study was to evaluate the influence of high-dose fish oil on AF recurrence, inflammation, and oxidative stress parameters. METHODS We performed a double-blind, randomized, placebo-controlled, parallel-arm study in 337 patients with symptomatic paroxysmal or persistent AF within 6 months of enrollment. Patients were randomized to fish oil (4 g/day) or placebo and followed, on average, for 271 ± 129 days. RESULTS The primary endpoint was time to first symptomatic or asymptomatic AF recurrence lasting >30 s. Secondary endpoints were high-sensitivity C-reactive protein (hs-CRP) and myeloperoxidase (MPO). The primary endpoint occurred in 64.1% of patients in the fish oil arm and 63.2% of patients in the placebo arm (hazard ratio: 1.10; 95% confidence interval: 0.84 to 1.45; p = 0.48). hs-CRP and MPO were within normal limits at baseline and decreased to a similar degree at 6 months (Δhs-CRP, 11% vs. -11%; ΔMPO, -5% vs. -9% for fish oil vs. placebo, respectively; p value for interaction = NS). CONCLUSIONS High-dose fish oil does not reduce AF recurrence in patients with a history of AF not receiving conventional AA therapy. Furthermore, fish oil does not reduce inflammation or oxidative stress markers in this population, which may explain its lack of efficacy. (Multi-center Study to Evaluate the Effect of N-3 Fatty Acids [OMEGA-3] on Arrhythmia Recurrence in Atrial Fibrillation [AFFORD]; NCT01235130).
Detection and Classification of Hyper-Spectral Edges
Intensity-based edge detectors cannot distinguish whether an edge is caused by material changes, shadows, surface orientation changes or by highlights. Therefore, our aim is to classify the physical cause of an edge using hyper-spectra obtained by a spectrograph. Methods are presented to detect edges in hyper-spectral images. In theory, the effect of varying imaging conditions is analyzed for "raw" hyper-spectra, for normalized hyper-spectra, and for hue computed from hyper-spectra. From this analysis, an edge classifier is derived which distinguishes hyper-spectral edges into the following types: (1) a shadow or geometry edge, (2) a highlight edge, (3) a material edge.
More than bin packing: Dynamic resource allocation strategies in cloud data centers
Resource allocation strategies in virtualized data centers have received considerable attention recently, as they can have a substantial impact on the energy efficiency of a data center. This has led to new decision and control strategies with significant managerial impact for IT service providers. We focus on dynamic environments where virtual machines need to be placed upon arrival; placement heuristics have been analyzed and used for this purpose. However, these placement heuristics can lead to suboptimal server utilization, because they cannot consider virtual machines that arrive in the future. We ran extensive lab experiments and simulations with different controllers and different workloads to understand which control strategies achieve high levels of energy efficiency in different workload environments. We found that combinations of placement controllers and periodic reallocations achieve the highest energy efficiency subject to predefined service levels. While the type of placement heuristic had little impact on the average server demand, the type of virtual machine resource demand estimator used for the placement decisions had a significant impact on the overall energy efficiency.
The Kaldi Speech Recognition Toolkit
We describe the design of Kaldi, a free, open-source toolkit for speech recognition research. Kaldi provides a speech recognition system based on finite-state transducers (using the freely available OpenFst), together with detailed documentation and scripts for building complete recognition systems. Kaldi is written in C++, and the core library supports modeling of arbitrary phonetic-context sizes, acoustic modeling with subspace Gaussian mixture models (SGMM) as well as standard Gaussian mixture models, together with all commonly used linear and affine transforms. Kaldi is released under the Apache License v2.0, which is highly nonrestrictive, making it suitable for a wide community of users.
Personalized Recommendation for Online Social Networks Information: Personal Preferences and Location-Based Community Trends
Microblogs, such as Twitter, are a way for users to express their opinions or share pieces of interesting news by posting relatively short messages (corpus) compared with the regular blogs. The volume of corpus updates that users receive daily is overwhelming. Also, as information diffuses from one user to another, some topics become of interest to only small groups of users, thus do not become widely adopted, and could fade away quickly. This paper proposes a framework to enhance user’s interaction and experience in social networks. It first introduces a model that provides better subscription to the user through a dynamic personalized recommendation system that provides the user with the most important tweets. This paper also presents TrendFusion, an innovative model used to enhance the suggestions provided by the social media to the users. It analyzes, predicts the localized diffusion of trends in social networks, and recommends the most interesting trends to the user. Our performance evaluation demonstrates the effectiveness of the proposed recommendation system and shows that it improves the precision and recall of identifying important tweets by up to 36% and 80%, respectively. Results also show that TrendFusion accurately predicts places in which a trend will appear, with 98% recall and 80% precision.
Using Geometry to Detect Grasp Poses in 3D Point Clouds
This paper proposes a new approach to using machine learning to detect grasp poses on novel objects presented in clutter. The input to our algorithm is a point cloud and the geometric parameters of the robot hand. The output is a set of hand poses that are expected to be good grasps. There are two main contributions. First, we identify a set of necessary conditions on the geometry of a grasp that can be used to generate a set of grasp hypotheses. This helps focus grasp detection away from regions where no grasp can exist. Second, we show how geometric grasp conditions can be used to generate labeled datasets for the purpose of training the machine learning algorithm. This enables us to generate large amounts of training data and it grounds our training labels in grasp mechanics. Overall, our method achieves an average grasp success rate of 88% when grasping novel objects presented in isolation and an average success rate of 73% when grasping novel objects presented in dense clutter. This system is available as a ROS package at http://wiki.ros.org/agile_grasp.
Beneficial effects of a switch to a Lopinavir/ritonavir-containing regimen for patients with partial or no immune reconstitution with highly active antiretroviral therapy despite complete viral suppression.
The purpose of this study was to determine if switching to a Lopinavir/ritonavir (LPV/r)-containing regimen resulted in greater immune reconstitution in patients with immunologic failure despite complete viral suppression with highly active antiretroviral therapy (HAART). Twenty patients with partial or no immune response to HAART despite viral suppression were enrolled. Ten were randomized to stay on their current regimen and 10 were randomized to LPV/r plus their current NRTI backbone. T cell subsets, ex vivo apoptosis, and the percent of circulating cells with detectable intracellular HIV-1 RNA were measured. The mean increase in CD4(+) count at 6 months was 116/mm(3) (172-288) for the LPV/r-containing arm versus 32/mm(3) (264-296) for the continuation regimens (p = 0.03). The number of patients with an increase ≥50 cells/mm(3) was also greater in the LPV/r arm (7/9 versus 2/10, p = 0.01). This paralleled a decrease in ex vivo apoptosis of naive CD4(+) T cells at 6 months (21.7-11.0% for the LPV/r arm versus 17.3-18.9% for the continuation arm, p = 0.04) and memory cells (21.1-14.1% for LPV/r versus 20.2-17.9% for the continuation arm, NSS). Switching patients to an LPV/r-containing regimen improved CD4(+) counts in patients with prior immunologic failure, and this may be due to an effect of LPV/r on apoptosis.
Consumer trust in B2C e-Commerce and the importance of social presence: experiments in e-Products and e-Services
Reducing social uncertainty—understanding, predicting, and controlling the behavior of other people—is a central motivating force of human behavior. When rules and customs are not sufficient, people rely on trust and familiarity as primary mechanisms to reduce social uncertainty. The relative paucity of regulations and customs on the Internet makes consumer familiarity and trust especially important in the case of e-Commerce. Yet the lack of an interpersonal exchange and the one-time nature of the typical business transaction on the Internet make this kind of consumer trust unique, because trust relates to other people and is nourished through interactions with them. This study validates a four-dimensional scale of trust in the context of e-Products and revalidates it in the context of e-Services. The study then shows the influence of social presence on these dimensions of this trust, especially benevolence, and its ultimate contribution to online purchase intentions.
Classifying Dialogue Acts in One-on-One Live Chats
We explore the task of automatically classifying dialogue acts in 1-on-1 online chat forums, an increasingly popular means of providing customer service. In particular, we investigate the effectiveness of various features and machine learners for this task. While a simple bag-of-words approach provides a solid baseline, we find that adding information from dialogue structure and inter-utterance dependency provides some increase in performance; learners that account for sequential dependencies (CRFs) show the best performance. We report our results from testing using a corpus of chat dialogues derived from online shopping customer-feedback data.
UML Modeling of User and Database Interaction
In this paper, we will present a design technique for user and database interaction based on UML. User interaction will be modeled by means of UML state diagrams, and database interaction by means of UML sequence diagrams. The proposed design technique establishes how to integrate both diagrams in order to describe the user interface and database interaction of a business software system. A case study of an Internet Book Shopping system will be shown to illustrate the proposal.
A Multi-tenant Architecture for Business Process Executions
Cloud computing, as a concept, promises cost savings to end-users by letting them outsource their non-critical business functions to a third party in pay-as-you-go style. However, to enable economical pay-as-you-go services, we need Cloud middleware that maximizes sharing and supports near-zero costs for unused applications. Multi-tenancy, which lets multiple tenants (users) share a single application instance securely, is a key enabler for building such middleware. On the other hand, business processes capture the business logic of organizations in an abstract and reusable manner, and hence play a key role in most organizations. This paper presents the design and architecture of a multi-tenant workflow engine and discusses in detail potential use cases of such an architecture. The primary contributions of this paper are motivating workflow multi-tenancy, and the design and implementation of a multi-tenant workflow engine that enables multiple tenants to run their workflows securely within the same workflow engine instance without modifications to the workflows.
The heel pad in plantar heel pain.
A study of heel-pad thickness and compressibility using lateral radiographs, loaded and unloaded by body-weight, was carried out on 70 patients with plantar heel pain and 200 normal subjects. The heel-pad thickness and the compressibility index (resistance to compression) were greater in the patients than in normal subjects and significantly increased with age. In normal subjects, the thickness was greater in males than in females, but there was no significant difference in the compressibility. Increased weight led to an increase in heel-pad thickness and compressibility index. The body mass index was greater in patients with plantar heel pain than in normal subjects and 40% of the patients were considered to be overweight. Increase in the compressibility index indicates loss of elasticity and an increased tendency to develop plantar heel pain.
Characteristics and anticorrosion performance of Fe-doped TiO2 films by liquid phase deposition method
Fe-doped TiO2 thin films were fabricated by the liquid phase deposition (LPD) method, using Fe(III) nitrate as both the Fe source and the fluoride scavenger instead of the commonly used boric acid (H3BO3). Scanning electron microscopy (SEM), X-ray diffraction (XRD), and UV–vis spectroscopy were employed to examine the effects of Fe on the morphology, structure and optical characteristics of the TiO2 films. The as-prepared films served as photoanodes for photogenerated cathodic protection of SUS304 stainless steel (304SS). The photoelectrochemical properties of the as-prepared films were enhanced by the addition of Fe compared to the undoped TiO2 film. The highest photoactivity was achieved for the Ti13Fe (Fe/Ti = 3 molar ratio) film prepared in a precursor bath containing 0.02 M TiF4 + 0.06 M Fe(NO3)3 under white-light illumination. The effective anticorrosion behavior can be attributed to the incorporated Fe, which decreases the probability of photogenerated charge-carrier recombination and extends the light response range of the Fe-doped TiO2 films to the visible-light region.
Semiparametric analysis for recurrent event data with time-dependent covariates and informative censoring.
Recurrent event data analyses are usually conducted under the assumption that the censoring time is independent of the recurrent event process. In many applications the censoring time can be informative about the underlying recurrent event process, especially in situations where a correlated failure event could potentially terminate the observation of recurrent events. In this article, we consider a semiparametric model of recurrent event data that allows correlations between censoring times and recurrent event process via frailty. This flexible framework incorporates both time-dependent and time-independent covariates in the formulation, while leaving the distributions of frailty and censoring times unspecified. We propose a novel semiparametric inference procedure that depends on neither the frailty nor the censoring time distribution. Large sample properties of the regression parameter estimates and the estimated baseline cumulative intensity functions are studied. Numerical studies demonstrate that the proposed methodology performs well for realistic sample sizes. An analysis of hospitalization data for patients in an AIDS cohort study is presented to illustrate the proposed method.
A critical appraisal of logistic regression-based nomograms, artificial neural networks, classification and regression-tree models, look-up tables and risk-group stratification models for prostate cancer.
OBJECTIVE To evaluate several methods of predicting prostate cancer-related outcomes, i.e. nomograms, look-up tables, artificial neural networks (ANN), classification and regression tree (CART) analyses and risk-group stratification (RGS) models, all of which represent valid alternatives. METHODS We present four direct comparisons, where a nomogram was compared to either an ANN, a look-up table, a CART model or a RGS model. In all comparisons we assessed the predictive accuracy and performance characteristics of both models. RESULTS Nomograms have several advantages over ANN, look-up tables, CART and RGS models, the most fundamental being higher predictive accuracy and better performance characteristics. CONCLUSION These results suggest that nomograms are more accurate and have better performance characteristics than their alternatives. However, ANN, look-up tables, CART analyses and RGS models remain methodologically sound and valid alternatives, which should not be abandoned.
InterPro in 2017—beyond protein family and domain annotations
InterPro (http://www.ebi.ac.uk/interpro/) is a freely available database used to classify protein sequences into families and to predict the presence of important domains and sites. InterProScan is the underlying software that allows both protein and nucleic acid sequences to be searched against InterPro's predictive models, which are provided by its member databases. Here, we report recent developments with InterPro and its associated software, including the addition of two new databases (SFLD and CDD), and the functionality to include residue-level annotation and prediction of intrinsic disorder. These developments enrich the annotations provided by InterPro, increase the overall number of residues annotated and allow more specific functional inferences.
A new NSF thrust: computer impact on society
The recently formed (Nov. 9, 1972) Computer Impact on Society Section in NSF's Office of Computing Activities reflected a growing need to understand the wide and deep impact which computers and associated information technology are having on our social organizations and way of life. The program has two principal thrusts, computer impact on organizations and computer impact on the individual, to be implemented via studies and demonstrations. As experience is gained and the program matures, research projects of considerable scope and import may be reasonably expected which will develop ways computer science and technology can be creatively applied to meeting individual and social needs.
Short-Term Load Forecasting Methods: An Evaluation Based on European Data
This paper uses intraday electricity demand data from ten European countries as the basis of an empirical comparison of univariate methods for prediction up to a day-ahead. A notable feature of the time series is the presence of both an intraweek and an intraday seasonal cycle. The forecasting methods considered in the study include: ARIMA modeling, periodic AR modeling, an extension for double seasonality of Holt-Winters exponential smoothing, a recently proposed alternative exponential smoothing formulation, and a method based on the principal component analysis (PCA) of the daily demand profiles. Our results show a similar ranking of methods across the 10 load series. The results were disappointing for the new alternative exponential smoothing method and for the periodic AR model. The ARIMA and PCA methods performed well, but the method that consistently performed the best was the double seasonal Holt-Winters exponential smoothing method.
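For readers who want the recursions behind the best-performing method, a stylized sketch of one common multiplicative double seasonal Holt-Winters formulation (after Taylor) is given below; the smoothing parameters and the intraday/intraweek period lengths are generic symbols, not values estimated in this study.

\ell_t = \alpha \frac{y_t}{d_{t-s_1} w_{t-s_2}} + (1-\alpha)(\ell_{t-1} + b_{t-1})    (level)
b_t = \beta (\ell_t - \ell_{t-1}) + (1-\beta) b_{t-1}    (trend)
d_t = \delta \frac{y_t}{\ell_t\, w_{t-s_2}} + (1-\delta) d_{t-s_1}    (intraday seasonal, period s_1)
w_t = \omega \frac{y_t}{\ell_t\, d_{t-s_1}} + (1-\omega) w_{t-s_2}    (intraweek seasonal, period s_2)
\hat{y}_{t+k|t} = (\ell_t + k\, b_t)\, d_{t-s_1+k}\, w_{t-s_2+k}, \quad k \le s_1,

where \alpha, \beta, \delta, \omega are smoothing parameters, s_1 and s_2 are the intraday and intraweek cycle lengths, and the one- to k-step-ahead forecast multiplies the level-plus-trend term by the two seasonal indices.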
Shaftesbury, Rousseau, and Kant : an introduction to the conflict between aesthetic and moral values in modern thought
Attempts to gain some historical perspective on the diverse modern conflicts between the moral and the aesthetic by examining the role of each in three major and widely influential thinkers of the 18th century: Shaftesbury, Rousseau, and Kant. Also examined are the traditions, which, in turn, influenced the philosophers both positively and negatively.
Calculating the similarity between words and sentences using a lexical database and corpus statistics
Calculating the semantic similarity between sentences is a long-standing problem in the area of natural language processing. The semantic analysis field has a crucial role to play in research related to text analytics. Semantic similarity differs as the domain of operation differs. In this paper, we present a methodology which deals with this issue by incorporating semantic similarity and corpus statistics. To calculate the semantic similarity between words and sentences, the proposed method follows an edge-based approach using a lexical database. The methodology can be applied in a variety of domains. It has been tested on both benchmark standards and a mean human similarity dataset. When tested on these two datasets, it gives the highest correlation values for both word and sentence similarity, outperforming other similar models. For word similarity, we obtained a Pearson correlation coefficient of 0.8753, and for sentence similarity, the correlation obtained is 0.8794.
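As a minimal, illustrative sketch (not the authors' exact formula, and omitting the corpus-statistics weighting the paper combines with the lexical database), an edge-based word similarity over WordNet and a simple greedy sentence matcher could look like this in Python:

# Sketch: edge-based (path-length) word similarity from a lexical database
# (WordNet via NLTK), extended to sentences by greedy best-pair matching.
# Requires the WordNet corpus: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def word_similarity(w1, w2):
    """Best path-based similarity over all synset pairs (0.0 if no path)."""
    best = 0.0
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

def sentence_similarity(sent1, sent2):
    """Average, over words of one sentence, of the best match in the other (symmetrized)."""
    t1, t2 = sent1.lower().split(), sent2.lower().split()
    def directed(a, b):
        scores = [max(word_similarity(w, v) for v in b) for w in a]
        return sum(scores) / len(scores)
    return 0.5 * (directed(t1, t2) + directed(t2, t1))

print(word_similarity("car", "automobile"))                  # high: shared WordNet synset
print(sentence_similarity("a dog barks", "the hound howls"))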
An omnidirectional mobile robot: Concept and analysis
This paper presents a novel omnidirectional wheel mechanism, referred to as MY wheel-II, based on a sliced ball structure. The wheel consists of two balls of equal diameter on a common shaft and both balls are sliced into four spherical crowns. The two sets of spherical crowns are mounted at 45° from each other to produce a combined circular profile. Compared with previous MY wheel mechanism, this improved wheel mechanism not only is more insensitive to fragments and irregularities on the floor but also has a higher payload capacity. A kinematic model of a three-wheeled prototype platform is also derived, and the problem of wheel angular velocity fluctuations caused by the specific mechanical structure is studied. The optimal scale factor (OSF) leading to a minimum of trajectory error is adopted to solve this problem. The factors influencing the OSF are investigated through simulation. In addition, the methods used for determining the OSF are discussed briefly.
On the Quality of the Initial Basin in Overspecified Neural Networks
Deep learning, in the form of artificial neural networks, has achieved remarkable practical success in recent years, for a variety of difficult machine learning applications. However, a theoretical explanation for this remains a major open problem, since training neural networks involves optimizing a highly non-convex objective function, and is known to be computationally hard in the worst case. In this work, we study the geometric structure of the associated non-convex objective function, in the context of ReLU networks and starting from a random initialization of the network parameters. We identify some conditions under which it becomes more favorable to optimization, in the sense of (i) High probability of initializing at a point from which there is a monotonically decreasing path to a global minimum; and (ii) High probability of initializing at a basin (suitably defined) with a small minimal objective value. A common theme in our results is that such properties are more likely to hold for larger (“overspecified”) networks, which accords with some recent empirical and theoretical observations.
Hate Speech on Twitter: A Pragmatic Approach to Collect Hateful and Offensive Expressions and Perform Hate Speech Detection
With the rapid growth of social networks and microblogging websites, communication between people from different cultural and psychological backgrounds has become more direct, resulting in more and more “cyber” conflicts between these people. Consequently, hate speech is used more and more, to the point where it has become a serious problem invading these open spaces. Hate speech refers to the use of aggressive, violent or offensive language, targeting a specific group of people sharing a common property, whether this property is their gender (i.e., sexism), their ethnic group or race (i.e., racism) or their beliefs and religion. While most online social networks and microblogging websites forbid the use of hate speech, the size of these networks and websites makes it almost impossible to control all of their content. Therefore, the necessity arises to detect such speech automatically and filter any content that presents hateful language or language inciting hatred. In this paper, we propose an approach to detect hate expressions on Twitter. Our approach is based on unigrams and patterns that are automatically collected from the training set. These patterns and unigrams are later used, among others, as features to train a machine learning algorithm. Our experiments on a test set composed of 2010 tweets show that our approach reaches an accuracy equal to 87.4% on detecting whether a tweet is offensive or not (binary classification), and an accuracy equal to 78.4% on detecting whether a tweet is hateful, offensive, or clean (ternary classification).
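A minimal unigram-feature baseline of the kind this approach builds on could be sketched with scikit-learn as below; the automatically collected pattern features and the actual Twitter corpus are not reproduced here, and the example tweets and labels are invented for illustration.

# Sketch: unigram bag-of-words features + a linear SVM for the ternary
# hateful/offensive/clean task (toy data, not the paper's corpus or feature set).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_tweets = [
    "you are all wonderful people",
    "group X should all disappear",
    "what an idiot you are",
]
train_labels = ["clean", "hateful", "offensive"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 1), lowercase=True),  # unigram features only
    LinearSVC(),
)
clf.fit(train_tweets, train_labels)
print(clf.predict(["group X ruins everything"]))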
Railway track settlements-a literature review
Wisteria: Nurturing Scalable Data Cleaning Infrastructure
Analysts report spending upwards of 80% of their time on problems in data cleaning. The data cleaning process is inherently iterative, with evolving cleaning workflows that start with basic exploratory data analysis on small samples of dirty data, then refine analysis with more sophisticated/expensive cleaning operators (i.e., crowdsourcing), and finally apply the insights to a full dataset. While an analyst often knows at a logical level what operations need to be done, they often have to manage a large search space of physical operators and parameters. We present Wisteria, a system designed to support the iterative development and optimization of data cleaning workflows, especially ones that utilize the crowd. Wisteria separates logical operations from physical implementations, and driven by analyst feedback, suggests optimizations and/or replacements to the analyst’s choice of physical implementation. We highlight research challenges in sampling, in-flight operator replacement, and crowdsourcing. We overview the system architecture and these techniques, then propose a demonstration designed to showcase how Wisteria can improve iterative data analysis and cleaning. The code is available at: http://www.sampleclean.org.
Projective dynamics: fusing constraint projections for fast simulation
We present a new method for implicit time integration of physical systems. Our approach builds a bridge between nodal Finite Element methods and Position Based Dynamics, leading to a simple, efficient, robust, yet accurate solver that supports many different types of constraints. We propose specially designed energy potentials that can be solved efficiently using an alternating optimization approach. Inspired by continuum mechanics, we derive a set of continuum-based potentials that can be efficiently incorporated within our solver. We demonstrate the generality and robustness of our approach in many different applications ranging from the simulation of solids, cloths, and shells, to example-based simulation. Comparisons to Newton-based and Position Based Dynamics solvers highlight the benefits of our formulation.
The impact of online store environment cues on purchase intention: Trust and perceived risk as a mediator
Purpose – The purpose of this paper is to investigate whether online environment cues (web site quality and web site brand) affect customer purchase intention towards an online retailer and whether this impact is mediated by customer trust and perceived risk. The study also aimed to assess the degree of reciprocity between consumers’ trust and perceived risk in the context of an online shopping environment. Design/methodology/approach – The study proposed a research framework for testing the relationships among the constructs based on the stimulus-organism-response framework. In addition, this study developed a non-recursive model. After the validation of measurement scales, empirical analyses were performed using structural equation modelling. Findings – The findings confirm that web site quality and web site brand affect consumers’ trust and perceived risk, and in turn, consumer purchase intention. Notably, this study finds that the web site brand is a more important cue than web site quality in influencing customers’ purchase intention. Furthermore, the study reveals that the relationship between trust and perceived risk is reciprocal. Research limitations/implications – This study adopted four dimensions – technical adequacy, content quality, specific content and appearance – to measure web site quality. However, there are still many competing concepts regarding the measurement of web site quality. Further studies using other dimensional measures may be needed to verify the research model. Practical implications – Online retailers should focus their marketing strategies more on establishing the brand of the web site rather than improving the functionality of the web site. Originality/value – This study proposed a non-recursive model for empirically analysing the link between web site quality, web site brand, trust, perceived risk and purchase intention towards the online retailer.
Social temporal collaborative ranking for context aware movie recommendation
Most existing collaborative filtering models only consider the use of user feedback (e.g., ratings) and meta data (e.g., content, demographics). However, in most real world recommender systems, context information, such as time and social networks, are also very important factors that could be considered in order to produce more accurate recommendations. In this work, we address several challenges for the context aware movie recommendation tasks in CAMRa 2010: (1) how to combine multiple heterogeneous forms of user feedback? (2) how to cope with dynamic user and item characteristics? (3) how to capture and utilize social connections among users? For the first challenge, we propose a novel ranking based matrix factorization model to aggregate explicit and implicit user feedback. For the second challenge, we extend this model to a sequential matrix factorization model to enable time-aware parametrization. Finally, we introduce a network regularization function to constrain user parameters based on social connections. To the best of our knowledge, this is the first study that investigates the collective modeling of social and temporal dynamics. Experiments on the CAMRa 2010 dataset demonstrated clear improvements over many baselines.
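As a rough illustration of this family of models (a generic sketch, not the authors' exact CAMRa objective), a ranking-based matrix factorization with a social regularizer can be written as

\min_{U,V} \; \sum_{(u,i,j)} \ell\big( U_u^{\top} V_i - U_u^{\top} V_j \big) \;+\; \lambda \big( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \big) \;+\; \beta \sum_{u} \sum_{v \in N(u)} \lVert U_u - U_v \rVert^2 ,

where the triples (u, i, j) encode that user u prefers item i over item j (built from explicit and implicit feedback), \ell is a pairwise ranking loss such as the logistic loss, and N(u) is the set of user u's social connections; a time-aware extension indexes the user factors U_u by time period.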
Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solutions for Nonconvex Distributed Optimization
In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solutions (ss2) with probability one. This is the first result showing that a primal-dual algorithm is capable of finding ss2 when using only first-order information; it also extends the existing results for first-order, but primal-only, algorithms. An important implication of our result is that it also gives rise to the first global convergence result to ss2 for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.
A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture
Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, and the classification and fine counting method based on Support Vector Machines (SVM) using global features, are designed. Finally, the insect counting and recognition system is implemented on a Raspberry Pi. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with the conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and the average classification accuracy is 90.18% on the Raspberry Pi. The proposed system is easy to use and provides efficient and accurate recognition data; therefore, it can be used for intelligent agriculture applications.
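A high-level sketch of the detect-then-classify pipeline is given below; the detector and feature_fn arguments are hypothetical placeholders for a trained YOLO model and the hand-crafted global feature extractor, and only the scikit-learn SVC calls are real APIs.

# Sketch of the two-stage pipeline: coarse count from detections, fine count
# from per-crop SVM classification. Detector and feature extractor are placeholders.
from sklearn.svm import SVC

def train_species_svm(train_features, train_species_labels):
    """Train the fine-classification SVM offline on labelled insect crops."""
    return SVC(kernel="rbf").fit(train_features, train_species_labels)

def count_and_classify(image, detector, feature_fn, svm):
    """Coarse count = number of detections; fine count = per-species tally."""
    boxes = detector(image)                     # e.g. YOLO bounding boxes on the sticky trap
    coarse_count = len(boxes)
    per_species = {}
    for (y0, y1, x0, x1) in boxes:
        crop = image[y0:y1, x0:x1]
        feats = feature_fn(crop)                # global color/texture/shape features
        species = svm.predict([feats])[0]
        per_species[species] = per_species.get(species, 0) + 1
    return coarse_count, per_species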
Internet Gaming Addiction: A Systematic Review of Empirical Research
The activity of play has been ever present in human history and the Internet has emerged as a playground increasingly populated by gamers. Research suggests that a minority of Internet game players experience symptoms traditionally associated with substance-related addictions, including mood modification, tolerance and salience. Because the current scientific knowledge of Internet gaming addiction is copious in scope and appears relatively complex, this literature review attempts to reduce this confusion by providing an innovative framework by which all the studies to date can be categorized. A total of 58 empirical studies were included in this literature review. Using the current empirical knowledge, it is argued that Internet gaming addiction follows a continuum, with antecedents in etiology and risk factors, through to the development of a “full-blown” addiction, followed by ramifications in terms of negative consequences and potential treatment. The results are evaluated in light of the emergent discrepancies in findings, and the consequent implications for future research.
Behaviorism, Cognitivism, Constructivism: Comparing Critical Features From an Instructional Design Perspective
The need for a bridge between basic learning research and educational practice has long been discussed. To ensure a strong connection between these two areas, Dewey (cited in Reigeluth, 1983) called for the creation and development of a “linking science”; Tyler (1978) a “middleman position”; and Lynch (1945) for employing an “engineering analogy” as an aid for translating theory into practice. In each case, the respective author highlighted the information and potential contributions of available learning theories, the pressing problems faced by those dealing with practical learning issues, and a general lack of using the former to facilitate solutions for the latter. The value of such a bridging function would be its ability to translate relevant aspects of the learning theories into optimal instructional actions. As described by Reigeluth (1983), the field of Instructional Design performs this role. Instructional designers have been charged with “translating principles of learning and instruction into specifications for instructional materials and activities” (Smith & Ragan, 1993, p. 12). To achieve this goal, two sets of skills and knowledge are needed. First, the designer must understand the position of the practitioner. The way we define learning and what we believe about the way learning occurs has important implications for situations in which we want to facilitate changes in what people know and/or do. Learning theories provide instructional designers with verified instructional strategies and techniques for facilitating learning as well as a foundation for intelligent strategy selection. Yet many designers are operating under the constraints of a limited theoretical background. This paper is an attempt to familiarize designers with three relevant positions on learning (behavioral, cognitive, and constructivist) which provide structured foundations for planning and conducting instructional design activities. Each learning perspective is discussed in terms of its specific interpretation of the learning process and the resulting implications for instructional designers and educational practitioners. The information presented here provides the reader with a comparison of these three different viewpoints and illustrates how these differences might be translated into practical applications in instructional situations.
A New Way for Antihelixplasty in Prominent Ear Surgery: Modified Postauricular Fascial Flap.
BACKGROUND Otoplasty procedures aim to reduce the concha-mastoid angle and recreate the antihelical fold. Here, we describe the modified postauricular fascial flap as a new way of recreating the antihelical fold and report the results of patients on whom this flap was used. MATERIALS AND METHODS The described technique was used on 24 patients (10 females and 14 males; age, 6-27 years; mean, 16.7 years) between June 2009 and July 2012, for a total of 48 procedures (bilateral). Follow-up ranged from 1 to 3 years (mean, 1.5 years). At the preoperative and postoperative time points (1 and 12 months after surgery), all patients were measured for upper and middle helix-head distance and were photographed. The records were analyzed statistically using the t test and analysis of variance. RESULTS The procedure resulted in ears that were natural in appearance without any significant visible evidence of surgery. The operations resulted in no complications except in 1 patient, who developed a small skin ulcer on the left ear because of band pressure. When we compared the preoperative and postoperative upper and middle helix-head distances, the difference was highly statistically significant. CONCLUSIONS We used a simple and safe procedure, the modified postauricular fascial flap, to recreate the antihelical fold. This procedure led to several benefits, including a natural-appearing antihelical fold, prevention of suture extrusion and granuloma, and minimized risk of recurrence due to neochondrogenesis. This method may be used as a standard procedure for treating prominent ears surgically.
The relationship of nonalcoholic fatty liver disease and metabolic syndrome for colonoscopy colorectal neoplasm
Colorectal neoplasm is considered to have a strong association with nonalcoholic fatty liver disease (NAFLD) and metabolic syndrome (MetS), respectively. The relationship among NAFLD, MetS, and colorectal neoplasm was assessed in 1793 participants. Participants were divided into 4 groups based on the status of NAFLD and MetS. Relative excess risks of interaction (RERI), attributable proportion (AP), and synergy index (SI) were applied to evaluate the additive interaction. NAFLD and MetS were significantly correlated with colorectal neoplasm and colorectal cancer (CRC), respectively. The incidence of CRC in NAFLD (+) MetS (+) group was significantly higher than other 3 groups. The result of RERI, AP, and SI indicated the significant additive interaction of NAFLD and MetS on the development of CRC. NAFLD and MetS are risk factors for colorectal neoplasm and CRC, respectively. And NAFLD and MetS have an additive effect on the development of CRC.
The boron transporter BnaC4.BOR1;1c is critical for inflorescence development and fertility under boron limitation in Brassica napus.
Boron (B) is an essential micronutrient for plants, but the molecular mechanisms underlying the uptake and distribution of B in allotetraploid rapeseed (Brassica napus) are unclear. Here, we identified a B transporter of rapeseed, BnaC4.BOR1;1c, which is expressed in shoot nodes and involved in distributing B to the reproductive organs. Transgenic Arabidopsis plants containing a BnaC4.BOR1;1c promoter-driven GUS reporter gene showed strong GUS activity in roots, nodal regions of the shoots and immature floral buds. Overexpressing BnaC4.BOR1;1c in Arabidopsis wild type or in bor1-1 mutants promoted wild-type growth and rescued the bor1-1 mutant phenotype. Conversely, knockdown of BnaC4.BOR1;1c in a B-efficient rapeseed line reduced B accumulation in flower organs, eventually resulting in severe sterility and seed yield loss. BnaC4.BOR1;1c RNAi plants exhibited large amounts of disintegrated stigma papilla cells with thickened cell walls accompanied by abnormal proliferation of lignification under low-B conditions, indicating that the sterility may be a result of altered cell wall properties in flower organs. Taken together, our results demonstrate that BnaC4.BOR1;1c is an AtBOR1-homologous B transporter gene expressed in both roots and shoot nodes that is essential for developing inflorescence tissues, which highlights its diverse functions in allotetraploid rapeseed compared with the diploid model plant Arabidopsis.
The Burden of Large and Small Duct Primary Sclerosing Cholangitis in Adults and Children: A Population-Based Analysis
OBJECTIVES: The epidemiology of primary sclerosing cholangitis (PSC) has been incompletely assessed by population-based studies. We therefore conducted a population-based study to determine: (a) incidence rates of large and small duct PSC in adults and children, (b) the risk of developing PSC in inflammatory bowel disease, and (c) patterns of clinical presentation with the advent of magnetic resonance cholangiopancreatography (MRCP). METHODS: All residents of the Calgary Health Region diagnosed with PSC between 2000 and 2005 were identified by medical records, endoscopic, diagnostic imaging, and pathology databases. Demographic and clinical information were obtained. Incidence rates were determined and risks associated with PSC were reported as rate ratios (RR) with 95% confidence intervals (CI). RESULTS: Forty-nine PSC patients were identified for an age- and gender-adjusted annual incidence rate of 0.92 cases per 100,000 person-years. The incidence of small duct PSC was 0.15/100,000. In children the incidence rate was 0.23/100,000 compared with 1.11/100,000 in adults. PSC risk was similar in Crohn's disease (CD; RR 220.0, 95% CI 132.4–343.7) and ulcerative colitis (UC; RR 212.4, 95% CI 116.1–356.5). Autoimmune hepatitis overlap was noted in 10% of cases. MRCP diagnosed large duct PSC in one-third of cases. Delay in diagnosis was common (median 8.4 months). A minority had complications at diagnosis: cholangitis (6.1%), pancreatitis (4.1%), and cirrhosis (4.1%). CONCLUSIONS: Pediatric cases and small duct PSC are less common than adult large duct PSC. Surprisingly, the risk of developing PSC in UC and CD was similar. Autoimmune hepatitis overlap was noted in a significant minority of cases.
Inverse Rendering of Faces with a 3D Morphable Model
In this paper, we present a complete framework to inverse render faces with a 3D Morphable Model (3DMM). By decomposing the image formation process into geometric and photometric parts, we are able to state the problem as a multilinear system which can be solved accurately and efficiently. As we treat each contribution as independent, the objective function is convex in the parameters and a global solution is guaranteed. We start by recovering 3D shape using a novel algorithm which incorporates generalization error of the model obtained from empirical measurements. We then describe two methods to recover facial texture, diffuse lighting, specular reflectance, and camera properties from a single image. The methods make increasingly weak assumptions and can be solved in a linear fashion. We evaluate our findings on a publicly available database, where we are able to outperform an existing state-of-the-art algorithm. We demonstrate the usability of the recovered parameters in a recognition experiment conducted on the CMU-PIE database.
Citation Author Topic Model in Expert Search
This paper proposes a novel topic model, the Citation-Author-Topic (CAT) model, that addresses a semantic search task we define as expert search – given a research area as a query, it returns names of experts in this area. For example, Michael Collins would be one of the top names retrieved given the query Syntactic Parsing. Our contribution in this paper is two-fold. First, we model the cited author information together with words and paper authors. Such extra contextual information directly models linkage among authors and enhances the author-topic association, thus producing a more coherent author-topic distribution. Second, we provide a preliminary solution to the task of expert search when the learning repository contains exclusively research-related documents authored by the experts. When compared with a previously proposed model (Johri et al., 2010), the proposed model produces high-quality author-topic linkage and achieves over 33% error reduction evaluated by the standard MAP measurement.
Measuring changeability for generic aspect-oriented systems
Maintenance of software systems has become a major concern for software developers and users. In environments where software changes are frequently required to improve software quality, changeability is an important characteristic of maintainability in the ISO/IEC 9126 quality standards. Many researchers and practitioners have proposed changeability assessment techniques for Object-Oriented Programming (OOP) and Aspect-Oriented Programming (AOP). To the best of our knowledge, no one has proposed a changeability assessment technique for generic Aspect-Oriented (AO) systems. AOP is an emerging technique that provides a means to clearly encapsulate and implement aspects that crosscut other modules. In this paper, we define a generic changeability assessment technique that takes into account two well-known families of available AOP languages, viz., AspectJ and CaesarJ. A correlation analysis between changeability and dependency has been performed. Results show that highly dependent AO systems will exhibit low changeability.
Improving retention and graduate recruitment through immersive research experiences for undergraduates
Research experiences for undergraduates are considered an effective means for increasing student retention and encouraging undergraduate students to continue on to graduate school. However, managing a cohort of undergraduate researchers, with varying skill levels, can be daunting for faculty advisors. We have developed a program to engage students in research and outreach in visualization, virtual reality, networked robotics, and interactive games. Our program immerses students into the life of a lab, employing a situated learning approach that includes tiered mentoring and collaboration to enable students at all levels to contribute to research. Students work in research teams comprised of other undergraduates, graduate students and faculty, and participate in professional development and social gatherings within the larger cohort. Results from our first two years indicate this approach is manageable and effective for increasing students' ability and desire to conduct research.
Are specific language impairment and dyslexia distinct disorders?
PURPOSE The purpose of this study was to determine whether specific language impairment (SLI) and dyslexia are distinct developmental disorders. METHOD Study 1 investigated the overlap between SLI identified in kindergarten and dyslexia identified in 2nd, 4th, or 8th grades in a representative sample of 527 children. Study 2 examined phonological processing in a subsample of participants, including 21 children with dyslexia only, 43 children with SLI only, 18 children with SLI and dyslexia, and 165 children with typical language/reading development. Measures of phonological awareness and nonword repetition were considered. RESULTS Study 1 showed limited but statistically significant overlap between SLI and dyslexia. Study 2 found that children with dyslexia or a combination of dyslexia and SLI performed significantly less well on measures of phonological processing than did children with SLI only and those with typical development. Children with SLI only showed only mild deficits in phonological processing compared with typical children. CONCLUSIONS These results support the view that SLI and dyslexia are distinct but potentially comorbid developmental language disorders. A deficit in phonological processing is closely associated with dyslexia but not with SLI when it occurs in the absence of dyslexia.
On a Generalization of the Stone–Weierstrass Theorem
A categorical version of the famous theorem of Stone and Weierstrass is formulated and studied in detail. Several applications and examples are given.
Memristor-based XOR gate for full adder
Memristor technology is regarded as a potential solution to the memory bottleneck in the von Neumann architecture, as it allows storage and computation to be integrated in the same physical location. In this paper, we propose a nonvolatile exclusive-OR (XOR) logic gate with 5 memristors, which can execute its operation in a single step. Moreover, based on the XOR logic gate, a full adder is presented and simulated in SPICE. Compared to other logic gates and full adders, the proposed circuits have the benefits of simpler architecture, higher speed and lower power consumption. This paper provides a memristor-based element as a solution for the future alternative Computation-In-Memory architecture.
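At the Boolean level (abstracting away the memristor circuit itself), the role of the XOR gate in a full adder can be illustrated with a short sketch; this only shows the logic structure, not the memristor implementation.

# Logic-level view of a full adder built around two XOR stages.
def full_adder(a, b, cin):
    p = a ^ b                    # first XOR stage (propagate)
    s = p ^ cin                  # second XOR stage gives the sum bit
    cout = (a & b) | (p & cin)   # carry-out
    return s, cout

for bits in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(bits, "->", full_adder(*bits))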
A comparison of issues and advantages in agile and incremental development between state of the art and an industrial case
Recent empirical studies have been conducted identifying a number of issues and advantages of incremental and agile methods. However, the majority of studies focused on one model (Extreme Programming) and small projects. To draw more general conclusions we conduct a case study in large scale development identifying issues and advantages, and compare the results with previous empirical studies on the topic. The principle results are that 1) the case study and literature agree on the benefits while new issues arise when using agile in large-scale, and 2) an empirical research framework is needed to make agile studies comparable.
Low-dimensional procedure for the characterization of human faces.
A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.
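A compact sketch of the underlying idea, using principal components of vectorized images; synthetic random data stands in here for real face pictures, and the dimension k is an arbitrary illustrative choice.

# Sketch: represent each face image by a low-dimensional coefficient vector
# in the basis of the leading principal components ("eigenpictures").
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))             # 200 vectorized 64x64 images (stand-in data)

mean_face = faces.mean(axis=0)
X = faces - mean_face
U, S, Vt = np.linalg.svd(X, full_matrices=False)

k = 40                                          # dimension of the representation
basis = Vt[:k]                                  # leading eigenpictures
coeffs = X @ basis.T                            # k-dimensional code for each face
reconstruction = mean_face + coeffs @ basis     # approximation within an error bound
print(np.linalg.norm(faces - reconstruction) / np.linalg.norm(faces))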
The Spline GARCH Model for Unconditional Volatility and its Global Macroeconomic Causes
25 years of volatility research has left the macroeconomic environment playing a minor role. This paper proposes modeling equity volatilities as a combination of macroeconomic effects and time series dynamics. High frequency return volatility is specified to be the product of a slow moving deterministic component, represented by an exponential spline, and a unit GARCH. This deterministic component is the unconditional volatility, which is then estimated for nearly 50 countries over various sample periods of daily data. Unconditional volatility is then modeled as an unbalanced panel with a variety of dependence structures. It is found to vary over time and across countries with high unconditional volatility resulting from high volatility in the macroeconomic factors GDP, inflation and short term interest rate, and with high inflation and slow growth of output. Volatility is higher for emerging markets and for markets with small numbers of listed companies and market capitalization, but also for large economies. The model allows long horizon forecasts of volatility to depend on macroeconomic developments, and delivers estimates of the volatility to be anticipated in a newly opened market.
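In stylized form (notation simplified, given here only as a sketch of the specification rather than the paper's exact equations), the decomposition is:

r_t = \mu + \sqrt{\tau_t\, g_t}\, \varepsilon_t, \qquad
g_t = (1 - \alpha - \beta) + \alpha \frac{(r_{t-1} - \mu)^2}{\tau_{t-1}} + \beta\, g_{t-1}, \qquad
\tau_t = c \exp\!\Big( w_0 t + \sum_{i=1}^{k} w_i \big( (t - t_{i-1})_+ \big)^2 \Big),

where g_t is the mean-one (unit) GARCH component, \tau_t is the slow-moving exponential quadratic spline component interpreted as unconditional volatility, (x)_+ = \max(x, 0), and t_0 < t_1 < \dots < t_{k-1} are the spline knots.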
Relationships between handwriting performance and organizational abilities among children with and without dysgraphia: a preliminary study.
Organizational ability constitutes one executive function (EF) component essential for common everyday performance. The study aim was to explore the relationship between handwriting performance and organizational ability in school-aged children. Participants were 58 males, aged 7-8 years, 30 with dysgraphia and 28 with proficient handwriting. Group allocation was based on children's scores in the Handwriting Proficiency Screening Questionnaire (HPSQ). They performed the Hebrew Handwriting Evaluation (HHE), and their parents completed the Questionnaire for Assessing Students' Organizational Abilities-for Parents (QASOA-P). Significant differences were found between the groups for handwriting performance (HHE) and organizational abilities (QASOA-P). Significant correlations were found in the dysgraphic group between handwriting spatial arrangement and the QASOA-P mean score. Linear regression indicated that the QASOA-P mean score explained 42% of variance of handwriting proficiency (HPSQ). Based on one discriminant function, 81% of all participants were correctly classified into groups. Study results strongly recommend assessing organizational difficulties in children referred for therapy due to handwriting deficiency.
DepthLGP: Learning Embeddings of Out-of-Sample Nodes in Dynamic Networks
Network embedding algorithms to date are primarily designed for static networks, where all nodes are known before learning. How to infer embeddings for out-of-sample nodes, i.e. nodes that arrive after learning, remains an open problem. The problem poses great challenges to existing methods, since the inferred embeddings should preserve intricate network properties such as high-order proximity, share similar characteristics (i.e. be of a homogeneous space) with in-sample node embeddings, and be of low computational cost. To overcome these challenges, we propose a Deeply Transformed High-order Laplacian Gaussian Process (DepthLGP) method to infer embeddings for out-of-sample nodes. DepthLGP combines the strength of nonparametric probabilistic modeling and deep learning. In particular, we design a high-order Laplacian Gaussian process (hLGP) to encode network properties, which permits fast and scalable inference. In order to further ensure homogeneity, we then employ a deep neural network to learn a nonlinear transformation from latent states of the hLGP to node embeddings. DepthLGP is general, in that it is applicable to embeddings learned by any network embedding algorithms. We theoretically prove the expressive power of DepthLGP, and conduct extensive experiments on real-world networks. Empirical results demonstrate that our approach can achieve significant performance gain over existing approaches.
Agent Based Modelling and Simulation tools: A review of the state-of-art software
The key intent of this work is to present a comprehensive comparative literature survey of the state-of-the-art in software agent-based computing technology and its incorporation within the modelling and simulation domain. The original contribution of this survey is two-fold: (1) Present a concise characterization of almost the entire spectrum of agent-based modelling and simulation tools, thereby highlighting the salient features, merits, and shortcomings of such multi-faceted application software; this article covers eighty-five agent-based toolkits that may assist system designers and developers with common tasks, such as constructing agent-based models and portraying the real-time simulation outputs in tabular/graphical formats and visual recordings. (2) Provide a usable reference that aids engineers, researchers, learners and academicians in readily selecting an appropriate agent-based modelling and simulation toolkit for designing and developing their system models and prototypes, cognizant of both their expertise and the requirements of their application domain. In a nutshell, a significant synthesis of Agent Based Modelling and Simulation (ABMS) resources has been performed in this review, which should stimulate further investigation into this topic.
Advances in standardization of laboratory measurement procedures: implications for measuring biomarkers of folate and vitamin B-12 status in NHANES
Population studies such as NHANES analyze large numbers of laboratory measurements and are often performed in different laboratories using different measurement procedures and over an extended period of time. Correct clinical and epidemiologic interpretations of the results depend on the accuracy of those measurements. Unfortunately, considerable variability has been observed among assays for folate, vitamin B-12, and related biomarkers. In the past few decades, the science of metrology has advanced considerably, with the development of improved primary reference measurement procedures and high-level reference materials, which can serve as the basis for accurate measurement. A rigorous approach has been established for making field methods traceable to the highest-level reference measurement procedures and reference materials. This article reviews some basic principles of metrology and describes their recent application to measurements of folate and vitamin B-12.
Reduced-energy cranberry juice increases folic acid and adiponectin and reduces homocysteine and oxidative stress in patients with the metabolic syndrome.
The metabolic syndrome (MetS) comprises pathological conditions that include insulin resistance, arterial hypertension, visceral adiposity and dyslipidaemia, which favour the development of CVD. Some reports have shown that cranberry ingestion reduces cardiovascular risk factors. However, few studies have evaluated the effect of this fruit in subjects with the MetS. The objective of the present study was to assess the effect of reduced-energy cranberry juice consumption on metabolic and inflammatory biomarkers in patients with the MetS, and to verify the effects of cranberry juice concomitantly on homocysteine and adiponectin levels in patients with the MetS. For this purpose, fifty-six individuals with the MetS were selected and divided into two groups: control group (n = 36) and cranberry-treated group (n = 20). After consuming reduced-energy cranberry juice (0·7 litres/d) containing 0·4 mg folic acid for 60 d, the cranberry-treated group showed an increase in adiponectin (P=0·010) and folic acid (P=0·033) and a decrease in homocysteine (P<0·001) in relation to baseline values and also in comparison with the controls (P<0·05). There was no significant change in the pro-inflammatory cytokines TNF-α, IL-1 and IL-6. In relation to oxidative stress measurements, decreased (P<0·05) lipoperoxidation and protein oxidation levels assessed by advanced oxidation protein products were found in the cranberry-treated group when compared with the control group. In conclusion, the consumption of cranberry juice for 60 d was able to improve some cardiovascular risk factors. The present data reinforce the importance of the inverse association between homocysteine and adiponectin and the need for more specifically designed studies on MetS patients.
MineSweeper: An In-depth Look into Drive-by Cryptocurrency Mining and Its Defense
A wave of alternative coins that can be effectively mined without specialized hardware, and a surge in cryptocurrencies' market value has led to the development of cryptocurrency mining ( cryptomining ) services, such as Coinhive, which can be easily integrated into websites to monetize the computational power of their visitors. While legitimate website operators are exploring these services as an alternative to advertisements, they have also drawn the attention of cybercriminals: drive-by mining (also known as cryptojacking ) is a new web-based attack, in which an infected website secretly executes JavaScript code and/or a WebAssembly module in the user's browser to mine cryptocurrencies without her consent. In this paper, we perform a comprehensive analysis on Alexa's Top 1 Million websites to shed light on the prevalence and profitability of this attack. We study the websites affected by drive-by mining to understand the techniques being used to evade detection, and the latest web technologies being exploited to efficiently mine cryptocurrency. As a result of our study, which covers 28 Coinhive-like services that are widely being used by drive-by mining websites, we identified 20 active cryptomining campaigns. Motivated by our findings, we investigate possible countermeasures against this type of attack. We discuss how current blacklisting approaches and heuristics based on CPU usage are insufficient, and present MineSweeper, a novel detection technique that is based on the intrinsic characteristics of cryptomining code, and, thus, is resilient to obfuscation. Our approach could be integrated into browsers to warn users about silent cryptomining when visiting websites that do not ask for their consent.
Retinal Vessel Extraction Using First-Order Derivative of Gaussian and Morphological Processing
The change in morphology, diameter, branching pattern and/or tortuosity of retinal blood vessels is an important indicator of various clinical disorders of the eye and the body. This paper reports an automated method for segmentation of blood vessels in retinal images by means of a unique combination of differential filtering and morphological processing. The centerlines are extracted by the application of first order derivative of Gaussian in four orientations and then the evaluation of derivative signs and average derivative values is made. The shape and orientation map of the blood vessel is obtained by applying a multidirectional morphological top-hat operator followed by bit plane slicing of a vessel enhanced grayscale image. The centerlines are combined with these maps to obtain the segmented vessel tree. The approach is tested on two publicly available databases and results show that the proposed algorithm can obtain robust and accurate vessel tracings with a performance comparable to other leading systems.
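The pipeline in this abstract (directional first-order derivative-of-Gaussian filtering plus a multidirectional morphological top-hat) can be illustrated roughly as below. This is only an illustrative sketch, not the authors' implementation: the sigma, structuring-element length, orientations and the crude final threshold are assumed placeholders.

    # Illustrative sketch of directional derivative-of-Gaussian responses and a
    # multidirectional white top-hat for vessel enhancement. Parameter values
    # are placeholders, not those of the paper.
    import numpy as np
    from scipy import ndimage as ndi

    def dog_responses(img, sigma=1.5, angles_deg=(0, 45, 90, 135)):
        """Directional first-order derivative-of-Gaussian responses."""
        ix = ndi.gaussian_filter(img, sigma, order=(0, 1))  # derivative along x
        iy = ndi.gaussian_filter(img, sigma, order=(1, 0))  # derivative along y
        out = []
        for a in angles_deg:
            t = np.deg2rad(a)
            out.append(np.cos(t) * ix + np.sin(t) * iy)
        return out  # one response map per orientation

    def line_footprint(length, angle_deg):
        """Binary line-shaped structuring element for a directional top-hat."""
        fp = np.zeros((length, length), dtype=bool)
        c = length // 2
        t = np.deg2rad(angle_deg)
        for r in range(-c, c + 1):
            x = int(round(c + r * np.cos(t)))
            y = int(round(c + r * np.sin(t)))
            fp[y, x] = True
        return fp

    def vessel_map(img, length=15, angles_deg=(0, 45, 90, 135)):
        """Vessel enhancement: max of white top-hats over several orientations."""
        inv = img.max() - img            # vessels are darker than the background
        tophats = [ndi.white_tophat(inv, footprint=line_footprint(length, a))
                   for a in angles_deg]
        return np.max(tophats, axis=0)   # orientation map of enhanced vessels

    # usage (hypothetical grayscale fundus image as a float array):
    # enhanced = vessel_map(img)
    # centers  = dog_responses(img)
    # segmented = enhanced > enhanced.mean() + enhanced.std()  # crude threshold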
The Parable of Google Flu: Traps in Big Data Analysis
In February 2013, Google Flu Trends (GFT) made headlines but not for a reason that Google executives or the creators of the flu tracking system would have hoped. Nature reported that GFT was predicting more than double the proportion of doctor visits for influenza-like illness (ILI) than the Centers for Disease Control and Prevention (CDC), which bases its estimates on surveillance reports from laboratories across the United States (1, 2). This happened despite the fact that GFT was built to predict CDC reports. Given that GFT is often held up as an exemplary use of big data (3, 4), what lessons can we draw from this error?
Effects of weight lifting training combined with plyometric exercises on physical fitness, body composition, and knee extension velocity during kicking in football.
The effects of a training program consisting of weight lifting combined with plyometric exercises on kicking performance, myosin heavy-chain composition (vastus lateralis), physical fitness, and body composition (using dual-energy X-ray absorptiometry (DXA)) were examined in 37 male physical education students divided randomly into a training group (TG: 16 subjects) and a control group (CG: 21 subjects). The TG followed 6 weeks of combined weight lifting and plyometric exercises. In all subjects, tests were performed to measure their maximal angular speed of the knee during instep kicks on a stationary ball. Additional tests for muscle power (vertical jump), running speed (30 m running test), anaerobic capacity (Wingate and 300 m running tests), and aerobic power (20 m shuttle run tests) were also performed. Training resulted in muscle hypertrophy (+4.3%), increased peak angular velocity of the knee during kicking (+13.6%), increased percentage of myosin heavy-chain (MHC) type IIa (+8.4%), increased 1 repetition maximum (1 RM) of inclined leg press (ILP) (+61.4%), leg extension (LE) (+20.2%), leg curl (+15.9%), and half squat (HQ) (+45.1%), and enhanced performance in the vertical jump (all p < or = 0.05). In contrast, MHC type I was reduced (-5.2%, p < or = 0.05) after training. In the control group, these variables remained unchanged. In conclusion, 6 weeks of strength training combining weight lifting and plyometric exercises results in significant improvement of kicking performance, as well as other physical capacities related to success in football (soccer).
Implementation of an inexpensive EEG headset for the pattern recognition purpose
There are many types of bio-signals with various prospective control applications. In this work, a possible control application domain of the electroencephalographic signal obtained from an easily available, inexpensive EEG headset, the Emotiv EPOC, is presented. The work also involved the application of an embedded system platform. That choice constrained the selection of an appropriate signal processing method, as embedded platforms are characterised by limited efficiency and low computing power. Implementation on the embedded platform extends the possible future applications of the proposed BCI. It also gives more flexibility, as the platform is able to simulate various environments. Traditional statistical methods were neither used nor described in this work.
Emotions in robot psychology
In his famous thought experiments on synthetic vehicles, Valentino Braitenberg stipulated that simple stimulus-response reactions in an organism could evoke the appearance of complex behavior, which, to the unsuspecting human observer, may even appear to be driven by emotions such as fear, aggression, and even love (Braitenberg, Vehikel. Experimente mit künstlichen Wesen, Lit Verlag, 2004). In fact, humans appear to have a strong propensity to anthropomorphize, driven by our inherent desire for predictability that will quickly lead us to discern patterns, cause-and-effect relationships, and yes, emotions, in animated entities, be they natural or artificial. But might there be reasons, that we should intentionally “implement” emotions into artificial entities, such as robots? How would we proceed in creating robot emotions? And what, if any, are the ethical implications of creating “emotional” robots? The following article aims to shed some light on these questions with a multi-disciplinary review of recent empirical investigations into the various facets of emotions in robot psychology.
ChemNet: A Transferable and Generalizable Deep Neural Network for Small-Molecule Property Prediction
With access to large datasets, deep neural networks (DNN) have achieved human-level accuracy in image and speech recognition tasks. In chemistry, however, large standardized and labelled datasets are scarce, and for many chemical properties of research interest the available data are inherently small and fragmented. In this work, we explore transfer learning techniques in conjunction with the existing Chemception CNN model to create a transferable and generalizable deep neural network for small-molecule property prediction. Our latest model, ChemNet, learns in a semi-supervised manner from inexpensive labels computed from the ChEMBL database. When fine-tuned to the Tox21, HIV and FreeSolv datasets, which cover three separate chemical properties that ChemNet was not originally trained on, we demonstrate that ChemNet exceeds the performance of existing Chemception models and other contemporary DNN models. Furthermore, as ChemNet has been pre-trained on a large, diverse chemical database, it can be used as a general-purpose plug-and-play deep neural network for the prediction of novel small-molecule chemical properties.
Adaptive Fuzzy Urban Traffic Flow Control Using a Cooperative Multi-Agent System based on Two Stage Fuzzy Clustering
The traffic congestion problem in urban areas is worsening because traditional traffic signal control systems cannot provide efficient traffic control. Therefore, dynamic traffic signal control in Intelligent Transportation Systems (ITS) has recently received increasing attention. This study devises an adaptive and cooperative multi-agent fuzzy system for decentralized traffic signal control. To achieve this, we have worked on a model with three levels of control: every intersection is controlled by its own traffic situation, by recommendations from correlated intersections, and by a knowledge base that provides its traffic pattern. This study focuses on the prediction mechanism of our architecture: it finds the most correlated intersections with a two-stage fuzzy clustering algorithm that identifies, from clustering membership degrees, the intersections with the greatest effect on a specific intersection. We have also developed a NetLogo-based traffic simulator to serve as the agents' world. Our approach is tested on traffic control of a large network of connected junctions, and the results obtained are promising: the average delay time can be reduced by 42.76% compared to the conventional fixed-sequence traffic signal and by 28.77% compared to the vehicle-actuated traffic control strategy. Keywords: MAS; Intelligent Transportation System; Fuzzy Control; Fuzzy Clustering; Traffic Light Control
Secure Coding: Building Security into the Software Development Life Cycle
Many of the security properties that are outlined repeatedly in the newer regulations and standards can easily be side-stepped. Too often the culprits are unsophisticated software development techniques, a lack of security-focused quality assurance, and scarce security training for software developers, software architects, and project managers. To meet future needs, opportunities, and threats associated with information security, security needs to be "baked in" to the overall systems development life-cycle process. Information security and privacy loom ever larger as issues for public and private sector organizations alike today. Government regulations and industry standards attempt to address these issues. Computer hardware and software providers invest in meeting both regulatory and market demands for information security and privacy. And individual organizations — corporations and government agencies alike — are voicing concern about the problem.
Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation
Despite its high incidence and the great development of literature, there is still controversy about the optimal management of Achilles tendon rupture. The several techniques proposed to treat acute ruptures can essentially be classified into: conservative management (cast immobilization or functional bracing), open repair, minimally invasive techniques and percutaneous repair with or without augmentation. Although chronic ruptures represent a different chapter, the ideal treatment seems to be surgical too (debridement, local tissue transfer, augmentation and synthetic grafts). In this paper we reviewed the literature on acute injuries. INTRODUCTION The Achilles is the strongest and the largest tendon in the body and it can normally withstand several times a subject's body weight. Achilles tendon rupture is frequent and it has been shown to cause significant morbidity; regardless of treatment, major functional deficits persist 1 year after acute Achilles tendon rupture [1] and only 50-60% of elite athletes return to pre-injury levels following the rupture [2]. Most Achilles tendon ruptures are promptly diagnosed, but at first exam physicians may miss up to 20% of these lesions [3]. The definition of an old, chronic or neglected rupture is variable: the most used timeframe is 4 to 10 weeks [4]. The diagnosis of chronic rupture can be more difficult because the gap palpable in acute ruptures is no longer present and has been replaced by fibrous scar tissue. Typically chronic ruptures occur 2 to 6 cm above the calcaneal insertion with extensive scar tissue deposition between the retracted tendon stumps [5], and the blood supply to this area is poor. In this lesion the tendon end has usually retracted, so the management must differ from that of the acute lesion. Despite its high incidence and the great development of literature about this topic, there is still controversy about the optimal management of Achilles tendon rupture [6]. The several techniques proposed to treat acute ruptures can essentially be classified into: conservative management (cast immobilization or functional bracing), open repair, minimally invasive techniques and percutaneous repair [7] with or without augmentation. Chronic ruptures represent a different chapter and the ideal treatment seems to be surgical [3]: the techniques frequently used are debridement, local tissue transfer, augmentation and synthetic grafts [8].
Conservative treatment using a short leg resting cast in an equinus position is probably justified for elderly patients who have lower functional requirements or increased risk of poor surgical healing, such as individuals with diabetes mellitus or those treated with immunosuppressive drugs. In conservative treatment, the ankle is traditionally immobilized in maximal plantarflexion, so as to re-approximate the two stumps, and a cast is worn to enable the tendon tissue to undergo biological repair. Advantages include the avoidance of surgical complications [9-11] and hospitalization, and cost minimization. However, conservative treatment is associated with a high rate of tendon re-rupture (up to 20%) [12]. Operative treatment can ensure tendon approximation and improve healing, and thus leads to a lower re-rupture rate (about 2-5%). However, complications such as wound infections, skin tethering, sural nerve damage and hypertrophic scarring have been reported to range up to 34% [13]. The clinically most commonly used suture techniques for the ruptured Achilles tendon are the Bunnell [14,15] and Kessler techniques [16-18]. Minimally invasive surgical techniques (using limited incisions or percutaneous techniques) are considered to reduce the risk of operative complications and appear successful in preventing re-rupture in cohort studies [19,20]. Ma and Griffith originally described the percutaneous repair, which is a closed procedure performed under local anesthesia using various surgical techniques and instruments. The advantages of this technique are reduced rates of complications such as infections, nerve lesions or re-ruptures [21]. The surgical repair of a rupture of the Achilles tendon with the AchillonTM device and immediate weight-bearing has shown fewer complications and faster rehabilitation [22]. A thoughtful, comprehensive and responsive rehabilitation program is necessary after the operative treatment of acute Achilles lesions. First of all, the purposes of the rehabilitation program are to obtain a reduction of pain and swelling; secondly, progress toward the gradual recovery of ankle motion and power; lastly, the restoration of coordinated activity and safe return to daily life and athletic activity [23]. An important point to consider is the immediate postoperative management, which includes immobilization of the ankle and limited or prohibited weight-bearing [24].
Transfer of several phytopathogenic Pseudomonas species to Acidovorax as Acidovorax avenae subsp. avenae subsp. nov., comb. nov., Acidovorax avenae subsp. citrulli, Acidovorax avenae subsp. cattleyae, and Acidovorax konjaci.
DNA-rRNA hybridizations, DNA-DNA hybridizations, polyacrylamide gel electrophoresis of whole-cell proteins, and a numerical analysis of carbon assimilation tests were carried out to determine the relationships among the phylogenetically misnamed phytopathogenic taxa Pseudomonas avenae, Pseudomonas rubrilineans, "Pseudomonas setariae," Pseudomonas cattleyae, Pseudomonas pseudoalcaligenes subsp. citrulli, and Pseudomonas pseudoalcaligenes subsp. konjaci. These organisms are all members of the family Comamonadaceae, within which they constitute a separate rRNA branch. Only P. pseudoalcaligenes subsp. konjaci is situated on the lower part of this rRNA branch; all of the other taxa cluster very closely around the type strain of P. avenae. When they are compared phenotypically, all of the members of this rRNA branch can be differentiated from each other, and they are, as a group, most closely related to the genus Acidovorax. DNA-DNA hybridization experiments showed that these organisms constitute two genotypic groups. We propose that the generically misnamed phytopathogenic Pseudomonas species should be transferred to the genus Acidovorax as Acidovorax avenae and Acidovorax konjaci. Within Acidovorax avenae we distinguished the following three subspecies: Acidovorax avenae subsp. avenae, Acidovorax avenae subsp. cattleyae, and Acidovorax avenae subsp. citrulli. Emended descriptions of the new taxa are presented.
Children, Ethics, and the Law: Professional Issues and Cases
The intended readership of this volume is the full range of behavioral scientists, mental health professionals, and students aspiring to such roles who work with children. This includes psychologists (applied, clinical, counseling, developmental, school, including academics, researchers, and practitioners), family counselors, psychiatrists, social workers, psychiatric nurses, child protection workers, and any other mental health professionals who work with children, adolescents, and their families.
Learning deep representations by mutual information estimation and maximization
This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.
A dual system model of preferences under risk.
This article presents a dual system model (DSM) of decision making under risk and uncertainty according to which the value of a gamble is a combination of the values assigned to it independently by the affective and deliberative systems. On the basis of research on dual process theories and empirical research in Hsee and Rottenstreich (2004) and Rottenstreich and Hsee (2001) among others, the DSM incorporates (a) individual differences in disposition to rational versus emotional decision making, (b) the affective nature of outcomes, and (c) different task construals within its framework. The model has good descriptive validity and accounts for (a) violation of nontransparent stochastic dominance, (b) fourfold pattern of risk attitudes, (c) ambiguity aversion, (d) common consequence effect, (e) common ratio effect, (f) isolation effect, and (g) coalescing and event-splitting effects. The DSM is also used to make several novel predictions of conditions under which specific behavior patterns may or may not occur.
Knowledge Growth in Teaching Mathematics/Science with Spreadsheets: Moving PCK to TPACK through Online Professional Development.
Inservice teachers need ways to gain an integrated knowledge of content, pedagogy, and technologies that reflects new ways of teaching and learning in the 21st century. This interpretive study examined inservice K–8 teachers' growth in their pedagogical content knowledge (PCK) toward technology, pedagogy, and content knowledge (TPACK) in an online graduate course designed for integrating dynamic spreadsheets as teaching and learning tools in mathematics and science. With the lens of four TPACK components (Niess, 2005), the analysis describes teachers' development from recognizing to accepting, adapting, and exploring TPACK levels. Implications and recommendations for the design of future professional development courses and continuing research are identified to support inservice teachers' knowledge growth for teaching with technologies. (Keywords: Teacher knowledge, spreadsheets, inservice teachers, elementary and middle school, online, professional development) More than 20 years ago, Shulman (1986, 1987) proposed the construct of pedagogical content knowledge (PCK) as knowledge teachers need for teaching. This construct recognized that teachers rely on more than content knowledge as they guide students in learning that content. PCK was proposed as an integrated knowledge structure of subject area, knowledge of students, pedagogical knowledge, and knowledge of the environmental context as teachers engaged in planning, teaching, and assessing activities. In response, many teacher educators reconstructed their teacher preparation programs based on research and understanding of the development of PCK (Niess, 2001). At the time of the recognition of PCK, digital technologies were becoming more powerful and accessible as potential educational learning tools. The assumption was that these technologies were like other educational tools, and the PCK development in these programs provided adequate preparation for the knowledge of teaching with educational tools, including digital technologies. However, evidence abounded that teachers' PCK did not automatically translate to integrating knowledge about teaching and learning with these technologies (Pierson, 2001). Teacher educators were challenged to identify preparation for preservice and inservice teachers to extend their PCK to a more robust knowledge for teaching with technologies. During the 1990s, spreadsheets emerged as potential tools for learning mathematics and science. Although the designers of spreadsheet programs had not focused on designing educational tools, spreadsheets contain features for modeling situations and analyzing change in various educational contexts. Linking key problem variables to tables and charts presented dynamic environments that afforded opportunities to engage in algebraic reasoning even in elementary grades. With access to multiple data collection devices (probeware to gather temperatures, light …
Wearable, Wireless EEG Solutions in Daily Life Applications: What are we Missing?
Monitoring human brain activity has great potential in helping us understand the functioning of our brain, as well as in preventing mental disorders and cognitive decline and improve our quality of life. Noninvasive surface EEG is the dominant modality for studying brain dynamics and performance in real-life interaction of humans with their environment. To take full advantage of surface EEG recordings, EEG technology has to be advanced to a level that it can be used in daily life activities. Furthermore, users have to see it as an unobtrusive option to monitor and improve their health. To achieve this, EEG systems have to be transformed from stationary, wired, and cumbersome systems used mostly in clinical practice today, to intelligent wearable, wireless, convenient, and comfortable lifestyle solutions that provide high signal quality. Here, we discuss state-of-the-art in wireless and wearable EEG solutions and a number of aspects where such solutions require improvements when handling electrical activity of the brain. We address personal traits and sensory inputs, brain signal generation and acquisition, brain signal analysis, and feedback generation. We provide guidelines on how these aspects can be advanced further such that we can develop intelligent wearable, wireless, lifestyle EEG solutions. We recognized the following aspects as the ones that need rapid research progress: application driven design, end-user driven development, standardization and sharing of EEG data, and development of sophisticated approaches to handle EEG artifacts.
Inside "Big Data management": ogres, onions, or parfaits?
In this paper we review the history of systems for managing "Big Data" as well as today's activities and architectures from the (perhaps biased) perspective of three "database guys" who have been watching this space for a number of years and are currently working together on "Big Data" problems. Our focus is on architectural issues, and particularly on the components and layers that have been developed recently (in open source and elsewhere) and on how they are being used (or abused) to tackle challenges posed by today's notion of "Big Data". Also covered is the approach we are taking in the ASTERIX project at UC Irvine, where we are developing our own set of answers to the questions of the "right" components and the "right" set of layers for taming the "Big Data" beast. We close by sharing our opinions on what some of the important open questions are in this area as well as our thoughts on how the data-intensive computing community might best seek out answers.
The Recognition of Human Movement Using Temporal Templates
A new view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template: a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two-component version of the templates: the first value is a binary value indicating the presence of motion and the second value is a function of the recency of motion in a sequence. We then develop a recognition method matching temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real-time on standard platforms. Index Terms: Motion recognition, computer vision.
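A small numpy sketch of the two-component temporal template idea described above, with a binary motion-presence image and a recency-of-motion image; the frame-difference threshold and the duration parameter tau are assumptions, not values from the paper.

    # Sketch of a motion-presence image (binary) and motion-history image
    # (recency-weighted), built from simple frame differencing.
    import numpy as np

    def temporal_templates(frames, tau=30, diff_thresh=25):
        """frames: sequence of grayscale frames (H, W), uint8 or float."""
        frames = np.asarray(frames, dtype=np.float32)
        mei = np.zeros(frames.shape[1:], dtype=bool)        # presence of motion
        mhi = np.zeros(frames.shape[1:], dtype=np.float32)  # recency of motion
        for t in range(1, len(frames)):
            moving = np.abs(frames[t] - frames[t - 1]) > diff_thresh
            mei |= moving
            mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
        return mei, mhi / tau  # MHI normalised so that 1.0 marks the most recent motion

    # Matching against stored action views could then compare shape statistics
    # (e.g. image moments) of (mei, mhi) with those of known actions -- not shown here.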
GreeDi: An energy efficient routing algorithm for big data on cloud
The ever-increasing density of cloud computing parties, i.e. users, services, providers and data centres, has led to a significant exponential growth in: data produced and transferred among the cloud computing parties; network traffic; and the energy consumed by the cloud computing massive infrastructure, which is required to respond quickly and effectively to users' requests. Transferring big data volumes among the aforementioned parties requires a high-bandwidth connection, which consumes larger amounts of energy than just processing and storing big data on cloud data centres, and hence produces high carbon dioxide emissions. This power consumption is highly significant when transferring big data into a data centre located relatively far from the user's geographical location. Thus, it becomes highly necessary to locate the lowest energy consumption route between the user and the designated data centre, while making sure the user's requirements, e.g. response time, are met. The main contribution of this paper is GreeDi, a network-based routing algorithm to find the most energy-efficient path to the cloud data centre for processing and storing big data. The algorithm is first formalised by the situation calculus. Linear, goal and dynamic programming approaches are used to model the algorithm. The algorithm is then evaluated against the baseline shortest path algorithm with the minimum number of nodes traversed, using a real Italian ISP physical network topology.
Excavating Culture: Ethnicity and Context as Predictors of Parenting Behavior
Ethnic, socioeconomic, and contextual predictors of parenting and family socialization practices were examined among African American and European American families. This is one of a set of coordinated studies presented in this special issue (Le et al.). With the goal of sampling African American and European American children and families that were roughly equivalent on socioeconomic indicators, 103 mothers and their children were interviewed when the children were in kindergarten, and 83.5% were interviewed again in fourth grade. There were no ethnic differences in mothers' reports of warmth and communication at kindergarten; mothers' and children's reports of behavioral control at fourth grade, and children's reports of warmth at fourth grade. Among the ethnic differences in the parenting constructs, a number of them were related to cultural variables. For example, African American mothers expressed higher levels of self-efficacy and this was positively related to beliefs in communicating ethnic pride ...
Wisecrackers: A theory-grounded investigation of phishing and pretext social engineering threats to information security
The collection of information about people by businesses and governments is ubiquitous. One of the main threats to people's privacy comes from human carelessness with this information, yet little empirical research has studied behaviors associated with information carelessness and the ways that people exploit this vulnerability. The studies that have investigated this important question have not been grounded in theory. In particular, the extant literature reveals little about social engineering threats and the reasons why people may or may not fall victim. Synthesizing theory from the marketing literature to explain consumer behavior, an empirical field study was conducted to see if factors that account for successful marketing campaigns may also account for successful social engineering attacks. People have become increasingly aware of the pervasive threats to information security, and there are a variety of solutions now available for solving the problem of information insecurity, such as improving technologies, including the application of advanced cryptography, or techniques, such as performing risk analyses and risk mitigation (Bresz, 2004; Sasse, Brostoff & Weirich, 2004). There have also been important suggestions from the information systems (IS) security literature that include augmenting security procedures as a solution (cf. Debar & Viinikka 2006), addressing situational factors such as reducing workload so that security professionals have time to implement the recommended procedures (Albrechtsen, 2007), improving the quality of policies (von Solms & von Solms, 2004), improving the alignment between an organization's security goals and its practices (Leach, 2003), and gaining improvements from software developers regarding the security implementations during the software development cycle (Jones & Rastogi, 2004). Yet despite all these recommendations, people often fail to take basic security precautions, resulting in billions of dollars annually in individual and corporate losses. "Knowing better, but not doing better" is thus one of the key scholarly and practical issues that have not been fully addressed. One area of particular concern involves threats from social engineering. Social engineering consists of techniques used to manipulate people into performing actions or divulging confidential information (Mitnick & Simon, 2002). Social engineers often attempt to persuade potential victims with appeals to strong emotions such as excitement or fear, whereas others utilize ways to establish interpersonal relationships or create a feeling of trust and commitment (Gao & Kim, 2007). For example, they may promise that a valuable prize or interest from a transfer bank deposit will be given if the victim complies with a request …
Am I wasting my time organizing email?: a study of email refinding
We all spend time every day looking for information in our email, yet we know little about this refinding process. Some users expend considerable preparatory effort creating complex folder structures to promote effective refinding. However modern email clients provide alternative opportunistic methods for access, such as search and threading, that promise to reduce the need to manually prepare. To compare these different refinding strategies, we instrumented a modern email client that supports search, folders, tagging and threading. We carried out a field study of 345 long-term users who conducted over 85,000 refinding actions. Our data support opportunistic access. People who create complex folders indeed rely on these for retrieval, but these preparatory behaviors are inefficient and do not improve retrieval success. In contrast, both search and threading promote more effective finding. We present design implications: current search-based clients ignore scrolling, the most prevalent refinding behavior, and threading approaches need to be extended.
Maintaining Large-Scale Rechargeable Sensor Networks Perpetually via Multiple Mobile Charging Vehicles
Wireless energy transfer technology based on magnetic resonant coupling has been emerging as a promising technology for wireless sensor networks (WSNs) by providing controllable yet perpetual energy to sensors. In this article, we study the deployment of the minimum number of mobile charging vehicles to charge sensors in a large-scale WSN so that none of the sensors will run out of energy, for which we first advocate a flexible on-demand charging paradigm that decouples sensor energy charging scheduling from the design of sensing data routing protocols. We then formulate a novel optimization problem of scheduling mobile charging vehicles to charge life-critical sensors in the network with an objective to minimize the number of mobile charging vehicles deployed, subject to the energy capacity constraint on each mobile charging vehicle. As the problem is NP-hard, we instead propose an approximation algorithm with a provable performance guarantee if the energy consumption of each sensor during each charging tour is negligible. Otherwise, we devise a heuristic algorithm by modifying the proposed approximation algorithm. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithms are very promising, and the solutions obtained are within a small fraction of the optimal ones. To the best of our knowledge, this is the first approximation algorithm with a nontrivial approximation ratio for a novel scheduling problem of multiple mobile charging vehicles for charging sensors.
The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections.
Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India's 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.
From big smartphone data to worldwide research: The Mobile Data Challenge
This paper presents an overview of the Mobile Data Challenge (MDC), a large-scale research initiative aimed at generating innovations around smartphone-based research, as well as community-based evaluation of mobile data analysis methodologies. First, we review the Lausanne Data Collection Campaign (LDCC) – an initiative to collect a unique, longitudinal smartphone data set for the MDC. Then, we introduce the Open and Dedicated Tracks of the MDC; describe the specific data sets used in each of them; discuss the key design and implementation aspects introduced in order to generate privacy-preserving and scientifically relevant mobile data resources for wider use by the research community; and summarize the main research trends found among the 100+ challenge submissions. We conclude by discussing the main lessons learned from the participation of several hundred researchers worldwide in the MDC Tracks.
Fast head modeling for animation
This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct a 3D facial model for animation from two orthogonal pictures taken from front and side views, or from range data obtained from any available resource. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with the detected feature points. Fine modifications follow if range data are available. Automatic texture mapping is employed using an image composed from the two pictures. The reconstructed 3D face can be animated immediately with given expression parameters. Several faces obtained by applying the same methodology to different input data to get a final animatable face are illustrated.
Hippocampal structural asymmetry in unsuccessful psychopaths
BACKGROUND Structural and functional hippocampal abnormalities have been previously reported in institutionalized psychopathic and aggressive populations. This study assessed whether prior findings of a right greater than left (R > L) functional asymmetry in caught violent offenders generalize to the structural domain in unsuccessful, caught psychopaths. METHODS Left and right hippocampal volumes were assessed using structural magnetic resonance imaging (MRI) in 23 control subjects, 16 unsuccessful psychopaths, and 12 successful (uncaught) community psychopaths and transformed into standardized space. RESULTS Unsuccessful psychopaths showed an exaggerated structural hippocampal asymmetry (R > L) relative both to successful psychopaths and control subjects (p <.007) that was localized to the anterior region. This effect could not be explained by environmental and diagnostic confounds and constitutes the first brain imaging analysis of successful and unsuccessful psychopaths. CONCLUSIONS Atypical anterior hippocampal asymmetries in unsuccessful psychopaths may reflect an underlying neurodevelopmental abnormality that disrupts hippocampal-prefrontal circuitry, resulting in affect dysregulation, poor contextual fear conditioning, and insensitivity to cues predicting capture.
Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain
One of the most challenging problems in modern neuroimaging is detailed characterization of neurodegeneration. Quantifying spatial and longitudinal atrophy patterns is an important component of this process. These spatiotemporal signals will aid in discriminating between related diseases, such as frontotemporal dementia (FTD) and Alzheimer's disease (AD), which manifest themselves in the same at-risk population. Here, we develop a novel symmetric image normalization method (SyN) for maximizing the cross-correlation within the space of diffeomorphic maps and provide the Euler-Lagrange equations necessary for this optimization. We then turn to a careful evaluation of our method. Our evaluation uses gold standard, human cortical segmentation to contrast SyN's performance with a related elastic method and with the standard ITK implementation of Thirion's Demons algorithm. The new method compares favorably with both approaches, in particular when the distance between the template brain and the target brain is large. We then report the correlation of volumes gained by algorithmic cortical labelings of FTD and control subjects with those gained by the manual rater. This comparison shows that, of the three methods tested, SyN's volume measurements are the most strongly correlated with volume measurements gained by expert labeling. This study indicates that SyN, with cross-correlation, is a reliable method for normalizing and making anatomical measurements in volumetric MRI of patients and at-risk elderly individuals.
Temporal plannability by variance of the episode length
Optimization of decision problems in stochastic environments is usually concerned with maximizing the probability of achieving the goal and minimizing the expected episode length. For interacting agents in time-critical applications, learning whether subtasks (events) or the full task can be scheduled is an additional relevant issue. Besides, there exist highly stochastic problems where the actual trajectories show great variety from episode to episode, but completing the task takes almost the same amount of time. The identification of sub-problems of this nature may promote, e.g., planning, scheduling and segmentation of Markov decision processes. In this work, formulae for the average duration as well as the standard deviation of the duration of events are derived. We show that the emerging Bellman-type equation is a simple extension of Sobel's work (1982) and that methods of dynamic programming as well as methods of reinforcement learning can be applied. A computer demonstration on a toy problem serves to highlight the principle.
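As a hedged illustration of the kind of Bellman-type recursion the abstract refers to (not the paper's exact derivation), the first two moments of the episode duration T under a fixed policy pi, counting one time step per transition and using our own symbols M, S and Var, satisfy

    \[ M(s) = \mathbb{E}[T \mid s_0 = s] = 1 + \sum_{s'} P(s' \mid s, \pi)\, M(s'), \]
    \[ S(s) = \mathbb{E}[T^2 \mid s_0 = s] = 1 + \sum_{s'} P(s' \mid s, \pi)\,\big(2\,M(s') + S(s')\big), \]
    \[ \operatorname{Var}[T \mid s_0 = s] = S(s) - M(s)^2, \]

with M and S equal to zero at goal (absorbing) states. Both recursions can be solved by dynamic programming or estimated with temporal-difference style updates, which is consistent with the abstract's remark that both families of methods apply.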
Therapy of Established Postmenopausal Osteoporosis with Monofluorophosphate plus Calcium: Dose-Related Effects on Bone Density and Fracture Rate
Recent experience from different groups suggests that low fluoride doses resulting in moderate increases in bone mineral density (BMD) may be advantageous in terms of fracture-reducing potency. In a randomized prospective 3-year study we examined the therapeutic efficacy of different dosages of monofluorophosphate (MFP) plus calcium in comparison with calcium alone in 134 women with established postmenopausal osteoporosis (mean age 64.0 years, average vertebral fractures per patient 3.6). Group A received 1000 mg calcium/day and a low-dose intermittent MFP regimen (3 months on, 1 month off) corresponding to an average daily fluoride ion dose of 11.2 mg. Group B received 1000 mg calcium/day plus continuous MFP corresponding to 20 mg fluoride ions per day. Group C was treated with 1000 mg calcium alone throughout the study period. Bone density was measured with dual-energy X-ray absorptiometry at L2–4 and three proximal femur areas and with single photon absorptiometry at two radius sites. New vertebral fractures were identified from annual lateral radiographs of the spine. A significant reduction in subjective complaints as measured by a combined pain–mobility score (CPMS) was found in both fluoride groups in comparison with the calcium monotherapy group. Group A showed increases in BMD at all six measuring sites, reaching +12.6% at the spine after 3 years. In group B we found significant increases at the spine, Ward’s triangle and distal radius, but slight decreases at the femoral neck and radius shaft. For the spine the average change amounted to +19.5% after 3 years. In group C losses of BMD were observed at all six sites, with an average loss of 1.6% for the spine at the end of the study. The incidence of new vertebral fractures per 100 patient-years was 8.6, 17.0 and 31.6 in groups A, B and C, respectively. In conclusion, both calcium–MFP regimens resulted in significantly lower vertebral fracture rates than calcium monotherapy. However, the low intermittent MFP regimen, leading to a mean annual increase in spinal BMD of only 4.2%, showed a clear trend to greater effectiveness in reducing vertebral fracture than the higher fluoride dosage that was followed by an average spinal BMD increase of 6.5% per year. Furthermore the rate of fluoride-specific side effects (lower-extremity pain syndrome) was 50% lower in patients receiving the lower fluoride dosage.
Design of GaN-Based MHz Totem-Pole PFC Rectifier
The totem-pole bridgeless power factor correction (PFC) rectifier has recently been recognized as a promising front-end candidate for applications like servers and telecommunication power supplies. This paper begins with a discussion of the advantages of using emerging high-voltage gallium-nitride (GaN) devices in totem-pole PFC rectifiers rather than traditional PFC rectifiers. The critical-mode operation is used in the totem-pole PFC rectifier in order to achieve both high frequency and high efficiency. Then, several high-frequency issues and detailed design considerations are introduced, including extending zero-voltage-switching operation for the entire line-cycle, a variable on-time strategy for zero-crossing distortion suppression, and interleaving control for ripple current cancellation. The volume reduction of differential-mode electromagnetic interference filters is also presented, which benefits greatly from MHz high-frequency operation and multiphase interleaving. Finally, a dual-phase interleaved GaN-based MHz totem-pole PFC rectifier is demonstrated with 99% peak efficiency and 220 W/in^3 power density.
Emotion Cause Events: Corpus Construction and Analysis
Emotion processing has always been a great challenge. Given the fact that an emotion is triggered by cause events and that cause events are an integral part of emotion, this paper constructs a Chinese emotion cause corpus as a first step towards automatic inference of cause-emotion correlation. The corpus focuses on five primary emotions, namely happiness, sadness, fear, anger, and surprise. It is annotated with emotion cause events based on our proposed annotation scheme. Corpus data show that most emotions are expressed with causes, and that causes mostly occur before the corresponding emotion verbs. We also examine the correlations between emotions and cause events in terms of linguistic cues: causative verbs, perception verbs, epistemic markers, conjunctions, prepositions, and others. Results show that each group of linguistic cues serves as an indicator marking the cause events in different structures of emotional constructions. We believe that the emotion cause corpus will be a useful resource for automatic emotion cause detection as well as emotion detection and classification.
An Elimination Theorem for Regular Behaviours with Integration
In this paper we consider a variant of the process algebra ACP with rational time and integration. We shall indicate a subdomain of regular processes for which an Elimination Theorem holds: for each pair of processes p, q in this class there is a process z in this class such that p∥q and z have the same behaviour. Furthermore, we indicate by some simple examples that if the subdomain is restricted or enlarged, then the elimination result is lost. The subdomain has a strong link with the model of timed automata of Alur and Dill.
Impact of TQM Practices on Firm's Performance of Pakistan's Manufacturing Organizations
This empirical study examines the association between total quality management (TQM) practices and performance, i.e. quality, business, and organizational performance. The quantitative data were obtained through a survey of 171 quality managers in Pakistan's manufacturing industry. This study supports the hypothesis that TQM practices positively impact performance. TQM tools and techniques (incentive and recognition systems, process monitoring and control, and continuous improvement) and behavioral factors (fact-based management, top management's commitment to quality, employee involvement and customer focus) contribute to the successful implementation of TQM. The study reports that successful adoption and implementation of TQM practices results in improved organizational performance. The main implication of the findings for managers is that with TQM practices, manufacturing organizations are more likely to achieve better performance in customer satisfaction, employee relations, quality and business performance than without TQM practices.
Ensemble Classification and Extended Feature Selection for Credit Card Fraud Detection
Due to the rise of technology, the possibility of fraud in different areas such as banking has increased. Credit card fraud is a crucial problem in banking and its danger is ever increasing. This paper proposes an advanced data mining method, considering both feature selection and decision cost, for accuracy enhancement of credit card fraud detection. After selecting the best and most effective features using an extended wrapper method, an ensemble classification is performed. The extended feature selection approach includes a prior feature filtering and a wrapper approach using a C4.5 decision tree. Ensemble classification is performed using cost-sensitive decision trees in a decision forest framework. A locally gathered fraud detection dataset is used to evaluate the proposed method. The method is assessed using accuracy, recall, and F-measure as the evaluation metrics and compared with basic classification algorithms including ID3, J48, Naïve Bayes, Bayesian Network, and NB tree. The experiments carried out show that, considering the F-measure as the evaluation metric, the proposed approach yields a 1.8 to 2.4 percent performance improvement compared to the other classifiers.
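A rough, self-contained sketch of the two stages described above: a wrapper-style forward feature selection scored by a decision tree (scikit-learn's CART standing in for C4.5), followed by a small cost-sensitive tree ensemble. The class weights, tree count and F-measure objective are assumptions standing in for the paper's cost settings, not its actual configuration.

    # Illustrative sketch only: greedy wrapper feature selection + a bagged,
    # cost-sensitive decision-tree ensemble for binary fraud labels (0/1).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    def wrapper_select(X, y, max_feats=10, cv=5):
        """Greedy forward selection scored by the F-measure of a decision tree."""
        selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
        while remaining and len(selected) < max_feats:
            scores = []
            for f in remaining:
                feats = selected + [f]
                s = cross_val_score(DecisionTreeClassifier(random_state=0),
                                    X[:, feats], y, cv=cv, scoring="f1").mean()
                scores.append((s, f))
            s, f = max(scores)
            if s <= best_score:
                break                      # no further improvement, stop adding features
            best_score = s
            selected.append(f)
            remaining.remove(f)
        return selected

    def cost_sensitive_forest(X, y, n_trees=25, fraud_cost=5.0):
        """Bagged decision trees; each penalises missed fraud via class_weight."""
        rng = np.random.default_rng(0)
        forest = []
        for _ in range(n_trees):
            idx = rng.integers(0, len(X), len(X))        # bootstrap sample
            tree = DecisionTreeClassifier(class_weight={0: 1.0, 1: fraud_cost})
            forest.append(tree.fit(X[idx], y[idx]))
        return forest

    def predict_majority(forest, X):
        votes = np.mean([t.predict(X) for t in forest], axis=0)
        return (votes >= 0.5).astype(int)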
Identifying Rhetorical Questions in Social Media
Social media provides a platform for seeking information from a large user base. Information seeking in social media, however, occurs simultaneously with users expressing their viewpoints by making statements. Rhetorical questions have the form of a question but serve the function of a statement, and they might mislead platforms assisting information seeking in social media. It is difficult to identify rhetorical questions because they are not syntactically different from other questions. In this paper, we develop a framework to identify rhetorical questions by modeling the motivations of the users who post them. Drawing on linguistic theories, we focus on one user motivation: to implicitly convey a message. We develop a framework based on this motivation to identify rhetorical questions in social media and evaluate it using questions posted on Twitter. This is the first framework to model the motivations for posting rhetorical questions in order to identify them on social media.
Image enhancement using a fusion framework of histogram equalization and Laplacian pyramid
The image enhancement methods based on histogram equalization (HE) often fail to improve local information and sometimes have the fatal flaw of over-enhancement when a quantum jump occurs in the cumulative distribution function of the histogram. To overcome these shortcomings, we propose an image enhancement method based on a modified Laplacian pyramid framework that decomposes an image into band-pass images to improve both the global contrast and local information. For the global contrast, a novel robust HE is proposed to provide a well-balanced mapping function which effectively suppresses the quantum jump. For the local information, noise-reduced and adaptively gained high-pass images are applied to the resultant image. In qualitative and quantitative experimental comparisons, the proposed method shows natural and robust image quality, is suitable for video sequences, and generally achieves higher performance than existing methods.
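As an illustration of the kind of fusion the abstract describes, the following is a minimal sketch: a Laplacian pyramid is built, the low-pass band is histogram-equalized for global contrast, and gained band-pass layers are added back for local detail. The pyramid depth, the plain cv2.equalizeHist (instead of the paper's robust HE), and the detail gain are assumptions, not the authors' method.

```python
# Hypothetical sketch of HE + Laplacian-pyramid fusion (illustrative only).
import cv2
import numpy as np

def enhance(gray, levels=3, detail_gain=1.5):
    # Build Gaussian and Laplacian pyramids.
    gaussian = [gray.astype(np.float32)]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    laplacian = []
    for i in range(levels):
        up = cv2.pyrUp(gaussian[i + 1], dstsize=gaussian[i].shape[::-1])
        laplacian.append(gaussian[i] - up)

    # Global contrast: equalize the coarsest (low-pass) level.
    base = cv2.equalizeHist(np.clip(gaussian[levels], 0, 255).astype(np.uint8))
    recon = base.astype(np.float32)

    # Local information: add back amplified band-pass layers.
    for i in reversed(range(levels)):
        recon = cv2.pyrUp(recon, dstsize=laplacian[i].shape[::-1])
        recon += detail_gain * laplacian[i]

    return np.clip(recon, 0, 255).astype(np.uint8)
```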
The numerical solution of second-order boundary-value problems by the collocation method with Haar wavelets
Multimodal Deep Boltzmann Machines for feature selection on gene expression data
In this paper, a multimodal Deep Boltzmann Machine (DBM) is employed to learn important genes (biomarkers) from gene expression data of human colorectal carcinoma. The learning process involves gene expression data and several patient phenotypes such as lymph node and distant metastasis occurrence. The proposed framework uses the multimodal DBM to train on records with metastasis occurrence. The trained model is then tested on records with no metastasis occurrence. After that, the Mean Squared Error (MSE) between the reconstructed and the original gene expression data is measured. Genes are ranked by MSE value, with the top-ranked gene having the highest MSE. K-means clustering is then performed using various numbers of top-ranked genes, and the genes that give the highest purity index are considered the important genes. The important genes obtained from the proposed framework are compared with those from a two-sample t-test. In terms of metastasis classification accuracy, the genes selected by the proposed framework give higher results than the top genes from the two-sample t-test.
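To make the ranking step concrete, the sketch below assumes the DBM reconstruction has already been computed and only illustrates the per-gene MSE ranking and the purity-based choice of how many top genes to keep. The candidate sizes, the purity definition, and the use of scikit-learn's KMeans are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: rank genes by reconstruction MSE, then pick the
# prefix of top-ranked genes that maximises k-means cluster purity.
import numpy as np
from sklearn.cluster import KMeans

def purity(labels_true, labels_pred):
    # Fraction of samples assigned to the majority true class of their cluster.
    total = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        total += np.bincount(members).max()
    return total / len(labels_true)

def select_genes(X, X_recon, y, candidate_sizes=(10, 20, 50, 100)):
    # X, X_recon: samples x genes; y: integer phenotype labels.
    mse_per_gene = ((X - X_recon) ** 2).mean(axis=0)   # one MSE per gene
    ranking = np.argsort(mse_per_gene)[::-1]           # highest MSE first
    best_k, best_purity = None, -1.0
    for k in candidate_sizes:
        clusters = KMeans(n_clusters=len(np.unique(y)), n_init=10,
                          random_state=0).fit_predict(X[:, ranking[:k]])
        p = purity(y, clusters)
        if p > best_purity:
            best_k, best_purity = k, p
    return ranking[:best_k], best_purity
```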
The efficacy of topical betamethasone for treating phimosis: a comparison of two treatment regimens.
OBJECTIVES To compare the efficacy of two different topical betamethasone treatment regimens with respect to outcome and untoward effects in boys with phimosis. METHODS Boys with phimosis whose parents opted for medical management were treated with topical betamethasone (0.05%) and manual retraction. One author (J.S.P.) prescribed betamethasone twice daily (BID) for 30 days (60 doses), and the other author (L.S.P.) prescribed betamethasone thrice daily (TID) for 21 days (63 doses). All boys had severe phimosis (prepuce unretractable to evaluate meatus) before treatment. The degree of phimosis was graded 1 month after treatment as severe, moderate (prepuce retractable to less than 50% glanular exposure), or mild (penile adhesions). Chi-square analysis (P <0.05) was used to compare the two groups. Treatment failure was defined as persistent severe phimosis. RESULTS A total of 200 consecutive patients from each treatment group were included. The median patient ages were similar between the groups (3.8 years BID, 4.4 years TID). One child had an untoward effect (candidal dermatitis, TID regimen). There was an 84.5% response rate (moderate to no phimosis) with the BID regimen and an 87% response rate with the TID regimen (P = nonsignificant). Two patients with severe phimosis before treatment were diagnosed with congenital urethral malformations (hypospadias and epispadias) after treatment. CONCLUSIONS The topical application of betamethasone is a highly efficacious, safe, and well-tolerated treatment of phimosis in this large series of boys. The 21-day TID and 30-day BID regimens in conjunction with manual retraction are equally efficacious and can be offered to parents requesting nonsurgical management of phimosis. Untoward effects are rare with either regimen. Important urethral anomalies can occasionally be revealed.
Knowledge tracing: Modeling the acquisition of procedural knowledge
This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called the ideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process called knowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has ‘mastered’ each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels.
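The abstract does not spell out the probability update; the following is a minimal sketch of the standard knowledge-tracing update commonly associated with this line of work. The guess, slip, and learning parameter values and the 0.95 mastery threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a knowledge-tracing update: p(L) is the probability
# that a rule is learned, revised after each observed student attempt.
def knowledge_tracing_update(p_learned, correct,
                             p_guess=0.2, p_slip=0.1, p_transit=0.3):
    # Posterior probability the rule was already learned, given the response.
    if correct:
        evidence = p_learned * (1 - p_slip) + (1 - p_learned) * p_guess
        posterior = p_learned * (1 - p_slip) / evidence
    else:
        evidence = p_learned * p_slip + (1 - p_learned) * (1 - p_guess)
        posterior = p_learned * p_slip / evidence
    # Chance of learning the rule at this practice opportunity.
    return posterior + (1 - posterior) * p_transit

# Example: track one rule until an assumed mastery threshold (0.95) is reached.
p = 0.1
for outcome in [False, True, True, True]:
    p = knowledge_tracing_update(p, outcome)
    if p >= 0.95:
        break
```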
Randomised trial of personalised computer based information for cancer patients.
OBJECTIVE To compare the use and effect of a computer based information system for cancer patients that is personalised using each patient's medical record with a system providing only general information and with information provided in booklets. DESIGN Randomised trial with three groups. Data collected at start of radiotherapy, one week later (when information provided), three weeks later, and three months later. PARTICIPANTS 525 patients started radical radiotherapy; 438 completed follow up. INTERVENTIONS Two groups were offered information via computer (personalised or general information, or both) with open access to computer thereafter; the third group was offered a selection of information booklets. OUTCOMES Patients' views and preferences, use of computer and information, and psychological status; doctors' perceptions; cost of interventions. RESULTS More patients offered the personalised information said that they had learnt something new, thought the information was relevant, used the computer again, and showed their computer printouts to others. There were no major differences in doctors' perceptions of patients. More of the general computer group were anxious at three months. With an electronic patient record system, in the long run the personalised information system would cost no more than the general system. Full access to booklets cost twice as much as the general system. CONCLUSIONS Patients preferred computer systems that provided information from their medical records to systems that just provided general information. This has implications for the design and implementation of electronic patient record systems and reliance on general sources of patient information.
To circ or not to circ: indications, risks, and alternatives to circumcision in the pediatric population with phimosis.
Summary Although there continues to be considerable debate over the merits of circumcision, it is clear that preservation of the pediatric foreskin, even in the presence of phimosis, is a viable option. Steroid topical cream is a painless, less-complicated, and more economical alternative to circumcision for treating phimosis. Success rates are quite high, especially when patient selection is appropriate and parents are adequately instructed on application. In those children in whom topical steroid therapy has failed, there remains a variety of foreskin-preserving surgical options for treating phimosis. Compared to circumcision, these less-invasive techniques are associated with lower morbidities and cost. Furthermore, depending on the tissue-preserving technique used, satisfactory cosmesis is also achieved. Thus, those males who were not circumcised at birth now have medical and surgical options, which will decrease the likelihood of requiring circumcision at an older age. As health care providers in the United States see more and more uncircumcised male children, it is important for these children and their parents to understand the natural history of physiologic phimosis. Additionally, it is the responsibility of health care providers to present the management options available for the treatment of the persistent nonretractile foreskin and/or pathologic phimosis. These options are particularly important for those individuals whose religious, cultural, or personal preference is to retain the foreskin.
A general framework for kernel similarity-based image denoising
Any image can be represented as a function defined on a discrete weighted graph whose vertices are image pixels. Each pixel can be linked to other pixels via graph edges with corresponding weights derived from similarities between image pixels (graph vertices) measured in some appropriate fashion. Image structure is encoded in the Laplacian matrix derived from these similarity weights. Taking advantage of this graph-based point of view, we present a general regularization framework for image denoising. A number of well-known existing denoising methods, such as the bilateral filter, NLM, and LARK, can be described within this formulation. Moreover, we present an analysis of the filtering behavior of the proposed method based on the spectral properties of Laplacian matrices. Some well-established iterative approaches for improving kernel-based denoising, such as diffusion and boosting iterations, are special cases of our general framework. The proposed approach provides a better understanding of enhancement mechanisms in self-similarity-based methods, which can be used for their further improvement. Experimental results verify the effectiveness of this approach for the task of image denoising.
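As an illustration of the graph view in this abstract, the sketch below builds a pixel similarity (kernel) matrix over a small image, row-normalises it into a filter matrix W, and applies a few diffusion iterations z <- W z. The Gaussian range kernel, the small search window, the bandwidth, and the iteration count are assumptions for illustration, not the paper's algorithm, and the dense matrix makes this practical only for tiny images.

```python
# Hypothetical sketch of similarity-graph (Laplacian-based) denoising.
import numpy as np

def kernel_denoise(img, h=0.1, search_radius=2, iters=3):
    rows, cols = img.shape
    y = img.reshape(-1).astype(np.float64)
    n = y.size
    K = np.zeros((n, n))
    coords = [(r, c) for r in range(rows) for c in range(cols)]
    for i, (r, c) in enumerate(coords):
        for j, (r2, c2) in enumerate(coords):
            # Link nearby pixels; weight by intensity similarity.
            if abs(r - r2) <= search_radius and abs(c - c2) <= search_radius:
                K[i, j] = np.exp(-((y[i] - y[j]) ** 2) / (h ** 2))
    W = K / K.sum(axis=1, keepdims=True)   # row-stochastic filter matrix
    # With Laplacian L = I - W, repeated filtering z <- W z acts as diffusion
    # and shrinks the high-frequency (noisy) spectral components.
    z = y.copy()
    for _ in range(iters):
        z = W @ z
    return z.reshape(rows, cols)
```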
User-based meta-search with the co-citation graph
Although there are numerous search engines in the Web environment, none can claim to produce reliable results in all conditions. This problem is becoming more serious considering the exponential growth of the number of Web resources. In response to these challenges, meta-search engines have been introduced to enhance the search process by employing several outstanding search engines as their information resources. In recent years, some approaches have been proposed to handle the result combination problem, which is the fundamental problem in the meta-search environment. In this paper, a new merging/re-ranking method is introduced which uses the characteristics of the Web co-citation graph constructed from the search engines and their returned lists. The information extracted from the co-citation graph is combined and enriched with the users' click-through data as their implicit feedback in an adaptive framework. Experimental results show a noticeable improvement over the basic method as well as some well-known meta-search engines.
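The abstract only names the ingredients (a co-citation graph over returned results plus click-through feedback); the sketch below is one plausible way to combine them into a merged ranking. The degree-based co-citation score, the linear combination, and the weight alpha are hypothetical choices, not the paper's method.

```python
# Hypothetical sketch: merge several engines' result lists using a
# co-citation graph score blended with click-through counts.
from collections import defaultdict
from itertools import combinations

def merge_results(engine_lists, clicks, alpha=0.7):
    # engine_lists: list of ranked URL lists, one list per search engine.
    # clicks: dict url -> number of past user clicks (implicit feedback).
    cocite = defaultdict(int)
    for ranked in engine_lists:
        for u, v in combinations(ranked, 2):
            # Two URLs are "co-cited" when the same engine returns both.
            cocite[u] += 1
            cocite[v] += 1
    urls = {u for ranked in engine_lists for u in ranked}
    max_co = max(cocite.values(), default=1)
    max_cl = max((clicks.get(u, 0) for u in urls), default=1) or 1

    def score(u):
        return alpha * cocite[u] / max_co + (1 - alpha) * clicks.get(u, 0) / max_cl

    return sorted(urls, key=score, reverse=True)
```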
Water pollution status and research on domestic water ecological restoration methods
After investigating the domestic water pollution situation, analysing the investigation and research results, and surveying research on water ecological restoration technologies for polluted water, this paper puts forward specific measures and methods for water pollution prevention and water ecological restoration in HanDan City. The main approaches are the physical method, the chemical method, and the biological-ecological technology method; these methods provide a theoretical basis for landscape ecological restoration. In particular, the biological-ecological technology method is an important theoretical basis for the ecological restoration of urban water landscapes and an important method for ecologically sustainable development.