title | abstract |
---|---|
The LOCATA Challenge Data Corpus for Acoustic Source Localization and Tracking | Algorithms for acoustic source localization and tracking are essential for a wide range of applications such as personal assistants, smart homes, tele-conferencing systems, hearing aids, and autonomous systems. Numerous algorithms have been proposed for this purpose but have so far not been evaluated and compared against each other on a common database. The IEEE-AASP Challenge on sound source localization and tracking (LOCATA) provides a novel, comprehensive data corpus for the objective benchmarking of state-of-the-art algorithms for sound source localization and tracking. The data corpus comprises six tasks, ranging from the localization of a single static sound source with a static microphone array to the tracking of multiple moving speakers with a moving microphone array. It contains real-world multichannel audio recordings, obtained by hearing aids, microphones integrated in a robot head, a planar and a spherical microphone array in an enclosed acoustic environment, as well as positional information about the involved arrays and sound sources, represented by moving human talkers or static loudspeakers. |
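As an illustration of the kind of processing this corpus is meant to benchmark, here is a minimal sketch of time-difference-of-arrival (TDOA) estimation between two microphone channels using GCC-PHAT; the function and the synthetic two-channel signal are our own illustration, not part of the LOCATA tools.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """TDOA (seconds) of `sig` relative to `ref` via PHAT-weighted
    generalized cross-correlation."""
    n = len(sig) + len(ref)
    SIG, REF = np.fft.rfft(sig, n=n), np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12            # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Toy example: the second channel lags the first by 25 samples.
fs = 16000
ch1 = np.random.randn(4096)
ch2 = np.roll(ch1, 25)
print(gcc_phat(ch2, ch1, fs))   # ~25/16000 s, i.e. about 1.56 ms
```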
How Much Qualitative Reasoning is Required in Elementary School Science Test Questions ? | Understanding how to build cognitive systems with commonsense is a difficult problem. Since one goal of qualitative reasoning is to explain human mental models of the continuous world, hopefully qualitative representations and reasoning have a role to play. But how much of a role? Standardized tests used in education provide a potentially useful way to measure both how much qualitative knowledge is used in commonsense science, and to assess progress in qualitative representation and reasoning. This paper analyzes a small corpus of science tests from US classrooms and shows that QR techniques are central in answering 13% of them, and play a role in at least an additional 16%. We found that today’s QR techniques suffice for standard QR questions, but integrating QR with broader knowledge about the world and automatically understanding the questions as expressed in language and pictures provide new research challenges. |
A hinged external fixator for complex elbow dislocations: A multicenter prospective cohort study | BACKGROUND
Elbow dislocations can be classified as simple or complex. Simple dislocations are characterized by the absence of fractures, while complex dislocations are associated with fractures of the radial head, olecranon, or coronoid process. The majority of patients with these complex dislocations are treated with open reduction and internal fixation (ORIF), or arthroplasty in case of a non-reconstructable radial head fracture. If the elbow joint remains unstable after fracture fixation, a hinged elbow fixator can be applied. The fixator provides stability to the elbow joint, and allows for early mobilization. The latter may be important for preventing stiffness of the joint. The aim of this study is to determine the effect of early mobilization with a hinged external elbow fixator on clinical outcome in patients with complex elbow dislocations with residual instability following fracture fixation.
METHODS/DESIGN
The design of the study will be a multicenter prospective cohort study of 30 patients who have sustained a complex elbow dislocation and are treated with a hinged elbow fixator following fracture fixation because of residual instability. Early active motion exercises within the limits of pain will be started immediately after surgery under supervision of a physical therapist. Outcome will be evaluated at regular intervals over the subsequent 12 months. The primary outcome is the Quick Disabilities of the Arm, Shoulder, and Hand score. The secondary outcome measures are the Mayo Elbow Performance Index, Oxford Elbow Score, pain level at both sides, range of motion of the elbow joint at both sides, radiographic healing of the fractures and formation of periarticular ossifications, rate of secondary interventions and complications, and health-related quality of life (Short-Form 36).
DISCUSSION
This study will yield quantitative data on the functional outcome of patients with a complex elbow dislocation who are treated with ORIF and additional stabilization with a hinged elbow fixator.
TRIAL REGISTRATION
The trial is registered at the Netherlands Trial Register (NTR1996). |
Race and racism in Internet Studies: A review and critique | Race and racism persist online in ways that are both new and unique to the Internet, alongside vestiges of centuries-old forms that reverberate significantly both offline and on. As we mark 15 years into the field of Internet studies, it becomes necessary to assess what the extant research tells us about race and racism. This paper provides an analysis of the literature on race and racism in Internet studies in the broad areas of (1) race and the structure of the Internet, (2) race and racism matters in what we do online, and (3) race, social control and Internet law. Then, drawing on a range of theoretical perspectives, including Hall’s spectacle of the Other and DuBois’s view of white culture, the paper offers an analysis and critique of the field, in particular the use of racial formation theory. Finally, the paper points to the need for a critical understanding of whiteness in Internet studies. |
Resource-constrained reasoning using a reasoner composition approach | To increase the interoperability and accessibility of data in sensor-rich systems, there has been a recent proliferation of the use of Semantic Web technologies in such systems. A wide range of applications has emerged, including hazard monitoring and rescue, context-aware computing, environmental monitoring, field studies, and the Internet of Things. These systems often assume a centralized paradigm for data processing, which does not always hold in reality, especially when the systems are deployed in a hostile environment. At runtime, the infrastructure of systems deployed in such an environment is also prone to interference or damage, causing part of the infrastructure to have limited network connectivity or even to be detached from the rest. A solution to this problem is to push intelligence, such as semantic reasoning, down to the device layer. A key enabler for such a solution is the ability to run semantic reasoning on resource-constrained devices. This paper shows how reasoner composition (i.e., automatically adjusting a reasoning approach to preserve only a "well-suited" amount of reasoning for a given ontology) can achieve resource-efficient semantic reasoning. Two novel reasoner composition algorithms are introduced and implemented. Evaluation indicates that they greatly reduce the resources required for OWL reasoning, potentially facilitating greater semantic reasoning on sensor devices. |
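The paper's composition algorithms are not reproduced in the abstract; the following is only a minimal sketch of the general idea, assuming a hypothetical rule-based OWL reasoner whose rule set is pruned to the constructs actually present in the ontology (the rule names and trigger table are our own illustration).

```python
# Hypothetical illustration of reasoner composition: enable only the
# inference rules whose triggering constructs occur in the ontology.
RULE_TRIGGERS = {
    "transitivity": "owl:TransitiveProperty",
    "symmetry": "owl:SymmetricProperty",
    "inverse": "owl:inverseOf",
    "subclass": "rdfs:subClassOf",
}

def compose_reasoner(ontology_constructs):
    """Return the subset of rules worth running for this ontology."""
    return [rule for rule, trigger in RULE_TRIGGERS.items()
            if trigger in ontology_constructs]

# An ontology using only subclass and inverse axioms needs two rules,
# so the transitivity and symmetry rules are never loaded or run.
print(compose_reasoner({"rdfs:subClassOf", "owl:inverseOf"}))
```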
Service mediation and negotiation bootstrapping as first achievements towards self-adaptable grid and cloud services | Novel computing paradigms such as Grid and Cloud computing are gaining ever more importance. In Cloud computing, users pay for the usage of computing power provided as a service, and can negotiate beforehand specific functional and non-functional requirements relevant for the application execution. However, providing computing power as a service poses various research challenges. On the one hand, dynamic, versatile, and adaptable services are required, which can cope with system failures and environmental changes. On the other hand, human interaction with the system should be minimized. In this paper we present the first results in establishing adaptable, versatile, and dynamic services, considering negotiation bootstrapping and service mediation, achieved in the context of the Foundations of Self-Governing ICT Infrastructures (FoSII) project. We discuss novel meta-negotiation and SLA mapping solutions for Grid/Cloud services, bridging the gap between current QoS models and Grid/Cloud middleware and representing important prerequisites for the establishment of autonomic Grid/Cloud services. We present document models for the specification of meta-negotiations and SLA mappings. Thereafter, we discuss a sample architecture for the management of meta-negotiations and SLA mappings. |
New approaches in the management of insomnia: weighing the advantages of prolonged-release melatonin and synthetic melatoninergic agonists | Hypnotic effects of melatonin and melatoninergic drugs are mediated via MT(1) and MT(2) receptors, especially those in the circadian pacemaker, the suprachiasmatic nucleus, which acts on the hypothalamic sleep switch. Therefore, they differ fundamentally from GABAergic hypnotics. Melatoninergic agonists primarily favor sleep initiation and reset the circadian clock to phases allowing persistent sleep, as required in circadian rhythm sleep disorders. A major obstacle for the use of melatonin to support sleep maintenance in primary insomnia results from its short half-life in the circulation. Solutions to this problem have been sought by developing prolonged-release formulations of the natural hormone, or melatoninergic drugs of longer half-life, such as ramelteon, tasimelteon and agomelatine. With all these drugs, improvements of sleep are statistically demonstrable, but remain limited, especially in primary chronic insomnia, so that GABAergic drugs may be indicated. Melatoninergic agonists do not cause next-day hangover and withdrawal effects, or dependence. They do not induce behavioral changes, as sometimes observed with z-drugs. Despite otherwise good tolerability, the use of melatoninergic drugs in children, adolescents, and during pregnancy has been a matter of concern, and should be avoided in autoimmune diseases and Parkinsonism. Problems and limits of melatoninergic hypnotics are compared. |
A robust approach to independent component analysis of signals with high-level noise measurements | We propose a robust approach for independent component analysis (ICA) of signals whose observations are contaminated with high-level additive noise and/or outliers. The source signals may contain mixtures of both sub-Gaussian and super-Gaussian components, and the number of sources is unknown. Our robust approach includes two procedures. In the first procedure, a robust prewhitening technique is used to reduce the power of the additive noise, the dimensionality, and the correlation among sources. A cross-validation technique is introduced to estimate the number of sources in this first procedure. In the second procedure, a nonlinear function is derived from the parameterized t-distribution density model. This nonlinear function is fundamentally robust against the undue influence of outliers. Moreover, the stability of the proposed algorithm and its robustness to misestimation of the parameters (kurtosis) are studied. By combining the t-distribution model with a family of light-tailed (sub-Gaussian) distribution models, we can separate mixtures of sub-Gaussian and super-Gaussian source components. Through the analysis of artificially synthesized data and real-world magnetoencephalographic (MEG) data, we illustrate the efficacy of this robust approach. |
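The abstract does not spell the nonlinearity out; as a hedged illustration, the score function of a Student-t source model with ν degrees of freedom, a common choice for such parameterized t-distribution densities, is

```latex
% Student-t source density and its score (the ICA nonlinearity):
p(y) \propto \left(1 + y^2/\nu\right)^{-(\nu+1)/2}, \qquad
\varphi(y) = -\frac{\mathrm{d}}{\mathrm{d}y}\log p(y)
           = \frac{(\nu+1)\,y}{\nu + y^2}.
```

Because this φ(y) is bounded and decays for large |y|, extreme observations exert only limited influence on the separation updates, which is consistent with the robustness to outliers described above.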
A Relative Approach to Hierarchical Clustering | This paper presents a new approach to agglomerative hierarchical clustering. Classical hierarchical clustering algorithms are based on metrics which only consider the absolute distance between two clusters, merging the pair of clusters with the highest absolute similarity. We propose a relative dissimilarity measure, which considers not only the distance between a pair of clusters, but also how distant they are from the rest of the clusters. It defines for each cluster its relative nearest cluster with respect to the whole data set, instead of the usual absolute nearest cluster. We name the agglomerative hierarchical scheme based on this relative dissimilarity measure Relative Hierarchical Clustering (RHC). The performance of RHC is compared with the classical approach under different inter-cluster distances on three data sets. |
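The exact definition of the relative measure is in the paper, not the abstract; the following sketch assumes one plausible form, normalizing the pairwise distance by the two clusters' average distance to all remaining clusters (our assumption, for illustration only).

```python
import numpy as np

def relative_dissimilarity(d, i, j):
    """d: symmetric matrix of absolute inter-cluster distances.
    Assumed form: the distance between clusters i and j, normalized
    by their mean distance to all other clusters (illustrative only)."""
    others = [k for k in range(len(d)) if k not in (i, j)]
    if not others:
        return d[i, j]
    context = (d[i, others].mean() + d[j, others].mean()) / 2.0
    return d[i, j] / context

d = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 5.0],
              [4.0, 5.0, 0.0]])
# Pair (0, 1) is close in absolute terms and far from cluster 2,
# so its relative dissimilarity is well below 1.
print(relative_dissimilarity(d, 0, 1))
```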
Completeness Theorems with Constructive Proofs for Finite Deterministic 2-Party Functions (full version) | In this paper we present simple but comprehensive combinatorial criteria for completeness of finite deterministic 2-party functions with respect to information-theoretic security. We give a general protocol construction for efficient and statistically secure reduction of oblivious transfer to any finite deterministic 2-party function that fulfills our criteria. For the resulting protocols we prove universal composability. Our results are tight in the sense that our criteria are still necessary for any finite deterministic 2-party function to allow for implementation of oblivious transfer with statistical privacy and correctness.
We unify and generalize results of Joe Kilian (1991, 2000) in two ways. First, we show that his completeness criteria also hold in the UC framework. Second, and this is our main contribution, our criteria also cover a wide class of primitives that are not addressed by previous criteria. We show that there are non-trivial examples of finite deterministic 2-party functions that are neither symmetric nor asymmetric and therefore have not been covered by existing completeness criteria so far. As a corollary of our work, every finite deterministic 2-party function is either complete or can be considered equivalent to a non-complete symmetric 2-party function; this assertion holds true with respect to active adversaries as well as passive adversaries. Thereby, known results on non-complete symmetric 2-party functions are strengthened. |
Why sampling scheme matters: the effect of sampling scheme on landscape genetic results | There has been a recent trend in genetic studies of wild populations where researchers have changed their sampling schemes from sampling pre-defined populations to sampling individuals uniformly across landscapes. This reflects the fact that many species under study are continuously distributed rather than clumped into obvious “populations”. Once individual samples are collected, many landscape genetic studies use clustering algorithms and multilocus genetic data to group samples into subpopulations. After clusters are derived, landscape features that may be acting as barriers are examined and described. In theory, if populations were evenly sampled, this course of action should reliably identify population structure. However, genetic gradients and irregularly collected samples may impact the composition and location of clusters. We built genetic models where individual genotypes were either randomly distributed across a landscape or contained gradients created by neighbor mating for multiple generations. We investigated the influence of six different sampling protocols on population clustering using program STRUCTURE, the most commonly used model-based clustering method for multilocus genotype data. For models where individuals (and their alleles) were randomly distributed across a landscape, STRUCTURE correctly predicted that only one population was being sampled. However, when gradients created by neighbor mating existed, STRUCTURE detected multiple, but different numbers of clusters, depending on sampling protocols. We recommend testing for fine scale autocorrelation patterns prior to sample clustering, as the scale of the autocorrelation appears to influence the results. Further, we recommend that researchers pay attention to the impacts that sampling may have on subsequent population and landscape genetic results. |
Journey of discovery: the night journey project as "video/game art" | This paper describes the development of a video/game art project being produced by media artist Bill Viola in collaboration with a team from the USC Game Innovation Lab, which uses a combination of both video and game technologies to explore the universal experience of an individual's journey towards enlightenment. Here, we discuss both the creative and technical approaches to achieving the project's goals of evoking in the player the sense of undertaking a spiritual journey. |
Advanced dicing technology for semiconductor wafer - Stealth Dicing | "Stealth Dicing (SD)" was developed to solve inherent problems of the dicing process, such as debris contamination and unnecessary thermal damage to the work wafer. In SD, the power of a laser beam at a wavelength transmissible through the wafer is absorbed only around the focal point, by exploiting the temperature dependence of the wafer's absorption coefficient. The absorbed power forms a modified layer inside the wafer, which serves as the origin of separation in the subsequent separation process. Since only a limited interior region of the wafer is processed by the laser irradiation, damage and debris contamination are avoided in SD, and device characteristics are not affected. Being a completely dry process is another major advantage of SD over other dicing methods. |
Evaluation of Single-Pass Photovoltaic-Thermal Air Collector with Rectangle Tunnel Absorber | Problem statement: Photovoltaic solar cells generate electricity from solar irradiance, but they also absorb heat, which reduces their efficiency; the heat trapped in the panel becomes waste energy. Approach: The solution investigated here is to add a cooling system to the photovoltaic panel. The purpose of this study was to cool the solar cells in order to increase their electrical efficiency, and also to produce heat energy in the form of hot air, which can be used for drying applications. A single-pass PVT collector with a rectangle tunnel absorber was developed. The rectangle tunnel acts as an absorber and is located at the back of a standard photovoltaic panel. The dimensions of the photovoltaic panel were 120×53 cm. The rectangle tunnel comprised 27 tunnel bars of 1.2×2.5×120 cm (width×height×length) and 12 bars of 1.2×2.5×105.3 cm (width×height×length), connected in parallel. The PVT collector was tested using a solar simulator. Results: Electrical efficiency increased when the solar cells were cooled by the air flow. The solar photovoltaic-thermal collector with a rectangle tunnel absorber has better electrical and thermal efficiency than a collector without the absorber. Photovoltaic, thermal, and combined photovoltaic-thermal efficiencies of 10.02%, 54.70%, and 64.72%, respectively, were obtained at a solar irradiance of 817.4 W/m², a mass flow rate of 0.0287 kg/s, and an ambient temperature of 25°C. Conclusion: The hybrid photovoltaic-thermal collector with a rectangle tunnel heat absorber shows higher performance than a conventional PV/T system. |
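For reference, the combined figure is simply the sum of the two component efficiencies (10.02% + 54.70% = 64.72%), consistent with the standard PV/T definitions below; this is a hedged reconstruction, since the paper's exact notation is not given in the abstract:

```latex
% Thermal, electrical, and combined PV/T efficiencies:
% \dot{m} = air mass flow rate, c_p = specific heat of air,
% G = irradiance (W/m^2), A_c = collector area (m^2).
\eta_{th} = \frac{\dot{m}\, c_p\,(T_{out} - T_{in})}{G\, A_c}, \qquad
\eta_{pv} = \frac{I_{mp}\, V_{mp}}{G\, A_c}, \qquad
\eta_{pvt} = \eta_{th} + \eta_{pv}
```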
Urine toxicology screening in Austrian trauma patients: a prospective study | The question as to whether a patient consumed drugs prior to the trauma, and which drugs were consumed, is of prime importance for the anesthesia required during surgery. However, many patients are unwilling or unable (including those with multiple trauma or impaired consciousness, or unconscious patients) to answer this question. The purpose of our prospective multicenter study was to collect data about drug consumption in Austria, to determine whether drugs are identifiable in the urine of recently injured individuals, and to establish the types of drugs consumed. This prospective study included severely and moderately injured patients admitted to the Lorenz Boehler Trauma Hospital (Vienna, Austria), the Trauma Hospital Linz (Linz, Austria) and the Department of Trauma Surgery of the General Hospital Horn (Horn, Austria) during an 18-month period (October 2003–March 2005). All patients were suffering from injuries urgently requiring surgery. Urine samples were obtained from all patients immediately after admission and tested by immunoassay (Triage™ 8 Immuno-Assay, Biosite®, San Diego, USA). Urine samples were screened simultaneously for opiates, methadone, cocaine, barbiturates, amphetamines, cannabinoids, benzodiazepines and tricyclic antidepressants. Our prospective study included a total of 664 patients (320 from Vienna, 193 from Linz, and 151 from Horn). Six hundred and forty-two patients were moderately injured (ISS < 16), suffering mostly from injuries to the extremities (504 patients), and 22 patients were severely injured (ISS > 16). Of the 664 patients, 178 (26.8%) tested positive for one or more drugs. The drugs most commonly detected were benzodiazepines (111 patients, 16.7%), cannabinoids (39 patients, 6%), tricyclic antidepressants (28, 4.2%) and opiates (26, 3.9%). Drug use is widespread in patients presenting to urban trauma centers in Austria. Physicians should maintain a high index of suspicion that their patients may be intoxicated and should perform drug testing routinely. |
Electrical stimulation plus progressive resistance training for leg strength in spinal cord injury: A randomized controlled trial | Study design: A randomized controlled trial. Objectives: To determine the effectiveness of electrical stimulation (ES)-evoked muscle contractions superimposed on progressive resistance training (PRT) for increasing voluntary strength in the quadriceps muscles of people with spinal cord injuries (SCI). Setting: Sydney, Australia. Methods: A total of 20 people with established SCI and neurologically induced weakness of the quadriceps muscles participated in the trial. Participants were randomized between experimental and control groups. Volunteers in the experimental group received ES superimposed on PRT to the quadriceps muscles of one leg thrice weekly for 8 weeks. Participants in the control group received no intervention. Assessments occurred at the beginning and at the end of the 8-week period. The four primary outcomes were voluntary strength (Nm) and endurance (fatigue ratio) as well as the performance and satisfaction items of the Canadian Occupational Performance Measure (COPM; points). Results: The between-group mean differences (95% confidence interval (CI)) for voluntary strength and endurance were 14 Nm (1-27; P=0.034) and 0.1 (−0.1 to 0.3; P=0.221), respectively. The between-group median differences (95% CI) for the performance and satisfaction items of the COPM were 1.7 points (−0.2 to 3.2; P=0.103) and 1.4 points (−0.1 to 4.6; P=0.058), respectively. Conclusion: ES superimposed on PRT improves voluntary strength, although there is uncertainty about whether the size of the treatment effect is clinically important. The relative effectiveness of ES and PRT is yet to be determined. |
Subdivisions of auditory cortex and processing streams in primates. | The auditory system of monkeys includes a large number of interconnected subcortical nuclei and cortical areas. At subcortical levels, the structural components of the auditory system of monkeys resemble those of nonprimates, but the organization at cortical levels is different. In monkeys, the ventral nucleus of the medial geniculate complex projects in parallel to a core of three primary-like auditory areas, AI, R, and RT, constituting the first stage of cortical processing. These areas interconnect and project to the homotopic and other locations in the opposite cerebral hemisphere and to a surrounding array of eight proposed belt areas as a second stage of cortical processing. The belt areas in turn project in overlapping patterns to a lateral parabelt region with at least rostral and caudal subdivisions as a third stage of cortical processing. The divisions of the parabelt distribute to adjoining auditory and multimodal regions of the temporal lobe and to four functionally distinct regions of the frontal lobe. Histochemically, chimpanzees and humans have an auditory core that closely resembles that of monkeys. The challenge for future researchers is to understand how this complex system in monkeys analyzes and utilizes auditory information. |
Pedestrian Detection using Infrared images and Histograms of Oriented Gradients | This paper presents a complete method for pedestrian detection applied to infrared images. First, we study an image descriptor based on histograms of oriented gradients (HOG), associated with a support vector machine (SVM) classifier, and evaluate its efficiency. After tuning the HOG descriptor and the classifier, we include this method in a complete system that operates on stereo infrared images. The approach gives good results for window classification, and a preliminary test on a video sequence shows that it is very promising. |
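A minimal sketch of the HOG-plus-linear-SVM window classifier described here, using scikit-image and scikit-learn; the cell/block parameters are typical HOG defaults, not necessarily the paper's tuned values, and the training arrays are placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(windows):
    """Compute HOG descriptors for 128x64 grayscale candidate windows."""
    return np.array([
        hog(w, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), block_norm="L2-Hys")
        for w in windows
    ])

# X_train: candidate windows; y_train: 1 = pedestrian, 0 = background.
rng = np.random.default_rng(0)
X_train = rng.random((20, 128, 64))   # placeholder infrared windows
y_train = rng.integers(0, 2, 20)      # placeholder labels

clf = LinearSVC(C=1.0).fit(hog_features(X_train), y_train)
print(clf.predict(hog_features(X_train[:3])))
```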
Abnormal driving behavior detection for bus based on the Bayesian classifier | As the primary mode of transportation in urban traffic, the bus plays a significant role in road safety, and an accident could cause severe casualties. To improve the safety of bus driving, we classify the specific types of latent abnormal driving behavior, which include sudden braking, careless lane changing, quick turns, fast U-turns, and long-time parking, and propose a method to identify abnormal bus driving behavior. First, we collect acceleration, orientation, and timestamp data of bus driving through a smartphone's accelerometer and orientation sensor. After de-noising the collected data, we extract features in thirteen dimensions and train a naive Bayesian classifier, which is employed to detect and identify abnormal driving behaviors. The experimental results were compared with a support vector machine and show that the naive Bayesian classifier performs better than the support vector machine at detecting and identifying the various types of abnormal bus driving behavior, with an accuracy of 98.40%. |
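As a sketch of the classification step (the thirteen features and the training data are the paper's; the array shapes and labels below are placeholders):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# X: one row per driving segment, 13 features extracted from the
# de-noised accelerometer/orientation traces; y: behavior label
# (0 normal, 1 sudden braking, 2 careless lane change, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))        # placeholder feature matrix
y = rng.integers(0, 6, size=200)      # placeholder labels

clf = GaussianNB()
print(cross_val_score(clf, X, y, cv=5).mean())  # accuracy estimate
```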
Some aspects of variational inequalities | Abstract In this paper we provide an account of some of the fundamental aspects of variational inequalities with major emphasis on the theory of existence, uniqueness, computational properties, various generalizations, sensitivity analysis and their applications. We also propose some open problems with sufficient information and references, so that someone may attempt solution(s) in his/her area of special interest. We also include some new results, which we have recently obtained. |
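For readers new to the topic, the classical finite-dimensional variational inequality problem VI(K, F) that such surveys build on is:

```latex
% Given a closed convex set K \subseteq \mathbb{R}^n and a continuous
% mapping F : K \to \mathbb{R}^n, find x^* \in K such that
\langle F(x^*),\; y - x^* \rangle \;\ge\; 0 \qquad \text{for all } y \in K.
```

Existence, uniqueness, and computational questions for this problem, and its generalizations, are the aspects the paper surveys.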
Reduced between-hospital variation in short term survival after acute myocardial infarction: the result of improved cardiac care? | OBJECTIVES
To re-examine interhospital variation in 30 day survival after acute myocardial infarction (AMI) 10 years on to see whether the appointment of new cardiologists and their involvement in emergency care has improved outcome after AMI.
DESIGN
Retrospective cohort study.
SETTING
Acute hospitals in Scotland.
PARTICIPANTS
61,484 patients with a first AMI over two time periods: 1988-1991; and 1998-2001.
MAIN OUTCOME MEASURES
30 day survival.
RESULTS
Between 1988 and 1991, median 30 day survival was 79.2% (interhospital range 72.1-85.1%). The difference between highest and lowest was 13.0 percentage points (age and sex adjusted, 12.1 percentage points). Between 1998 and 2001, median survival rose to 81.6% (and range decreased to 78.0-85.6%) with a difference of 7.6 (adjusted 8.8) percentage points. Admission hospital was an independent predictor of outcome at 30 days during the two time periods (p < 0.001). Over the period 1988-1991, the odds ratio for death ranged, between hospitals, from 0.71 (95% confidence interval (CI) 0.58 to 0.88) to 1.50 (95% CI 1.19 to 1.89) and for the period 1998-2001 from 0.82 (95% CI 0.60 to 1.13) to 1.46 (95% CI 1.07 to 1.99). The adjusted risk of death was significantly higher than average in nine of 26 hospitals between 1988 and 1991 but in only two hospitals between 1998 and 2001.
CONCLUSIONS
The average 30 day case fatality rate after admission with an AMI has fallen substantially over the past 10 years in Scotland. Between-hospital variation has also narrowed considerably, owing to better survival in the previously poorly performing hospitals. This suggests that the greater involvement of cardiologists in the management of AMI has paid dividends. |
A k-core Decomposition Framework for Graph Clustering | Graph clustering or community detection constitutes an important task for investigating the internal structure of graphs, with a plethora of applications in several domains. Traditional techniques for graph clustering, such as spectral methods, typically suffer from high time and space complexity. In this article, we present CoreCluster, an efficient graph clustering framework based on the concept of graph degeneracy, which can be used along with any known graph clustering algorithm. Our approach capitalizes on processing the graph in a hierarchical manner provided by its core expansion sequence, an ordered partition of the graph into different levels according to the k-core decomposition. Such a partition provides an efficient way to process the graph incrementally while preserving its clustering structure, and makes the execution of the chosen clustering algorithm much faster due to the smaller size of the graph partitions on which the algorithm operates. An experimental analysis on a multitude of real and synthetic data demonstrates that our approach can be applied with any clustering algorithm to accelerate the clustering process, while the quality of the clustering structure is preserved or even improved. |
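A brief sketch of the core-decomposition step that drives the framework, using networkx; the clustering algorithm to plug in at each level is left abstract:

```python
import networkx as nx

G = nx.karate_club_graph()

# k-core decomposition: core[v] is the largest k such that v belongs
# to the k-core (the maximal subgraph of minimum degree >= k).
core = nx.core_number(G)
degeneracy = max(core.values())

# Core expansion sequence: process the graph level by level, from the
# densest core outward, feeding each increment to the chosen
# clustering algorithm incrementally.
for k in range(degeneracy, -1, -1):
    level = [v for v, c in core.items() if c == k]
    print(f"k={k}: {len(level)} nodes enter at this level")
```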
Multicriteria decision aid in Classification problems | Classification problems refer to the assignment of alternatives into predefined classes (groups, categories). Such problems often arise in several application fields. For instance, in assessing credit card applications the loan officer must evaluate the characteristics of each applicant and decide whether an application should be accepted or rejected. Similar situations are very common in fields such as finance and economics, production management (fault diagnosis), medicine, customer satisfaction measurement, database management and retrieval, etc. |
Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning | Algorithms for feature selection fall into two broad categories: wrappers, which use the learning algorithm itself to evaluate the usefulness of features, and filters, which evaluate features according to heuristics based on general characteristics of the data. For application to large databases, filters have proven to be more practical than wrappers because they are much faster. However, most existing filter algorithms only work with discrete classification problems. This paper describes a fast, correlation-based filter algorithm that can be applied to both continuous and discrete problems. The algorithm often outperforms the well-known ReliefF attribute estimator when used as a preprocessing step for naive Bayes, instance-based learning, decision trees, locally weighted regression, and model trees. It performs more aggressive feature selection than ReliefF, reducing the data dimensionality by fifty percent in most cases. Also, decision and model trees built from the preprocessed data are often significantly smaller. |
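The merit heuristic at the core of CFS, as formulated in Hall's work (the abstract itself does not state it, so treat this as a reconstruction), rewards subsets whose features correlate with the class but not with each other:

```python
import numpy as np

def cfs_merit(k, mean_rcf, mean_rff):
    """Merit of a k-feature subset: high mean feature-class
    correlation (mean_rcf), low mean feature-feature
    inter-correlation (mean_rff)."""
    return (k * mean_rcf) / np.sqrt(k + k * (k - 1) * mean_rff)

# Redundancy lowers merit even when relevance is identical:
print(cfs_merit(3, 0.6, 0.1))   # ~0.95: informative, non-redundant
print(cfs_merit(3, 0.6, 0.8))   # ~0.64: same relevance, redundant
```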
WCL2R: A Benchmark Collection for Learning to Rank Research with Clickthrough Data | In this paper we present WCL2R, a benchmark collection for supporting research in learning to rank (L2R) algorithms which exploit clickthrough features. Unlike other L2R benchmark collections, such as LETOR and the recently released Yahoo! collection for an L2R competition, WCL2R focuses on defining a significant (and new) set of features over clickthrough data extracted from the logs of a real-world search engine. In this paper, we describe the WCL2R collection by providing details about how the corpora, queries and relevance judgments were obtained, how the learning features were constructed, and how the collection was split into folds for representative learning. We also analyze the discriminative power of the WCL2R collection using traditional feature selection algorithms and show that the most discriminative features are, in fact, those based on clickthrough data. We then compare several L2R algorithms on WCL2R, showing that all of them obtain significant gains by exploiting clickthrough information over traditional ranking approaches. |
Flexible intrinsic evaluation of hierarchical clustering for TDT | The Topic Detection and Tracking (TDT) evaluation program has included a "cluster detection" task since its inception in 1996. Systems were required to process a stream of broadcast news stories and partition them into non-overlapping clusters. A system's effectiveness was measured by comparing the generated clusters to "truth" clusters created by human annotators. Starting in 2003, TDT is moving to a more realistic model that permits overlapping clusters (stories may be on more than one topic) and encourages the creation of a hierarchy to structure the relationships between clusters (topics). We explore a range of possible evaluation models for this modified TDT clustering task to understand the best approach for mapping between the human-generated "truth" clusters and a much richer hierarchical structure. We demonstrate that some obvious evaluation techniques fail for degenerate cases. For a few others we attempt to develop an intuitive sense of what the evaluation numbers mean. We settle on some approaches that incorporate a strong balance between cluster errors (misses and false alarms) and the distance it takes to travel between stories within the hierarchy. |
Social Attitudes of AI Rebellion: A Framework | Human attitudes of objection, protest, and rebellion have undeniable potential to bring about social benefits, from social justice to healthy balance in relationships. At times, they can even be argued to be ethically obligatory. Conversely, AI rebellion is largely seen as a dangerous, destructive prospect. With the increase of interest in collaborative human/AI environments in which synthetic agents play social roles or, at least, exhibit behavior with social and ethical implications, we believe that AI rebellion could have benefits similar to those of its counterpart in humans. We introduce a framework meant to help categorize and design Rebel Agents, discuss their social and ethical implications, and assess their potential benefits and the risks they may pose. We also present AI rebellion scenarios in two considerably different contexts (military unmanned vehicles and computational social creativity) that exemplify components of the framework. Society, Ethics, and AI Rebellion In human social contexts, attitudes of resistance, objection, protest, and rebellion are not necessarily destructive and antisocial; they serve a variety of fundamentally positive, constructive social functions. At a macro-societal level, protest can support social justice. At a micro level, saying “no” in a constructive way can help maintain healthy balance in personal and professional relationships (Ury, 2007). In many cases, rebellious attitudes are arguably not merely acceptable, but ethically obligatory, e.g. an engineer refusing to continue working on a project if a number of safety issues are not addressed. In contrast, AI rebellion is generally perceived as being fundamentally destructive: not just antisocial, but antihuman, a narrative reinforced by numerous sci-fi depictions in which AI follows in the footsteps of various mythical creatures to play the part of the ominous “other”. Such manifestations of rebellion are generally attributed to post-singularity AI with mysterious but decidedly dangerous inner workings. We believe that AI attitudes of constructive rebellion can in many ways contribute to “maximizing the societal benefit of AI”, an AI research priority expressed by Russell, Dewey, and Tegmark (2015), by enabling refusal of unethical behavior, supporting value alignment with human groups (e.g., through protest on behalf of humans), maintaining safety, supporting task execution correctness, enhancing social co-creativity, and providing or supporting diverse points of view. As we will show through two scenarios and various smaller examples, such instances of AI rebellion neither require human-level intelligence or superintelligence nor involve rebelling against humanity as a whole. We are especially interested in collaborative, human-AI interaction environments, such as the long-term collaborations envisioned by Wilson, Arnold, and Scheutz (2016). In such contexts, AI rebellion has benefits comparable to those it has in human social contexts and associated risks pertaining to the maintenance of the long-term collaborations. The two scenarios that we present are drawn from the fields of (1) military unmanned vehicles and (2) computational social creativity. The first scenario is based on preexisting work of established practical interest, while the second is largely speculative.
To facilitate this discussion, we define AI Rebel Agents and propose an initial framework for their study. A reduced version of this framework is described in (Aha and Coman, 2017). Rebel Agents are AI agents that can develop attitudes of opposition to goals or courses of action assigned to them by other agents, or to the general behavior of other agents. These attitudes can result in resistance, objection, and/or refusal to carry out tasks, or in challenging the attitudes or behaviors of other agents. We use “rebellion” as an umbrella term covering reluctance, protest, refusal, rejection of tasks, and similar stances/behaviors. We call an agent against which one rebels an Interactor. We assume that the Interactor is in a position of power in relation to the Rebel Agent; the source(s) and nature of that power can vary. The Interactor can be human or synthetic, an individual or a group. A Rebel Agent is not intended to be permanently adversarial towards the Interactor or in a rebelling state by default. A Rebel Agent has potential for rebellion that may or may not manifest based on external and internal conditions. An AI agent can be specifically designed to be a Rebel Agent (rebel by design), but rebellious behavior can also emerge unintendedly from the agent’s autonomy model (emergent rebellion). Our proposed framework for AI rebellion includes types of rebellion and stages of the rebellion process. The framework is applicable to both types of rebellion introduced above: (1) it can be used to guide the development and implementation of intentionally Rebel Agents, and (2) to categorize and study the rebellion potential and ramifications of emergent rebels (including their dangerous AI potential: while we argue that AI rebellion can, in certain instances, be positive and beneficial, we do not claim that it is necessarily so). The framework also (3) facilitates discussion of social and ethics-related aspects and implications of rebellion: we demonstrate this by examining dimensions of AI rebellion with strong social implications (emotion and social capital: particularly, trust) and by including ethics-related questions pertaining to the framework throughout the paper. Our framework is meant to be generally applicable to AI agents, with no restrictions on agent architecture, paradigm, purpose, deployment context, or other factors. However, the type and features of the agent will affect how it instantiates the components of the framework. |
Propofol sedation during awake craniotomy for seizures: patient-controlled administration versus neurolept analgesia. | This prospective study evaluated the safety and efficacy of patient-controlled sedation (PCS) using propofol during awake seizure surgery performed under bupivacaine scalp blocks. Thirty-seven patients were randomized to receive either propofol PCS combined with a basal infusion of propofol (n = 20) or neurolept analgesia using an initial bolus dose of fentanyl and droperidol followed by a fentanyl infusion (n = 17). Both groups received supplemental fentanyl and dimenhydrinate for intraoperative pain and nausea, respectively. Comparisons were made between groups for sedation, memory, and cognitive function, patient satisfaction, and incidence of complications. Levels of intraoperative sedation and patient satisfaction were similar between groups. Memory and cognitive function were well preserved in both groups. The incidence of transient episodes of ventilatory rate depression (<8 bpm) was more frequent among the propofol patients (5 vs 0, P = 0.04), particularly after supplemental doses of opioid. Intraoperative seizures were more common among the neurolept patients (7 vs 0, P = 0.002). PCS using propofol represents an effective alternative to neurolept analgesia during awake seizure surgery performed in a monitored care environment. |
The Effects of Intervention Using a Robot on the Levels of Compliance of Four Children with Autism to Requests Given by Their Mothers | The current study presents the use of a humanoid robot to facilitate compliant behaviors to two types of directives in four children with autism. The children participated in a three-month intervention program that incorporated core components of the SCERTS model in order to facilitate social communication (Prizant, Wetherby, Rubin, & Laurent, 2003). Treatment sessions were comprised of 40 minutes of traditional treatment and 10 minutes of interaction with a humanoid robot. Pre- and post-intervention assessments were conducted, each involving a 5-minute interaction with the child’s mother, in which the child was presented with directives in the form of physical manipulation and verbal requests accompanied by a gesture or model. These pre- and post-intervention sessions were recorded, analyzed, and coded for compliant and noncompliant behavior to the directives, as well as any eye contact, language, or reciprocal action that accompanied the behavior. The overall results were variable, revealing that two participants made notable gains, one child remained consistent, and another participant showed a decrease in compliant behavior in the post-intervention sessions. Further research should be conducted to include a longer period of baseline and intervention, more systematic identification of the most effective probes for the child, and documentation of the child’s physical and emotional state. |
Perceived Helpfulness of Online Consumer Reviews: The Role of Message Content and Style | The rise of online reviews written by consumers makes possible an examination of how the content and style of these word-of-mouth messages contribute to their helpfulness. In this study, consumers are asked to judge the value of real online consumer reviews to their simulated shopping activities. The results suggest the benefits of moderate review length and of positive, but not negative, product evaluative statements. Non-evaluative product information and information about the reviewer were also found to be associated with review helpfulness. Stylistic elements that may impair clarity (such as spelling and grammatical errors) were associated with less valuable reviews, and elements that may make a review more entertaining (such as expressive slang and humor) were associated with more valuable reviews. These findings point to factors beyond product information that may affect the perceived helpfulness of an online consumer review. Word-of-mouth (WOM) communication—the exchange of information about goods and services among consumers—has long been recognized as a valued and influential source of consumer information (e.g., Whyte, 1954). The Internet has dramatically increased WOM communication, particularly in the form of consumer reviews on retailing websites. As of 2004, it was estimated that there were over 10 million consumer reviews on Amazon.com alone (Harmon, 2004). The growth in online consumer reviews has been motivated by consumer interest in such reviews (Kumar and Benbasat, 2006) and has benefitted online retailers with increased customer loyalty and lower costs, such as for returned products (Voight, 2007). It has been proposed that consumers who post reviews serve as “sales assistants” for online retailers (Chen and Xie, 2008), who provide other consumers with useful information, and who contribute to other consumers’ satisfaction with the shopping experience. The goal of the study reported here was to better understand the factors that make online reviews appealing to consumers. Although there is recent research on the effects of consumer reviews on product sales (e.g., Godes and Mayzlin, 2004; Chevalier and Mayzlin, 2006; Liu, 2006; Forman et al., 2008; Zhu and Zhang, 2010), we focus on the characteristics of online reviews that shoppers find helpful and of value. Although there is older research on “source effects” (e.g., Hovland and Weiss, 1951; McGuire, 1969; Brown and Reingen, 1987; Wilson and Sherrell, 1993), we take advantage of the fact that this modern consumer WOM is expressed in written form and look at the wording of consumer reviews rather than their source. We divide wording factors into two categories, content and style. Greater understanding of these factors can help guide managers of retail websites toward encouraging reviews that their patrons will find more useful and may also shed some light on the nature of the long-acknowledged power of WOM communication. CONCEPTUAL FRAMEWORK The distinction between the content and the style of a message is one that is well established in the field of communication research (e.g., Norton, 1978).
The content-versus-style distinction has been used in studying the effectiveness of personal selling communications (Williams and Spiro, 1985; Dion and Notarantonio, 1992), in communications regarding product information (Moon, 2002), and in understanding how people use electronic communications such as email (Colley and Todd, 2002). For the purposes of the present study, we define the content of an online review as the information it provides. Its style, by contrast, involves the choice of words the reviewer uses to express this information. |
Multi-objective Architecture Search for CNNs | Architecture search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have come close to matching the predictive performance of manually designed architectures for image recognition, these approaches are problematic under constrained resources for two reasons: first, the architecture search itself requires vast computational resources for most proposed methods; second, the found neural architectures are solely optimized for high predictive performance, without penalizing excessive resource consumption. We address the first shortcoming by proposing NASH, an architecture search method which considerably reduces the computational resources required for training novel architectures by applying network morphisms and aggressive learning rate schedules. On CIFAR-10, NASH finds architectures with errors below 4% in only 3 days. We address the second shortcoming by proposing Pareto-NASH, a method for multi-objective architecture search that approximates the Pareto front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. Within 56 GPU days of architecture search, Pareto-NASH finds a model with 4M parameters and a test error of 3.5%, as well as a model with less than 1M parameters and a test error of 4.6%. |
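To make the multi-objective selection concrete, here is a minimal sketch of extracting the Pareto front from candidate architectures scored by (test error, parameter count), both minimized; the candidate list below is illustrative only:

```python
def pareto_front(candidates):
    """candidates: list of (test_error, n_params) pairs, both minimized.
    Returns the non-dominated subset (the approximated Pareto front)."""
    def dominates(a, b):
        # a dominates b if it is no worse in both objectives and differs.
        return a[0] <= b[0] and a[1] <= b[1] and a != b
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates)]

# E.g. the two models reported in the abstract plus a dominated one:
models = [(0.035, 4_000_000), (0.046, 900_000), (0.050, 5_000_000)]
print(pareto_front(models))   # the third model is dominated by the first
```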
Per-Input Control-Flow Integrity | Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges.
We present Per-Input Control-Flow Integrity (PICFI), a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with an empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain which edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead. |
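The enforcement itself lives in compiler-inserted instrumentation; purely as a conceptual illustration (in Python, not the paper's implementation), the lazy, monotone edge-activation policy constrained by the static all-input CFG looks like this:

```python
class PerInputCFG:
    """Conceptual model of PICFI's policy: start from an empty CFG and
    lazily activate edges, but only those in the static all-input CFG."""

    def __init__(self, static_edges):
        self.static_edges = set(static_edges)  # computed offline
        self.active_edges = set()              # grows monotonically

    def activate(self, edge):
        # Idempotent; in the real system the instrumentation that does
        # this is patched to a no-op after its first execution.
        if edge not in self.static_edges:
            raise RuntimeError(f"illegal edge addition: {edge}")
        self.active_edges.add(edge)

    def check_transfer(self, edge):
        if edge not in self.active_edges:
            raise RuntimeError(f"CFI violation: {edge}")

cfg = PerInputCFG({("main", "parse"), ("parse", "handle_a")})
cfg.activate(("main", "parse"))
cfg.check_transfer(("main", "parse"))        # allowed for this input
# cfg.check_transfer(("parse", "handle_a"))  # would fail: not activated
```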
TSum: fast, principled table summarization | Given a table where rows correspond to records and columns correspond to attributes, we want to find a small number of patterns that succinctly summarize the dataset. For example, given a set of patient records with several attributes each, how can we find that the "most representative" pattern is, say, (male, adult, *), followed by (*, child, low-cholesterol), etc.? We propose TSum, a method that provides a sequence of patterns ordered by their "representativeness." It can decide both which these patterns are and how many are necessary to properly summarize the data. Our main contribution is formulating a general framework, TSum, using compression principles. TSum can easily accommodate different optimization strategies for selecting and refining patterns. The discovered patterns can be used both to represent the data efficiently and to interpret it quickly. Extensive experiments demonstrate the effectiveness and intuitiveness of our discovered patterns. |
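The abstract does not give the exact cost model; as a rough illustration of the compression principle (our own toy cost model, not TSum's), a pattern is worth keeping when the attribute values it fixes across the records it covers outweigh the cost of stating the pattern:

```python
def pattern_savings(pattern, records):
    """Toy MDL-style score: cells explained by the pattern, minus the
    cost of describing the pattern (one unit per fixed attribute).
    '*' is a wildcard matching any value."""
    fixed = [i for i, v in enumerate(pattern) if v != "*"]
    covered = [r for r in records
               if all(r[i] == pattern[i] for i in fixed)]
    return len(covered) * len(fixed) - len(fixed)

records = [("male", "adult", "high"), ("male", "adult", "low"),
           ("female", "child", "low"), ("male", "child", "low")]
print(pattern_savings(("male", "adult", "*"), records))   # 2*2 - 2 = 2
print(pattern_savings(("*", "child", "low"), records))    # 2*2 - 2 = 2
```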
Development of Novel Non-Contact Electrodes for Mobile Electrocardiogram Monitoring System | Real-time monitoring of cardiac health is helpful for patients with cardiovascular disease. Many telemedicine systems based on ubiquitous computing and communication techniques have been proposed for monitoring the user's electrocardiogram (ECG) anywhere and anytime. Usually, wet electrodes are used in these telemedicine systems. However, wet electrodes require conduction gels and skin preparation that can be inconvenient and uncomfortable for users. In order to overcome this issue, a new non-contact electrode circuit was proposed and applied in developing a mobile electrocardiogram monitoring system. The proposed non-contact electrode can measure bio-potentials across thin clothing, allowing it to be embedded in a user's normal clothing to monitor ECG in daily life. We attempted to simplify the design of these non-contact electrodes to reduce power consumption while continuing to provide good signal quality. The electrical specifications and the performance of monitoring arrhythmia in clinical settings were also validated to investigate the reliability of the proposed design. Experimental results show that the proposed non-contact electrode provides good signal quality for measuring ECG across thin clothes. |
Assistance-On-Demand: a Speech-Based Assistance System for Urban Intersections | We evaluated a system to support the driver in urban intersections (called "Assistance on Demand" AoD system). The system is controlled via speech and supports the driver in monitoring and decision making by providing recommendations for suitable time gaps to enter the intersection. This speech-based control of the system allows the implementation of an "on-demand"-concept where the driver can activate the assistance only if he desires support. 24 drivers took part in the study, performing three drives each. A comparison with a manual condition without system support showed that the AoD system was highly accepted, decreased workload and facilitated monitoring of traffic. In addition, subjective acceptance highly correlated with objective acceptance measured by the actual usage of the system. This clearly justifies the "on-demand"-concept. |
The silent fading of an academic search engine: the case of Microsoft Academic Search | The goal of this working paper is to summarize the main empirical evidence provided by the scientific community as regards the comparison between the two main citation-based academic search engines: Google Scholar (GS) and Microsoft Academic Search (MAS), paying special attention to the following issues: coverage; correlations between journal rankings; and usage of these academic search engines. Additionally, self-compiled data are offered to provide current evidence about the popularity of these tools on the Web, measuring the number of rich files (PDF, PPT and DOC) in which these tools are mentioned, the number of external links that both products receive, and search-query frequency from Google Trends. The poor results obtained by MAS led us to an unexpected and previously unnoticed discovery: Microsoft Academic Search has not been updated since 2013. Therefore, the second part of the working paper presents data demonstrating this lack of updating. For this purpose we gathered the number of total records indexed by MAS since 2000. The data show an abrupt drop in the number of documents indexed, from 2,346,228 in 2010 to 8,147 in 2013. This decrease is also broken down by 15 thematic areas. In view of these problems it seems logical that MAS was not only rarely used by academics and students to search for articles (who mostly use Google or Google Scholar), but also virtually ignored by bibliometricians. |
Mosaic: quantifying privacy leakage in mobile networks | With the proliferation of online social networking (OSN) and mobile devices, preserving user privacy has become a great challenge. While prior studies have focused directly on OSN services, we call attention to the privacy leakage in mobile network data. This concern is motivated by two factors. First, the prevalence of OSN usage leaves identifiable digital footprints that can be traced back to users in the real world. Second, the association between users and their mobile devices makes it easier to associate traffic with its owners. These pose a serious threat to user privacy, as they enable an adversary to attribute significant portions of data traffic, including portions with no identity leaks, to network users' true identities. To demonstrate the feasibility of such attacks, we develop the Tessellation methodology. By applying Tessellation to traffic from a cellular service provider (CSP), we show that up to 50% of the traffic can be attributed to the names of users. In addition to revealing user identities, the reconstructed profile, dubbed a "mosaic," associates personal information such as political views, browsing habits, and favorite apps with the users. We conclude by discussing approaches for preventing and mitigating the alarming leakage of sensitive user information. |
The anti-IgE antibody omalizumab improves asthma-related quality of life in patients with allergic asthma. | The aim of the present study was to determine the effect of treatment with omalizumab, an anti-immunoglobulin E antibody, on asthma-related quality of life (AQoL) in patients with moderate-to-severe allergic asthma. A total of 546 patients with allergic asthma were randomised to double-blind subcutaneous treatment with either placebo or omalizumab for 52 weeks. A constant beclomethasone dipropionate dose was maintained during the first 16 weeks (steroid-stable phase). This was followed by a 12-week steroid-reduction phase. The core study was followed by a 24-week double-blind extension phase. AQoL was evaluated at baseline and at the end of the steroid-stable (week 16), steroid-reduction (week 28) and extension phases (week 52) using the Juniper Asthma Quality of Life Questionnaire (AQLQ). Baseline AQLQ scores were comparable for the two treatment groups. Relative to placebo, omalizumab-treated patients demonstrated statistically significant improvements from baseline across all four AQLQ domains, as well as overall AQoL score, at weeks 16 (except environmental exposure), 28 and 52. Patients on omalizumab were also more likely to achieve clinically significant improvements in AQoL during the course of the study. Overall, almost 70% of patients and investigators rated treatment with omalizumab as "excellent/good", compared with approximately 40% of placebo recipients. Clinical studies show that omalizumab enhances disease control whilst reducing corticosteroid consumption in patients with allergic asthma. The results of the present study show that these changes are paralleled by improvements in asthma-related quality of life that are meaningful to such patients. |
Semi-Supervised Sequential Labeling and Segmentation Using Giga-Word Scale Unlabeled Data | This paper provides evidence that the use of more unlabeled data in semi-supervised learning can improve the performance of Natural Language Processing (NLP) tasks, such as part-of-speech tagging, syntactic chunking, and named entity recognition. We first propose a simple yet powerful semi-supervised discriminative model appropriate for handling large scale unlabeled data. Then, we describe experiments performed on widely used test collections, namely, PTB III data, CoNLL’00 and ’03 shared task data for the above three NLP tasks, respectively. We incorporate up to 1G-words (one billion tokens) of unlabeled data, which is the largest amount of unlabeled data ever used for these tasks, to investigate the performance improvement. In addition, our results are superior to the best reported results for all of the above test collections. |
The effect of dexamethasone on the longevity of syringe driver subcutaneous sites in palliative care patients. | OBJECTIVE
To assess the effect of adding 1 mg dexamethasone to syringe drivers on the viability time of subcutaneous cannulation sites in palliative care patients.
DESIGN
Prospective, double-blind, randomised, controlled trial in which patients received half their daily infused medications plus 1 mg dexamethasone in 1 mL saline through one subcutaneous site (test site) and the other half of their medications plus 1 mL saline through another symmetrically placed site (control site).
PARTICIPANTS AND SETTING
Palliative care patients from the inpatient units at two hospices, recruited between 1999 and 2002.
MAIN OUTCOME MEASURE
Difference in time that the test and control sites remained viable.
RESULTS
38 patients consented and were randomised. Twenty did not complete the trial because their participation in the study finished before either site broke down. Eighteen patients either partially completed (at least one site broke down) or fully completed (both sites broke down) the trial. In these 18 patients, test sites lasted 3.6 days longer than control sites (95% CI, 1.5-5.8 days; P = 0.002). Twelve patients fully completed the trial. In this group, test sites lasted 3.9 days longer than control sites (95% CI, 0.6-7.2 days; P = 0.025).
CONCLUSIONS
The addition of 1 mg dexamethasone to syringe drivers significantly extends the viability time of subcutaneous cannulation sites in palliative care patients. |
Is capillary ketone determination useful in clinical practice? In which circumstances? | A new method is now available to measure capillary levels of 3-hydroxybutyrate (3HB), one of the three ketone bodies. It is a quantitative, enzymatic test that uses the same equipment as home capillary blood glucose determination but with specific strips. In comparison with the urine ketone test, there are no false-negative or false-positive results, it correlates highly with standard automated assays, and patients find it more acceptable. Clinical implementations of this new test are beginning to be reported. Some studies have shown an advantage of ketonemia over ketonuria measurement for detecting and treating diabetic ketoacidosis in the emergency room. In diabetic patients treated with continuous subcutaneous insulin infusion, ketonemia seems to be more relevant for detecting lack of insulin. In the routine care of patients with type 1 diabetes, and especially in children, the blood ketone test is more effective than the urine ketone test in preventing hospitalisation during sick days. For other situations such as diabetic pregnancy or type 2 diabetes, more data are needed to determine whether capillary measurement of 3HB is really useful. This new test is easier and less unpleasant than urine testing, but it is still far more expensive. Further clinical studies are needed to define whether self-monitoring of 3HB should replace urine testing in outpatient care. |
Coco: Runtime Reasoning about Conflicting Commitments | To interact effectively, agents must enter into commitments. What should an agent do when these commitments conflict? We describe Coco, an approach for reasoning about which specific commitments apply to specific parties in light of general types of commitments, specific circumstances, and dominance relations among specific commitments. Coco adapts answer-set programming to identify a maximal set of nondominated commitments. It provides a modeling language and tool geared to support practical applications. |
Text summarization using rough sets | Text summarization aims to generate a concise and compressed form of original documents. The techniques used for text summarization may be categorized as extractive summarization and abstractive summarization. We consider extractive techniques, which are based on the selection of important sentences within a document. A major issue in extractive summarization is how to select important sentences, i.e., what criteria should be defined for the selection of sentences that eventually become part of the summary. We examine this issue using the rough-set notion of reducts. A reduct is an attribute subset that essentially contains the same information as the original attribute set. In particular, we define and examine three types of matrices based on an information table, namely, the discernibility matrix, the indiscernibility matrix and the equal-to-one matrix. Each of these matrices represents a certain type of relationship between the objects of an information table. Three types of reducts are determined based on these matrices. The reducts are used to select sentences and consequently generate text summaries. Experimental results and comparisons with existing approaches advocate the use of the proposed approach in generating text summaries. |
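The reduct machinery this abstract relies on can be made concrete. Below is a minimal, self-contained Python sketch, not the authors' summarizer: a toy information table (rows standing in for sentences, columns for sentence features), its discernibility matrix, and a simple greedy heuristic for finding one reduct. All attribute names and values are invented for illustration.

    # Minimal sketch of a rough-set discernibility matrix and a greedy reduct,
    # illustrating the general technique; not the authors' summarizer.
    from itertools import combinations

    # Toy information table: each row is an object (e.g., a sentence),
    # columns are attribute values (e.g., sentence features).
    table = [
        {"len": "long",  "pos": "first", "kw": "yes"},
        {"len": "short", "pos": "mid",   "kw": "yes"},
        {"len": "long",  "pos": "mid",   "kw": "no"},
        {"len": "short", "pos": "last",  "kw": "no"},
    ]
    attrs = ["len", "pos", "kw"]

    # Discernibility matrix: for each object pair, the attributes on which they differ.
    disc = {
        (i, j): {a for a in attrs if table[i][a] != table[j][a]}
        for i, j in combinations(range(len(table)), 2)
    }

    # Greedy reduct: repeatedly pick the attribute covering most still-undiscerned pairs.
    uncovered = {p for p, s in disc.items() if s}
    reduct = set()
    while uncovered:
        best = max(attrs, key=lambda a: sum(a in disc[p] for p in uncovered))
        reduct.add(best)
        uncovered = {p for p in uncovered if not (disc[p] & reduct)}
    print("reduct:", reduct)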
The impact of financial services and managerial Training on Costa Rica SMEs | This paper estimates the impact that financial services other than credit, and short-term training courses, have on real sales, number of employees, degree of formalization, and access to credit in the formal financial system for a group of micro, small, and medium-sized enterprises that are clients of a microfinance institution operating in Costa Rica. To this end, a five-year panel of data (2006 to 2010) and econometric models that control for both observable and unobservable firm attributes affecting the outcome variables were used. The results show that firms that received financial services other than credit increased their sales and employment more, and also improved their degree of formalization (although the latter result is statistically weak), when compared with firms that did not have access to this type of financing. Moreover, participation and compliance guarantees appear to be the financial instrument with the largest positive impact on firm performance. On the other hand, no evidence was found that short training courses had any impact on firm performance. Based on these findings, several policy recommendations are made. |
Using facebook after losing a job: differential benefits of strong and weak ties | Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the "strength of weak ties" hypothesis, communication with strong ties is more predictive of finding employment within three months. |
On the Recognition of Printed Characters of Any Font and Size | We describe the current state of a system that recognizes printed text of various fonts and sizes for the Roman alphabet. The system combines several techniques in order to improve the overall recognition rate. Thinning and shape extraction are performed directly on a graph of the run-length encoding of a binary image. The resulting strokes and other shapes are mapped, using a shape-clustering approach, into binary features which are then fed into a statistical Bayesian classifier. Large-scale trials have shown better than 97 percent top choice correct performance on mixtures of six dissimilar fonts, and over 99 percent on most single fonts, over a range of point sizes. Certain remaining confusion classes are disambiguated through contour analysis, and characters suspected of being merged are broken and reclassified. Finally, layout and linguistic context are applied. The results are illustrated by sample pages. |
Binary Histogram in Image Classification for Retrieval Purposes | Image retrieval can be considered a classification problem. Classification is usually based on image features, and image segmentation is commonly used in feature extraction. In this paper we introduce a new feature for image classification for retrieval purposes. This feature is based on the gray-level histogram of the image. The feature is called the binary histogram, and it can be used for image classification without segmentation. The binary histogram can be used for image retrieval directly, via similarity calculation; another approach is to extract further features from it. In both cases, indexing and retrieval do not require much computational time. We test the similarity measurement and the feature-based retrieval through classification experiments. The proposed features are tested on a set of paper-defect images acquired from an industrial imaging application. |
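The abstract does not spell out how the binary histogram is computed, so the following Python sketch encodes one plausible reading, purely as an assumption: each gray-level bin is binarized against the mean bin count, and images are compared by the fraction of matching bins. Random arrays stand in for the paper-defect images; the paper's exact definition may differ.

    # Hedged sketch: one plausible "binary histogram" feature, assuming each
    # gray-level bin is binarized against the mean bin count. The paper's
    # exact definition may differ.
    import numpy as np

    def binary_histogram(img, bins=64):
        h, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
        return (h > h.mean()).astype(np.uint8)   # 1 = bin above average mass

    def similarity(b1, b2):
        return (b1 == b2).mean()                 # fraction of matching bins

    # Two random 8-bit "images" stand in for paper-defect images.
    rng = np.random.default_rng(0)
    a, b = rng.integers(0, 256, (2, 128, 128))
    print(similarity(binary_histogram(a), binary_histogram(b)))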
Contextual Gamification of Social Interaction - Towards Increasing Motivation in Social E-learning | In current e-learning studies, one of the main challenges is to keep learners motivated in performing desirable learning behaviours and achieving learning goals. Towards tackling this challenge, social e-learning contributes favourably, but it requires solutions that can reduce side effects, such as abusing social interaction tools for ‘chitchat’, and further enhance learner motivation. In this paper, we propose a set of contextual gamification strategies, which apply flow and self-determination theory for increasing intrinsic motivation in social e-learning environments. This paper also presents a social e-learning environment that applies these strategies, followed by a user case study, which indicates increased learners’ perceived intrinsic motivation. |
Accurate Geo-Registration by Ground-to-Aerial Image Matching | We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground to aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy. |
Vertex Bisection is Hard, too | We settle an open problem mentioned in Diaz, Petit, and Serna: A survey of graph layout problems (ACM Computing Surveys 34:313-356, 2002). Of eight objectives considered in that survey, only the complexity status of minimum vertex bisection is listed as unknown. We show that both minimum and maximum vertex bisection are NP-hard, but polynomially solvable on special graph classes such as hypercubes and trees. |
The TUM Kitchen Data Set of everyday manipulation activities for motion tracking and action recognition | We introduce the publicly available TUM Kitchen Data Set as a comprehensive collection of activity sequences recorded in a kitchen environment equipped with multiple complementary sensors. The recorded data consists of observations of naturally performed manipulation tasks as encountered in everyday activities of human life. Several instances of a table-setting task were performed by different subjects, involving the manipulation of objects and the environment. We provide the original video sequences, full-body motion capture data recorded by a markerless motion tracker, RFID tag readings and magnetic sensor readings from objects and the environment, as well as corresponding action labels. In this paper, we describe both how the data were computed, in particular the motion tracking and the labeling, and give examples of what the data can be used for. We present first results of an automatic method for segmenting the observed motions into semantic classes, and describe how the data can be integrated in a knowledge-based framework for reasoning about the observations. |
Passive geolocation and tracking of an unknown number of emitters | An algorithm for the geolocation and tracking of an unknown number of ground emitters using time difference of arrival (TDOA) measurements in practical scenarios is proposed. The focus is on solving the important issue of data association, i.e., deciding from which target, if any, a measurement originated. A previous solution for data association in passive measurement tracking systems, based on the assignment formulation, relied on solving two assignment problems: an S-dimensional (or SD, where S ≥ 3) assignment for association across sensors and a 2D assignment for measurement-to-track association. In this paper, an (S + 1)D assignment algorithm, an extension of the SD assignment formulation that performs the data association in one step, is introduced. It is shown that the (S + 1)D assignment formulation reduces the computational cost significantly without compromising tracking accuracy. The incorporation of correlated measurements, as is the case with TDOA measurements, into the SD framework, which typically assumes uncorrelated measurements, is also discussed. The nonlinear TDOA equations are posed as an optimization problem and solved using SolvOpt, a nonlinear optimization solver. The interacting multiple model (IMM) estimator is used in conjunction with the unscented Kalman filter (UKF) to track the geolocated emitters. |
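As a concrete illustration of the geolocation step only, the sketch below fits a single emitter position to noisy range-difference (TDOA) measurements by nonlinear least squares. It deliberately omits the paper's core contributions (the (S + 1)D data association and IMM/UKF tracking) and uses scipy instead of SolvOpt; the sensor layout and noise level are invented.

    # Minimal sketch of the TDOA geolocation step only (no data association,
    # no IMM/UKF tracking): least-squares fit of an emitter position to
    # range-difference measurements. Uses scipy instead of the paper's SolvOpt.
    import numpy as np
    from scipy.optimize import least_squares

    sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    true_pos = np.array([3.0, 7.0])

    def range_diffs(p):
        d = np.linalg.norm(sensors - p, axis=1)
        return d[1:] - d[0]      # TDOAs (times propagation speed) vs. sensor 0

    meas = range_diffs(true_pos) + np.random.default_rng(1).normal(0, 0.01, 3)
    fit = least_squares(lambda p: range_diffs(p) - meas, x0=np.array([5.0, 5.0]))
    print("estimated emitter position:", fit.x)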
Switching to second-line antiretroviral therapy in resource-limited settings: comparison of programmes with and without viral load monitoring. | BACKGROUND
In high-income countries, viral load is routinely measured to detect failure of antiretroviral therapy (ART) and guide switching to second-line ART. Viral load monitoring is not generally available in resource-limited settings. We examined switching from nonnucleoside reverse transcriptase inhibitor (NNRTI)-based first-line regimens to protease inhibitor-based regimens in Africa, South America and Asia.
DESIGN AND METHODS
Multicohort study of 17 ART programmes. All sites monitored CD4 cell count and had access to second-line ART and 10 sites monitored viral load. We compared times to switching, CD4 cell counts at switching and obtained adjusted hazard ratios for switching (aHRs) with 95% confidence intervals (CIs) from random-effects Weibull models.
RESULTS
A total of 20 113 patients, including 6369 (31.7%) patients from 10 programmes with access to viral load monitoring, were analysed; 576 patients (2.9%) switched. Low CD4 cell counts at ART initiation were associated with switching in all programmes. Median time to switching was 16.3 months [interquartile range (IQR) 10.1-26.6] in programmes with viral load monitoring and 21.8 months (IQR 14.0-21.8) in programmes without viral load monitoring (P < 0.001). Median CD4 cell counts at switching were 161 cells/microl (IQR 77-265) in programmes with viral load monitoring and 102 cells/microl (44-181) in programmes without viral load monitoring (P < 0.001). Switching was more common in programmes with viral load monitoring during months 7-18 after starting ART (aHR 1.38; 95% CI 0.97-1.98), similar during months 19-30 (aHR 0.97; 95% CI 0.58-1.60) and less common during months 31-42 (aHR 0.29; 95% CI 0.11-0.79).
CONCLUSION
In resource-limited settings, switching to second-line regimens tends to occur earlier and at higher CD4 cell counts in ART programmes with viral load monitoring compared with programmes without viral load monitoring. |
Effects of international health electives on medical student learning and career choice: results of a systematic literature review. | BACKGROUND AND OBJECTIVES
The present study reviewed the published literature to examine the effects of international health electives (IHEs) on medical student learning and career choice.
METHODS
A systematic literature review was conducted to identify key English-language articles on IHEs, using PubMed journal databases for the period 1990-2009. Article inclusion for this review was vetted by a rigorous evaluation of each article's study methods, content, and data quality. Pooled or aggregate information from 11 key articles, including information on type and duration of IHE, study and comparison group characteristics, and measured outcomes such as self-reported changes in cultural competency, clinical skills, and specialty choice, were extracted and summarized.
RESULTS
Findings suggest that having IHE experiences contributed to a more well-rounded training for medical students; students reported being more culturally competent and were more likely to choose a primary care specialty and/or a public service career.
CONCLUSIONS
Although IHE experiences appear to have educational benefits, the quality and availability of these electives vary by institution. Barriers to ensuring that students attain a safe and rich experience include the lack of consistent categorical funding, safety concerns when traveling, and limited faculty experience and resources to support and guide students during their rotations abroad. |
A controlled trial of sputum induction and routine collection methods for TB diagnosis in a South African community | The diagnostic yield of pulmonary tuberculosis (TB) by sputum induction (SI) at the first point of contact with health services, conducted in all patients with suspected TB regardless of the ability to expectorate spontaneously, has not been evaluated. We compared the diagnostic yield of SI to routine sputum collection in a South African community setting. Ambulatory patients with suspected TB provided a ‘spot’ expectorated sputum sample, an SI sample by hypertonic (5 %) saline nebulization, and early morning expectorated sputum sample. The diagnostic yield of sputum smear microscopy and liquid culture (denominator all subjects with any positive Mycobacterium tuberculosis culture), and time-to-positivity of culture were compared between SI and expectorated samples. A total of 555 subjects completed the SI procedure, of whom 132 (24 %) were human immunodeficiency virus (HIV)-infected. One hundred and twenty-nine samples (129, 23 %) were M. tuberculosis culture-positive. The time-to-positivity of Mycobacteria Growth Indicator Tube (MGIT) culture was shorter for SI (median difference 2 days, p = 0.63) and for early morning expectorated sputum (median difference 2 days, p = 0.02) compared to spot expectorated sputum. However, there was no difference in the culture-positive diagnostic yield between SI and spot expectorated sputum [difference +0.7 %; confidence interval (CI) −7.0 to +8.5 %, p = 0.82] or SI and early morning expectorated sputum (difference +4.7 %; CI −3.2 to +12.5 %, p = 0.20) for all subjects or for HIV-infected subjects. SI reduces the time to positive M. tuberculosis culture, but does not increase the rate of positive culture compared to routine expectorated sputum collection. SI cannot be recommended as the routine collection method at first contact among ambulatory patients with suspected TB in high-burden communities. |
Sketch it, make it: sketching precise drawings for laser cutting | Sketch It, Make It (SIMI) is a modeling tool that enables non-experts to design items for fabrication with laser cutters. SIMI recognizes rough, freehand input as a user iteratively edits a structured vector drawing. The tool combines the strengths of sketch-based interaction with the power of constraint-based modeling. Several interaction techniques are combined to present a coherent system that makes it easier to make precise designs for laser cutters. |
Experiences with group projects in software engineering | Computer science graduates are often accused of having no experience of the real-world problems of large software systems. In general, the practical work is said to be too small scale, and too individually oriented. The paper describes the introduction of a major group project into the degree syllabus at Heriot-Watt University. The project is designed to give the students experience of the problems involved in writing multi-author software and meeting strict deadlines. The choice of topics, supervision and management aspects are all covered. The problems of assessment are described. |
Unsupervised prototype learning in an associative-memory network | Unsupervised learning in a generalized Hopfield associative-memory network is investigated in this work. First, we prove that the (generalized) Hopfield model is equivalent to a semi-restricted Boltzmann machine with a layer of visible neurons and another layer of hidden binary neurons, so it could serve as the building block for a multilayered deep-learning system. We then demonstrate that the Hopfield network can learn to form a faithful internal representation of the observed samples, with the learned memory patterns being prototypes of the input data. Furthermore, we propose a spectral method to extract a small set of concepts (idealized prototypes) as the most concise summary or abstraction of the empirical data. |
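For readers unfamiliar with the building block involved, the sketch below implements a classical Hopfield associative memory with Hebbian storage and asynchronous recall. It illustrates only the memory substrate, not the paper's unsupervised prototype learning or its spectral method for concept extraction; the stored patterns are arbitrary examples.

    # Minimal classical Hopfield memory: Hebbian storage and asynchronous recall.
    # Illustrates the associative-memory substrate only, not the paper's
    # unsupervised prototype learning or spectral concept extraction.
    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n      # Hebbian weight matrix
    np.fill_diagonal(W, 0)

    state = np.array([1, -1, 1, -1, 1, -1, 1, 1])   # noisy cue for pattern 0
    rng = np.random.default_rng(0)
    for _ in range(50):                              # asynchronous updates
        i = rng.integers(n)
        state[i] = 1 if W[i] @ state >= 0 else -1
    print("recalled:", state)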
Genetic and clinical risk prediction model for postoperative atrial fibrillation. | BACKGROUND
Postoperative atrial fibrillation (PoAF) is common after coronary artery bypass grafting. We previously showed that atrial fibrillation susceptibility single nucleotide polymorphisms (SNPs) at the chromosome 4q25 locus are associated with PoAF. Here, we tested the hypothesis that a combined clinical and genetic model incorporating atrial fibrillation risk SNPs would be superior to a clinical-only model.
METHODS AND RESULTS
We developed and externally validated clinical and clinical/genetic risk models for PoAF. The discovery and validation cohorts included 556 and 1164 patients, respectively. Clinical variables previously associated with PoAF and 13 SNPs at loci associated with atrial fibrillation in genome-wide association studies were considered. PoAF occurred in 30% and 29% of patients in the discovery and validation cohorts, respectively. In the discovery cohort, a logistic regression model with clinical factors had good discrimination, with an area under the receiver operating characteristic curve of 0.76. The addition of 10 SNPs to the clinical model did not improve discrimination (area under the receiver operating characteristic curve, 0.78; P=0.14 for the difference between the 2 models). In the validation cohort, the clinical model had good discrimination (area under the receiver operating characteristic curve, 0.69) and the addition of genetic variables resulted in a marginal improvement in discrimination (area under the receiver operating characteristic curve, 0.72; P<0.0001).
CONCLUSIONS
We developed and validated a model for the prediction of PoAF containing common clinical variables. Addition of atrial fibrillation susceptibility SNPs did not improve model performance. Tools to accurately predict PoAF are needed to risk stratify patients undergoing coronary artery bypass grafting and identify candidates for prophylactic therapies. |
Robust Discriminative Localization Maps | Activation maps obtained from CNN filter responses have been used to visualize and improve the performance of deep learning models. However, because CNNs are susceptible to adversarial attacks, so are the activation maps. While recovering the predictions of the classifier is a difficult task that often requires complex transformations, we show that recovering activation maps is trivial and does not require any changes to either the classifier or the input image. Code: github.com/iamaaditya/robust-activation-maps |
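As background for what an activation map is, the following PyTorch sketch computes a class activation map (CAM) from a toy CNN's filter responses: global average pooling feeds a linear classifier, and each feature map is weighted by the predicted class's classifier weight. This is generic CAM machinery with invented dimensions, not the paper's robustness analysis or its adversarial setting.

    # Minimal class-activation-map (CAM) sketch with a toy CNN, illustrating
    # the kind of activation map the paper studies; the paper's robustness
    # analysis and adversarial setting are not reproduced.
    import torch
    import torch.nn as nn

    conv = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
    fc = nn.Linear(16, 10)                   # classifier over pooled features

    x = torch.randn(1, 3, 32, 32)            # stand-in input image
    feats = conv(x)                          # (1, 16, 32, 32) filter responses
    logits = fc(feats.mean(dim=(2, 3)))      # global average pooling + linear
    cls = logits.argmax(dim=1).item()

    # CAM: weight each feature map by the class's classifier weight, then sum.
    cam = torch.relu((fc.weight[cls].view(1, -1, 1, 1) * feats).sum(dim=1))
    print("activation map shape:", cam.shape, "predicted class:", cls)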
Axiomatising ST-Bisimulation Equivalence | A simple ST operational semantics for a process algebra is provided, by defining a set of operational rules in Plotkin's style. This algebra comprises TCSP parallel composition, ACP sequential composition and a refinement operator, which is used for replacing an action with an entire process, thus permitting hierarchical specification of systems. We prove that ST-bisimulation equivalence is a congruence, resorting to standard techniques on rule formats. Moreover, we provide a set of axioms that is sound and complete with respect to ST-bisimulation. The intriguing case of the forgetful refinement (i.e. when an action is refined into the properly terminated process) is dealt with in a new, improved manner. |
Intrusion Detection Model Based On Particle Swarm Optimization and Support Vector Machine | Advances in information and communication technologies force us to keep most information electronically; consequently, the security of information has become a fundamental issue. Traditional intrusion detection systems look for unusual or suspicious activity, such as patterns of network traffic that are likely indicators of unauthorized activity. However, normal operation often produces traffic that matches a likely "attack signature", resulting in false alarms. One main drawback is the inability to detect new attacks that do not have known signatures. In this paper, particle swarm optimization (PSO) is used to implement feature selection, and support vector machines (SVMs) with the one-versus-rest method serve as the fitness function of PSO for classification problems from the literature. Experimental results show that our method can recognize not only known attacks but also suspicious activity that may be the result of a new, unknown attack. Our method simplifies features effectively and obtains a higher classification accuracy than other methods. |
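A minimal sketch of the PSO-plus-SVM idea follows, under stated assumptions: continuous particle positions are thresholded at 0.5 to obtain binary feature masks, and cross-validated SVM accuracy serves as the fitness, on synthetic data rather than an intrusion-detection dataset. The inertia and acceleration constants are conventional choices, not taken from the paper.

    # Hedged sketch of binary-PSO feature selection with an SVM fitness,
    # on synthetic data; the paper's intrusion-detection features and
    # dataset are not reproduced.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=20,
                               n_informative=6, random_state=0)
    rng = np.random.default_rng(0)
    n_p, n_f = 10, X.shape[1]
    pos = rng.random((n_p, n_f))             # particle positions in [0, 1]
    vel = np.zeros((n_p, n_f))

    def fitness(mask):
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p > 0.5) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(15):
        r1, r2 = rng.random((2, n_p, n_f))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        fit = np.array([fitness(p > 0.5) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    print("selected features:", np.flatnonzero(gbest > 0.5))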
Space-Time Alignment for Channel Estimation in Millimeter Wave Communication with Beam Sweeping | In millimeter wave (mmWave) communication systems with hybrid multiple-input multiple-output (MIMO) processors, it is often necessary to employ analog beam sweeping during pilot signal transmission/reception to increase the signal-to-noise ratio (SNR). In this paper, we develop a process, called space-time (ST) alignment, for recovering pilot sequences for mmWave channel estimation from the signal received during beam sweeping. The spatial alignment removes the effect of beam sweeping on pilot arrival times (PATs). The process of locating the temporal windows on the received signal to obtain pilot-containing sequences for channel estimation is called the temporal alignment. Simulation results demonstrate that mmWave channels can be estimated successfully using the sequences obtained via ST alignment. |
Formulation Development and Evaluation of Chewable Tablet of Albendazole by Different Techniques | Albendazole is a benzimidazole derivative with broad-spectrum anthelmintic activity and excellent tolerability. It is rapidly absorbed orally and metabolized to its sulfoxide and sulfone, which may be responsible for its anthelmintic action. Single-dose administration of albendazole has produced cure rates in ascariasis, hookworm infection and enterobiasis that are comparable to three-day treatment with mebendazole. Albendazole chewable tablets (400 mg) were prepared by three methods, viz. non-aqueous granulation, aqueous granulation and direct compression, named NAG, AG and DC, respectively. Tablets prepared by these three methods were evaluated on different parameters such as average weight, hardness, Carr's index, tapped density, friability, disintegration, content uniformity and in vitro dissolution. All the parameters were found to be within the specifications. The study of the dissolution profiles revealed that product DC had a faster dissolution rate when compared with the remaining batches and the marketed product. Assay values were within the limits of 90% to 110%. |
Floor Layout Planning Using Artificial Intelligence Technique | In the era of e-commerce, customers buying furniture online feel the need for a visual representation of the arrangement of their furniture. Even when doing the interiors of a house, it is difficult to rely on assumptions about the best possible layouts, and professional help can be quite expensive. In this project, we make use of a Genetic Algorithm (GA), an Artificial Intelligence technique, to display various optimal arrangements of furniture. The basic idea behind using a GA is to develop an evolutionary design model. This is done by generating a chromosome for each possible solution and then performing crossover between chromosomes in each generation until an optimum fitness value is reached. The chromosome representation may also be modified for better results. The proposed system generates different layout designs for the furniture, taking into consideration the structure of a master bedroom. |
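The evolutionary design model described above can be sketched as follows, with the assumptions flagged loudly: a chromosome is a list of (x, y) centers for fixed-size rectangular furniture items in a square room, and fitness penalizes wall violations and pairwise overlap area. The project's actual chromosome encoding and bedroom constraints may differ.

    # Minimal GA sketch for furniture layout, under simplifying assumptions:
    # a chromosome is a list of (x, y) centers for fixed-size rectangles in
    # a 10x10 room; fitness penalizes overlaps and wall violations.
    import random

    ROOM, ITEMS = 10.0, [(2.0, 1.0), (1.0, 1.0), (3.0, 2.0)]  # (width, height)

    def random_chromosome():
        return [(random.uniform(0, ROOM), random.uniform(0, ROOM)) for _ in ITEMS]

    def fitness(ch):
        penalty = 0.0
        boxes = [(x - w/2, y - h/2, x + w/2, y + h/2)
                 for (x, y), (w, h) in zip(ch, ITEMS)]
        for x0, y0, x1, y1 in boxes:          # wall violations
            penalty += max(0, -x0) + max(0, -y0) \
                     + max(0, x1 - ROOM) + max(0, y1 - ROOM)
        for i in range(len(boxes)):           # pairwise overlap area
            for j in range(i + 1, len(boxes)):
                ax0, ay0, ax1, ay1 = boxes[i]
                bx0, by0, bx1, by1 = boxes[j]
                penalty += max(0, min(ax1, bx1) - max(ax0, bx0)) * \
                           max(0, min(ay1, by1) - max(ay0, by0))
        return -penalty                       # 0 is best (feasible layout)

    pop = [random_chromosome() for _ in range(40)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        parents, children = pop[:20], []
        for _ in range(20):                   # single-point crossover
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(ITEMS))
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:         # mutation: jitter one item
                k = random.randrange(len(ITEMS))
                child[k] = (child[k][0] + random.gauss(0, 0.5),
                            child[k][1] + random.gauss(0, 0.5))
            children.append(child)
        pop = parents + children
    print("best layout:", pop[0], "fitness:", fitness(pop[0]))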
I/O efficient Core Graph Decomposition at web scale | Core decomposition is a fundamental graph problem with a large number of applications. Most existing approaches for core decomposition assume that the graph is kept in the memory of a machine. Nevertheless, many real-world graphs are big and may not fit in memory. In the literature, there is only one work on I/O-efficient core decomposition that avoids loading the whole graph into memory. However, that approach is not scalable to big graphs because it cannot bound the memory size and may load most of the graph into memory. In addition, it can hardly handle graph updates. In this paper, we study I/O-efficient core decomposition following a semi-external model, which only allows node information to be loaded in memory. This model works well for many web-scale graphs. We propose a semi-external algorithm and two optimized algorithms for I/O-efficient core decomposition using very simple structures and a simple data access model. To handle dynamic graph updates, we show that our algorithm can be naturally extended to handle edge deletion. We also propose an I/O-efficient core maintenance algorithm to handle edge insertion, and an improved algorithm that further reduces I/O and CPU cost by investigating some new graph properties. We conduct extensive experiments on 12 real large graphs. Our optimized algorithms significantly outperform the existing I/O-efficient algorithm in terms of both processing time and memory consumption. On many memory-resident graphs, our algorithms for both core decomposition and maintenance can even outperform the in-memory algorithm due to the simple structures and data access model used. Our algorithms scale to web-scale graphs: as an example, we are the first to handle a web graph with 978.5 million nodes and 42.6 billion edges using less than 4.2 GB of memory. |
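For orientation, the sketch below shows the standard in-memory peeling algorithm that core decomposition builds on: repeatedly remove a minimum-degree node, and the running degree threshold at removal is that node's coreness. The paper's semi-external, I/O-efficient algorithms, which avoid holding the edge set in memory, go well beyond this baseline; the example graph is a toy.

    # Minimal in-memory peeling baseline for core decomposition: repeatedly
    # remove minimum-degree nodes; the removal order yields each node's
    # coreness. The paper's semi-external algorithms avoid keeping edges
    # in memory.
    from collections import defaultdict
    import heapq

    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)

    deg = {u: len(adj[u]) for u in adj}
    heap = [(d, u) for u, d in deg.items()]
    heapq.heapify(heap)
    core, removed, k = {}, set(), 0
    while heap:
        d, u = heapq.heappop(heap)
        if u in removed or d != deg[u]:
            continue                  # stale heap entry
        k = max(k, d)
        core[u] = k
        removed.add(u)
        for v in adj[u]:
            if v not in removed:
                deg[v] -= 1
                heapq.heappush(heap, (deg[v], v))
    print(core)   # node -> coreness; the triangle 0-1-2 forms the 2-core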
Central arteriovenous anastomosis for the treatment of patients with uncontrolled hypertension (the ROX CONTROL HTN study): a randomised controlled trial | BACKGROUND
Hypertension contributes to cardiovascular morbidity and mortality. We assessed the safety and efficacy of a central iliac arteriovenous anastomosis to alter the mechanical arterial properties and reduce blood pressure in patients with uncontrolled hypertension.
METHODS
We enrolled patients in this open-label, multicentre, prospective, randomised, controlled trial between October, 2012, and April, 2014. Eligible patients had baseline office systolic blood pressure of 140 mm Hg or higher and average daytime ambulatory blood pressure of 135 mm Hg or higher systolic and 85 mm Hg or higher diastolic despite antihypertensive treatment. Patients were randomly allocated in a 1:1 ratio to undergo implantation of an arteriovenous coupler device plus current pharmaceutical treatment or to maintain current treatment alone (control). The primary endpoint was mean change from baseline in office and 24 h ambulatory systolic blood pressure at 6 months. Analysis was by modified intention to treat (all patients remaining in follow-up at 6 months). This trial is registered with ClinicalTrials.gov, number NCT01642498.
FINDINGS
83 (43%) of 195 patients screened were assigned arteriovenous coupler therapy (n=44) or normal care (n=39). Mean office systolic blood pressure reduced by 26·9 (SD 23·9) mm Hg in the arteriovenous coupler group (p<0·0001) and by 3·7 (21·2) mm Hg in the control group (p=0·31). Mean systolic 24 h ambulatory blood pressure reduced by 13·5 (18·8) mm Hg (p<0·0001) in arteriovenous coupler recipients and by 0·5 (15·8) mm Hg (p=0·86) in controls. Implantation of the arteriovenous coupler was associated with late ipsilateral venous stenosis in 12 (29%) of 42 patients and was treatable with venoplasty or stenting.
INTERPRETATION
Arteriovenous anastomosis was associated with significantly reduced blood pressure and hypertensive complications. This approach might be a useful adjunctive therapy for patients with uncontrolled hypertension.
FUNDING
ROX Medical. |
Anionic surfactant ionic liquids with 1-butyl-3-methyl-imidazolium cations: characterization and application. | For the first time a series of anionic surfactant ionic liquids (SAILs) has been synthesized based on organic surfactant anions and 1-butyl-3-methyl-imidazolium cations. These compounds are more environmentally friendly and chemically tunable as compared to other common ionic liquids. A detailed investigation of physicochemical properties highlights potential applications from battery design to reaction control, and studies into aqueous aggregation behavior, as well as structuring in pure ILs, point to possible uses in electrochemistry. |
Markov Logic Networks with Numerical Constraints | Markov logic networks (MLNs) have proven to be useful tools for reasoning about uncertainty in complex knowledge bases. In this paper, we extend MLNs with numerical constraints and present an efficient implementation in terms of a cutting plane method. This extension is useful for reasoning over uncertain temporal data. To show the applicability of this extension, we enrich log-linear description logics (DLs) with concrete domains (datatypes), thereby allowing reasoning over weighted DLs with datatypes. Moreover, we use the resulting formalism to reason about temporal assertions in DBpedia, thus illustrating its practical use. |
Mutation rate in human microsatellites: influence of the structure and length of the tandem repeat. | In 10,844 parent/child allelic transfers at nine short-tandem-repeat (STR) loci, 23 isolated STR mismatches were observed. The parenthood in each of these cases was highly validated (probability >99.97%). The event was always repeat related, owing to either a single-step mutation (n=22) or a double-step mutation (n=1). The mutation rate was between 0 and 7 x 10(-3) per locus per gamete per generation. No mutations were observed in three of the nine loci. Mutation events in the male germ line were five to six times more frequent than in the female germ line. A positive exponential correlation between the geometric mean of the number of uninterrupted repeats and the mutation rate was observed. Our data demonstrate that mutation rates of different loci can differ by several orders of magnitude and that different alleles at one locus exhibit different mutation rates. |
Scalable Recognition with a Vocabulary Tree | A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality are exhibited in a live demonstration that recognizes CD covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images. |
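The hierarchical quantization at the heart of the scheme can be sketched briefly: k-means is applied recursively so that each root-to-leaf path defines a visual word. The sketch below uses random vectors as stand-ins for local-region descriptors (e.g., 128-D SIFT) and omits the inverted files and scoring of the full system; the branch factor and depth are arbitrary.

    # Minimal sketch of a vocabulary tree: hierarchical k-means over
    # descriptor vectors, where each leaf is a visual word. Random vectors
    # stand in for real local-region descriptors.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_tree(desc, branch=3, depth=2):
        if depth == 0 or len(desc) < branch:
            return None                              # leaf: a visual word
        km = KMeans(n_clusters=branch, n_init=4, random_state=0).fit(desc)
        children = [build_tree(desc[km.labels_ == i], branch, depth - 1)
                    for i in range(branch)]
        return {"km": km, "children": children}

    def quantize(node, d, path=()):
        if node is None:
            return path                              # word id = branch path
        i = int(node["km"].predict(d.reshape(1, -1))[0])
        return quantize(node["children"][i], d, path + (i,))

    rng = np.random.default_rng(0)
    descriptors = rng.normal(size=(600, 16))         # stand-in descriptors
    tree = build_tree(descriptors)
    print("visual word:", quantize(tree, descriptors[0]))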
Predicting and analyzing secondary education placement-test scores: A data mining approach | Understanding the factors that lead to success (or failure) of students at placement tests is an interesting and challenging problem. Since centralized placement tests and future academic achievements are considered to be related concepts, analysis of the success factors behind placement tests may help understand and potentially improve academic achievement. In this study, using a large and feature-rich dataset from the Secondary Education Transition System in Turkey, we developed models to predict secondary education placement-test results, and using sensitivity analysis on those prediction models we identified the most important predictors. The results showed that the C5 decision tree algorithm is the best predictor with 95% accuracy on the hold-out sample, followed by support vector machines (with an accuracy of 91%) and artificial neural networks (with an accuracy of 89%). Logistic regression models came out to be the least accurate of the four, with an overall accuracy of 82%. The sensitivity analysis revealed that previous test experience, whether a student has a scholarship, the student's number of siblings, and previous years' grade point average are among the most important predictors of placement-test scores. |
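To make the four-way comparison concrete, the sketch below cross-validates the same model families on synthetic data. sklearn has no C5 implementation, so a CART decision tree stands in for it, and the study's placement-test dataset is not reproduced, so the printed accuracies will not match the reported ones.

    # Hedged sketch of the four-way model comparison on synthetic data.
    # A CART decision tree stands in for C5; the study's dataset is not public.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=25, random_state=0)
    models = {
        "decision tree (C5 stand-in)": DecisionTreeClassifier(random_state=0),
        "SVM": make_pipeline(StandardScaler(), SVC()),
        "neural network": make_pipeline(StandardScaler(),
                                        MLPClassifier(max_iter=500,
                                                      random_state=0)),
        "logistic regression": make_pipeline(StandardScaler(),
                                             LogisticRegression()),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")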
One Century of Brain Mapping Using Brodmann Areas | 100 years after their publication, Brodmann's maps of the cerebral cortex are universally used to locate neuropsychological functions. On the occasion of this jubilee, the life and work of Korbinian Brodmann are reported. The core functions of each single Brodmann area are described and Brodmann's views on neuropsychological processes are depicted. |
U.S. Timber Production, Trade, Consumption, and Price Statistics 1965-1999 | This report is part of an annual series that presents current and historical information on the production, trade, consumption, and prices of timber products in the United States. The report focuses on national statistics, but includes some data for individual States and regions and for Canada. The data were collected from industry trade associations and government agencies. They are intended for use by forest land managers, forest industries, trade associations, forestry schools, renewable resource organizations, libraries, individuals in the major timber-producing and -consuming countries of the world, and the general public. A major use of the data presented here is tracking technological change over time. One of the major technology shifts occurring in the wood-using industry is the substitution of oriented strandboard (OSB) for plywood in the structural panel sector, as well as a shift in plywood production from the western to the southern United States. Some data show these shifts. United States production of structural panels totaled 29.4 billion ft2 in 1999. Production of OSB increased from less than 3 billion ft2 in 1985 to 11.6 billion ft2 in 1999. Plywood production was 20.1 billion ft2 in 1985 before falling to 17.8 billion ft2 in 1999. The decline in plywood production reflects the continued increase in the OSB share of the traditional plywood market. |
Oil price risk and the Australian stock market | Abstract The primary aim of this paper is to investigate the sensitivity of Australian industry equity returns to an oil price factor over the period 1983–1996. The paper employs an augmented market model to establish the sensitivity. The key findings are as follows. First, a degree of pervasiveness of an oil price factor, beyond the influence of the market, is detected across some Australian industries. Second, we propose and find significant positive oil price sensitivity in the Oil and Gas and Diversified Resources industries. Similarly, we propose and find significant negative oil price sensitivity in the Paper and Packaging, and Transport industries. Generally, we find that long-term effects persist, although we hypothesize that some firms have been able to pass on oil price changes to customers or hedge the risk. The results have implications for management in these industries and policy makers and enhance our understanding of the “Dutch disease.” |
Silver Nanoparticles Synthesis of Mentha arvensis Extracts and Evaluation of Antioxidant Properties | Silver nanoparticle synthesis from the selected plant extract was confirmed by ultraviolet-visible and Fourier transform infrared (FTIR) spectroscopy. The Mentha arvensis leaf extract-mediated nanoparticles showed an absorbance peak in the 340 nm region in the spectral analysis. FTIR analysis of the silver nanoparticles showed an absorption peak of reduced silver at 1650.95 cm⁻¹. The total antioxidant assay of the silver nanoparticles synthesized from AgNO3 showed a maximum activity of 40% at 600 μg/ml. The 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical assay of the Mentha arvensis-mediated silver nanoparticles showed a maximum activity of 25% at 600 μg/ml. The hydrogen peroxide scavenging assay of the Mentha arvensis-mediated silver nanoparticles showed a maximum activity of 10% at 600 μg/ml. The reducing power of the Mentha arvensis silver nanoparticles exhibited a higher activity of 19% at 600 μg/ml. The selected plant exhibits good antioxidant properties. |
Ernst Ferdinand Sauerbruch: Rise and Fall of the Pioneer of Thoracic Surgery | Ferdinand Sauerbruch (1875–1951) was a pioneer of thoracic and cardiac surgery and is undoubtedly one of the twentieth century's most outstanding surgeons. Before 1904 operations on the thorax met with fatal complications due to pneumothorax. Sauerbruch developed a pressure-differential chamber that maintained normal respiration and enabled safe operations to be undertaken on the thorax. Together with von Mikulicz, he initiated intrathoracic operations and later developed various surgical procedures on the mediastinum, lungs, pericardium, heart, and esophagus. The simple yet effective techniques of positive-pressure ventilation replaced the expensive, cumbersome negative-pressure chamber. Sauerbruch's latter years were marred by dementia that adversely affected his personality, intellect, and capacity as a surgeon. The unjustifiable toll of increasing patient morbidity and mortality forced authorities to dismiss him in 1949. He died at the age of 76 in Berlin. After almost a century since the advent of the first safe thoracic surgery, the advances in technique and technology have been enormous. A great deal is owed to the inspiration and contributions of Ferdinand Sauerbruch. |
Effects of Meridian Acupressure Massage on Body Composition, Edema, Stress, and Fatigue in Postpartum Women. | OBJECTIVES
This study aims to investigate the effects of meridian acupressure massage on body composition, edema, stress, and fatigue in postpartum women.
DESIGN
A quasi-experimental design with a nonequivalent control group was utilized.
SETTINGS/LOCATION
The Postpartum Care Center of Women's Hospital in Gwangju City, Republic of Korea.
SUBJECTS
The study group consisted of 39 postpartum women, 19 in the experimental group and 20 in the control group, recruited from the postpartum care center of Women's Hospital in Gwangju city, South Korea.
INTERVENTIONS
The experimental group was provided with meridian acupressure massage for 90 min daily over 5 days as an experimental therapy.
OUTCOME MEASURES
Body composition (body weight, BMI, total body water, ECW ratio, LBM, and body fat); edema (subjective edema, average girth of the upper limbs, and average girth of the lower limbs); stress (psychological stress and physical stress); and fatigue.
RESULTS
The experimental group demonstrated a significantly larger decrease compared with the control group in measures of body composition, edema, total subjective stress, psychological stress, and subjective fatigue.
CONCLUSIONS
Meridian acupressure massage can hasten the return to original body composition after childbirth. |
Task Parameterization Using Continuous Constraints Extracted From Human Demonstrations | In this paper, we propose an approach for learning task specifications automatically, by observing human demonstrations. Using this approach allows a robot to combine representations of individual actions to achieve a high-level goal. We hypothesize that task specifications consist of variables that present a pattern of change that is invariant across demonstrations. We identify these specifications at different stages of task completion. Changes in task constraints allow us to identify transitions in the task description and to segment them into subtasks. We extract the following task-space constraints: 1) the reference frame in which to express the task variables; 2) the variable of interest at each time step, position, or force at the end effector; and 3) a factor that can modulate the contribution of force and position in a hybrid impedance controller. The approach was validated on a seven-degree-of-freedom Kuka arm, performing two different tasks: grating vegetables and extracting a battery from a charging stand. |
Multi-class SVM classifier for english handwritten digit recognition using manual class segmentation | A new method for the recognition of isolated handwritten English digits is presented here. The method is based on Support Vector Machines (SVMs). The mean and standard deviation of each digit are used as the features. Using these features, multiple SVM classifiers are trained to separate different classes of digits. Support vector machines are based on the concept of decision planes that define decision boundaries; a decision plane separates sets of digits having different class memberships. The approach works in four steps: 1) preprocessing, 2) feature extraction, 3) classification, and 4) detection. A database of 100 different representations of each digit is constructed for training. The digits are first manually segmented into 5 classes to minimize the time required to obtain the hyperplane. The input is then checked again against the two classes by a 2-class SVM classifier. Experiments show that the proposed features provide a very good recognition result using Support Vector Machines, at a recognition rate of 97%, compared with 91.25% obtained by an MLP neural network classifier using the same features and test set. |
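A minimal sketch of the feature-and-classifier combination follows, with assumptions flagged: the abstract's mean and standard deviation are computed over all pixels of each digit image, sklearn's bundled digits dataset replaces the authors' 100-sample database, and the manual 5-class pre-segmentation stage is omitted. Two global statistics alone give only a weak baseline, well below the reported 97%.

    # Hedged sketch: one-vs-rest SVM digit recognition using the abstract's
    # mean/std pixel statistics as features, on sklearn's digits dataset.
    # The paper's manual pre-segmentation and its own database are omitted.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    digits = load_digits()
    feats = np.column_stack([digits.data.mean(axis=1), digits.data.std(axis=1)])
    Xtr, Xte, ytr, yte = train_test_split(feats, digits.target, random_state=0)
    clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(Xtr, ytr)
    print("accuracy:", clf.score(Xte, yte))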
A spectral LF model based approach to voice source parameterisation | This paper presents a new method of extracting LF model based parameters using a spectral model matching approach. Strategies are described for overcoming some of the known difficulties of this type of approach, in particular high frequency noise. The new method performed well compared to a typical time based method particularly in terms of robustness against distortions introduced by the recording system and in terms of the ability of parameters extracted in this manner to differentiate three discrete voice qualities. Results from this study are very promising for the new method and offer a way of extracting a set of non-redundant spectral parameters that may be very useful in both recognition and synthesis systems. |
Insider Threat Identification by Process Analysis | The insider threat is one of the most pernicious in computer security. Traditional approaches typically instrument systems with decoys or intrusion detection mechanisms to detect individuals who abuse their privileges (the quintessential "insider"). Such an attack requires that these agents have access to resources or data in order to corrupt or disclose them. In this work, we examine the application of process modeling and subsequent analyses to the insider problem. With process modeling, we first describe how a process works in formal terms. We then look at the agents who are carrying out particular tasks, perform different analyses to determine how the process can be compromised, and suggest countermeasures that can be incorporated into the process model to improve its resistance to insider attack. |
Characteristics of cross (bypass) coupling through higher/lower order modes and their applications in elliptic filter design | This paper presents a new set of results concerning the use of higher/lower order modes as a means to implement bypass or cross coupling for applications in elliptic filter design. It is shown that the signs of the coupling coefficients to produce a transmission zero (TZ) either below or above the passband are, in certain situations, reversed from the predictions of simpler existing models. In particular, the bypass coupling to higher/lower order modes must be significantly stronger than the coupling to the main resonance in order to generate TZs in the immediate vicinity of the passband. Planar (H-plane) singlets are used to illustrate the derived results. This study should provide very important guidelines in selecting the proper main and bypass couplings for sophisticated filtering structures. Example filters are designed, built, and measured to demonstrate the validity of the introduced theory. |
A deployable sampling strategy for data race detection | Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on the traditional happens-before relation and incur a large baseline overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider); however, they detect a race only when the race actually occurs, by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named Clock Races, for low-overhead sampling. The innovation of clock races is that detecting them does not rely on concrete locks and also avoids the heavy baseline overhead of tracking the happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware-based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on the Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at a 1% sampling rate, whereas Pacer and DataCollider incurred more than 25% and 96% overhead, respectively. Moreover, at the same sampling rate, CRSampler detected significantly more data races than Pacer and DataCollider. |
An empirical study of regression test application frequency | Regression testing is an expensive maintenance process used to revalidate modified software. Regression test selection (RTS) techniques try to lower the cost of regression testing by selecting and running a subset of the existing test cases. Many such techniques have been proposed and initial studies show that they can produce savings. We believe, however, that issues such as the frequency with which testing is done have a strong effect on the behavior of these techniques. Therefore, we conducted an experiment to assess the effects of test application frequency on the costs and benefits of regression test selection techniques. Our results expose essential tradeoffs that should be considered when using these techniques over a series of software releases. |
Exploring the use of speech input by blind people on mobile devices | Much recent work has explored the challenge of nonvisual text entry on mobile devices. While researchers have attempted to solve this problem with gestures, we explore a different modality: speech. We conducted a survey with 169 blind and sighted participants to investigate how often, what for, and why blind people used speech for input on their mobile devices. We found that blind people used speech more often and input longer messages than sighted people. We then conducted a study with 8 blind people to observe how they used speech input on an iPod compared with the on-screen keyboard with VoiceOver. We found that speech was nearly 5 times as fast as the keyboard. While participants were mostly satisfied with speech input, editing recognition errors was frustrating. Participants spent an average of 80.3% of their time editing. Finally, we propose challenges for future work, including more efficient eyes-free editing and better error detection methods for reviewing text. |
The role of gut microbiota in the gut-brain axis: current challenges and perspectives | The brain and the gastrointestinal (GI) tract are intimately connected, forming a bidirectional neurohumoral communication system. The communication between gut and brain, known as the gut-brain axis, is so well established that the functional status of the gut is always related to the condition of the brain. Research on the gut-brain axis has traditionally focused on how psychological status affects the function of the GI tract. However, recent evidence shows that gut microbiota communicates with the brain via the gut-brain axis to modulate brain development and behavioral phenotypes. These recent findings on the new role of gut microbiota in the gut-brain axis imply that gut microbiota could be associated with brain functions as well as neurological diseases via the gut-brain axis. To elucidate the role of gut microbiota in the gut-brain axis, precise identification of the composition of the microbes constituting gut microbiota is an essential step. However, such identification remains a major technological challenge owing to the massive number of intestinal microbes and the difficulty of culturing gut microbes. Current methods for identifying the microbes constituting gut microbiota depend on omics analysis methods using advanced high-tech equipment. Here, we review the association of gut microbiota with the gut-brain axis, including the pros and cons of current high-throughput methods for identifying the microbes constituting gut microbiota, to elucidate the role of gut microbiota in the gut-brain axis. |
Clinical outcomes of adverse cardiovascular events in patients with acute dapsone poisoning | OBJECTIVE
Adverse cardiovascular events (ACVEs) account for a large proportion of the morbidities and mortalities associated with drug overdose emergencies. However, there are no published reports regarding outcomes of ACVEs associated with acute dapsone poisoning. Here, the authors retrospectively analyzed ACVEs reported within 48 hours of treatment in patients with acute dapsone poisoning and assessed the significance of ACVEs as early predictors of mortality.
METHODS
Sixty-one consecutive cases of acute dapsone poisoning that were diagnosed and treated at a regional emergency center between 2006 and 2014 were included in the study. An ACVE was defined as myocardial injury, shock, ventricular dysrhythmia, cardiac arrest, or any combination of these occurring within the first 48 hours of treatment for acute dapsone poisoning.
RESULTS
Nineteen patients (31.1%) had evidence of myocardial injury (elevation of serum troponin-I level or electrocardiography signs of ischemia) after dapsone overdose, and there were a total of 19 ACVEs (31.1%), including one case of shock (1.6%). Fourteen patients (23.0%) died from pneumonia or multiple organ failure, and the incidence of ACVEs was significantly higher among non-survivors than among survivors (64.3% vs. 21.3%, P=0.006). ACVE was a significant predictor of mortality (odds ratio, 5.690; 95% confidence interval, 1.428 to 22.675; P=0.014).
CONCLUSION
The incidence of ACVE was significantly higher among patients who died after acute dapsone poisoning. ACVE is a significant predictor of mortality after dapsone overdose, and evidence of ACVE should be carefully sought in these patients. |
Towards a lightweight embedded virtualization architecture exploiting ARM TrustZone | Virtualization has been used as the de facto technology to allow multiple operating systems (virtual machines) to run on top of the same hardware platform. In the embedded systems domain, virtualization research has focused on the coexistence of real-time requirements with non-real-time characteristics. However, existing standard software-based virtualization solutions have been shown to negatively impact the overall system, especially in terms of performance, memory footprint, and determinism. This work-in-progress paper presents the implementation of an embedded virtualization architecture on commodity hardware. ARM TrustZone technology is exploited to implement a lightweight virtualization solution with low overhead and high determinism, corroborated by promising preliminary results. The research roadmap is also outlined and discussed.
Understanding the Predictive Power of Social Media | Purpose: The purpose of this article is to consolidate existing knowledge and provide a deeper understanding of the use of Social Media (SM) data for predictions in various areas, such as disease outbreaks, product sales, stock market volatility, and election outcomes. Design/methodology/approach: The scientific literature was systematically reviewed to identify relevant empirical studies. These studies were analyzed and synthesized in the form of a proposed conceptual framework, which was thereafter applied to further analyze this literature, hence gaining new insights into the field. Findings: The proposed framework reveals that all relevant studies can be decomposed into a small number of steps, and different approaches can be followed in each step. The application of the framework resulted in interesting findings. For example, most studies support SM predictive power; however, more than one-third of these studies infer predictive power without employing predictive analytics. In addition, the analysis suggests a clear need for more advanced sentiment analysis methods as well as methods for identifying search terms for the collection and filtering of raw SM data. Value: The proposed framework enables researchers to classify and evaluate existing studies, to design scientifically rigorous new studies, and to identify the field's weaknesses, hence proposing future research directions.
Four common anatomic variants that predispose to unfavorable rhinoplasty results: a study based on 150 consecutive secondary rhinoplasties. | A retrospective study was conducted of 150 consecutive secondary rhinoplasty patients operated on by the author before February of 1999, to test the hypothesis that four anatomic variants (low radix/low dorsum, narrow middle vault, inadequate tip projection, and alar cartilage malposition) strongly predispose to unfavorable rhinoplasty results. The incidences of each variant were compared with those in 50 consecutive primary rhinoplasty patients. Photographs before any surgery were available in 61 percent of the secondary patients; diagnosis in the remaining individuals was made from operative reports, physical diagnosis, or patient history. Low radix/low dorsum was present in 93 percent of the secondary patients and 32 percent of the primary patients; narrow middle vault was present in 87 percent of the secondary patients and 38 percent of the primary patients; inadequate tip projection was present in 80 percent of the secondary patients and 31 percent of the primary patients; and alar cartilage malposition was present in 42 percent of the secondary patients and 18 percent of the primary patients. In the 150-patient secondary group, the most common combination was the triad of low radix, narrow middle vault, and inadequate tip projection (40 percent of patients). The second largest group (27 percent) had shared all four anatomic points before their primary rhinoplasties. Seventy-eight percent of the secondary patients had three or all four anatomic variants in some combination; each secondary patient had at least one of the four traits; 99 percent had two or more. Seventy-eight percent of the primary patients had at least two variants, and 58 percent had three or more. Twenty-two percent of the primary patients had none of the variants and therefore would presumably not be predisposed to unfavorable results following traditional reduction rhinoplasty. This study supports the contention that four common anatomic variants, if unrecognized, are strongly associated with unfavorable results following primary rhinoplasty. It is important for all surgeons performing rhinoplasty to recognize these anatomic variants to avoid the unsatisfactory functional and aesthetic sequelae that they may produce by making their correction a deliberate part of each preoperative surgical plan. |
On the splitting method for VQ codebook generation | The well-known LBG algorithm uses a binary splitting method to generate an initial codebook, which is then iteratively improved by the GLA. In the present paper we study different variants of the splitting method and its application to the codebook generation problem, with and without the GLA. A new iterative splitting method is proposed that is applicable to codebook generation without the use of the GLA. Experiments show that the improved splitting method outperforms both the GLA and the other existing splitting-based algorithms. The best combination uses hyperplane partitioning of the clusters along the principal axis, as proposed in [1], integrated with a local repartitioning phase at each step of the algorithm.
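The following is a minimal sketch of the iterative splitting idea described above, assuming the highest-distortion cluster is split along its principal axis and followed by a local repartitioning step; the split offset and the stopping rule are illustrative simplifications, not the exact algorithm of the paper.

```python
import numpy as np

def split_codebook(data, codebook_size):
    """Grow a codebook by repeatedly splitting the worst cluster."""
    centroids = [data.mean(axis=0)]
    assignment = np.zeros(len(data), dtype=int)
    while len(centroids) < codebook_size:
        # Pick the cluster with the largest total squared-error distortion.
        distortion = [((data[assignment == k] - c) ** 2).sum()
                      for k, c in enumerate(centroids)]
        k = int(np.argmax(distortion))
        cluster = data[assignment == k]
        # Split along the principal axis (largest covariance eigenvector).
        eigvals, eigvecs = np.linalg.eigh(np.cov(cluster, rowvar=False))
        offset = eigvecs[:, -1] * np.sqrt(eigvals[-1])
        mean = cluster.mean(axis=0)
        centroids[k] = mean - offset
        centroids.append(mean + offset)
        # Local repartitioning: reassign only the split cluster's vectors.
        idx = np.flatnonzero(assignment == k)
        d_old = ((data[idx] - centroids[k]) ** 2).sum(axis=1)
        d_new = ((data[idx] - centroids[-1]) ** 2).sum(axis=1)
        assignment[idx] = np.where(d_new < d_old, len(centroids) - 1, k)
    return np.array(centroids)

codebook = split_codebook(np.random.rand(500, 2), 8)
print(codebook.shape)  # (8, 2)
```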
Change-point monitoring for the detection of DoS attacks | This paper presents a simple and robust mechanism, called change-point monitoring (CPM), to detect denial of service (DoS) attacks. The core of CPM is based on inherent network protocol behavior and is an instance of sequential change-point detection. To make the detection mechanism insensitive to individual sites and traffic patterns, a nonparametric cumulative sum (CUSUM) method is applied, making the detection mechanism robust, more generally applicable, and much easier to deploy. CPM does not require per-flow state information and introduces only a few variables to record protocol behaviors. The statelessness and low computation overhead of CPM make it immune to flooding attacks. As a case study, the efficacy of CPM is evaluated by detecting a SYN flooding attack - the most common DoS attack. The evaluation results show that CPM has short detection latency and high detection accuracy.
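A minimal sketch of the nonparametric CUSUM idea behind CPM, applied to per-interval SYN and FIN counts; the normalization, offset `a`, and threshold `h` are illustrative choices, not parameters from the paper.

```python
# Minimal sketch of nonparametric CUSUM change-point detection on SYN/FIN
# counts. The parameters a (drift offset) and h (alarm threshold) are
# illustrative tuning values, not the paper's.

def cusum_detect(syn_counts, fin_counts, a=0.3, h=2.0):
    """Yield the index of each interval in which the statistic crosses h."""
    y = 0.0
    for n, (syn, fin) in enumerate(zip(syn_counts, fin_counts)):
        # Under normal operation SYNs and FINs roughly balance, so the
        # normalized difference x hovers near zero.
        x = (syn - fin) / max(fin, 1)
        # Nonparametric CUSUM: accumulate only positive drift beyond a.
        y = max(0.0, y + x - a)
        if y > h:
            yield n
            y = 0.0  # reset after raising an alarm

# Hypothetical traffic: balanced at first, then a SYN flood from interval 5.
syns = [100, 105, 98, 102, 99, 400, 450, 500]
fins = [ 97, 103, 99, 100, 101, 100,  95,  90]
print(list(cusum_detect(syns, fins)))  # -> [5, 6, 7]
```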
Emotional responses to a romantic partner's imaginary rejection: the roles of attachment anxiety, covert narcissism, and self-evaluation. | These studies tested the associations between responses to an induced imaginary romantic rejection and individual differences on dimensions of attachment and covert narcissism. In Study 1 (N=125), we examined the associations between attachment dimensions and emotional responses to a vignette depicting a scenario of romantic rejection, as measured by self-reported negative mood states, expressions of anger, somatic symptoms, and self-evaluation. Higher scores on attachment anxiety, but not on attachment avoidance, were associated with stronger reactions to the induced rejection. Moreover, decreased self-evaluation scores (self-esteem and pride) were found to mediate these associations. In Study 2 (N=88), the relative contributions of covert narcissism and attachment anxiety to the emotional responses to romantic rejection were explored. Higher scores on covert narcissism were associated with stronger reactions to the induced rejection. Moreover, covert narcissism seemed to constitute a specific aspect of attachment anxiety. |
Speaker segmentation and clustering | This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker clustering, deterministic and probabilistic algorithms are examined. A comparative assessment of the reviewed algorithms is undertaken, the advantages and disadvantages of each algorithm are indicated, insight into the algorithms is offered, and deductions as well as recommendations are given. Rich transcription and movie analysis are candidate applications that benefit from combined speaker segmentation and clustering.
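As an illustration of the metric-based family mentioned above, the sketch below scores a candidate speaker change point with the Bayesian Information Criterion (BIC), a classic criterion in this family; the feature dimensionality, penalty weight, and synthetic data are illustrative assumptions, not drawn from the survey.

```python
import numpy as np

# Minimal sketch of BIC-based speaker change detection: compare modeling a
# window with one Gaussian versus two Gaussians split at a candidate point.

def delta_bic(features, split, lam=1.0):
    """Positive values favor a speaker change at `split`."""
    x, x1, x2 = features, features[:split], features[split:]
    n, d = x.shape

    def logdet(segment):
        return np.linalg.slogdet(np.cov(segment, rowvar=False))[1]

    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return (0.5 * n * logdet(x)
            - 0.5 * len(x1) * logdet(x1)
            - 0.5 * len(x2) * logdet(x2)
            - lam * penalty)

# Hypothetical features: two synthetic "speakers" with different statistics.
rng = np.random.default_rng(0)
speaker_a = rng.normal(0.0, 1.0, size=(200, 12))
speaker_b = rng.normal(3.0, 2.0, size=(200, 12))
print(delta_bic(np.vstack([speaker_a, speaker_b]), split=200) > 0)  # True
```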
Barriers to ICT Integration into Teachers' Classroom Practices: Lessons from a Case Study on Social Studies Teachers in Turkey | This study analyses the difficulties and obstacles faced by teachers of social studies education when using ICT-based teaching equipment and methods in their classes. Although ICT-based equipment and methods create important opportunities for the development of teaching, the literature shows that the integration of ICT into teachers' classroom practices is not at the desired level. The study uses case study methods to analyze the reasons underlying this situation. Eighteen teachers of social studies education participated in the study. Classroom observation and semi-structured interviews were used as data collection methods. According to the results of the study, the main barriers to the use of ICT-based methods and equipment in teachers' instructional practices are a lack of ICT equipment in classrooms, a lack of ICT-based teaching resources, the effect of traditional approaches on teachers' practices, inadequacies in in-service teacher training, and lack of time.
The Revised Cardiac Risk Index performs poorly in patients undergoing major vascular surgery: a prospective observational study. | The preoperative assessment of the likelihood of a postoperative cardiac event is complex. The Revised Cardiac Risk Index (RCRI) is a commonly used scoring system for the stratification of cardiac risk of patients undergoing major non-cardiac surgery. The RCRI scores patients according to six clinical categories: high-risk surgery (thoracic, abdominal and supra-inguinal vascular surgery); history of ischaemic heart disease (IHD); history of congestive heart failure; cerebrovascular disease; insulin-dependent diabetes; and renal failure. Since the publication of the original article in 1999, the RCRI has become a widely used stratification tool for cardiac risk. A recent meta-analysis has examined the predictive value of the RCRI across subsequent studies. It concluded that the RCRI performed moderately well at discriminating between low and high perioperative risk. However, the authors felt that it performed poorly in vascular cohorts, and that the studies included were of variable quality. This report highlights the need for further studies evaluating the RCRI.
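For concreteness, the RCRI tally itself is simple: one point per risk factor present. A minimal sketch follows, with the factor names mirroring the six categories listed in the abstract; the example patient is hypothetical.

```python
# Minimal sketch of the RCRI tally: one point per risk factor present.
# Factor names follow the six categories in the abstract; the patient
# record is hypothetical.

RCRI_FACTORS = [
    "high_risk_surgery",          # thoracic, abdominal, supra-inguinal vascular
    "ischaemic_heart_disease",
    "congestive_heart_failure",
    "cerebrovascular_disease",
    "insulin_dependent_diabetes",
    "renal_failure",
]

def rcri_score(patient):
    return sum(1 for factor in RCRI_FACTORS if patient.get(factor, False))

# A hypothetical vascular-surgery patient with two additional risk factors.
patient = {"high_risk_surgery": True,
           "ischaemic_heart_disease": True,
           "insulin_dependent_diabetes": True}
print(rcri_score(patient))  # 3
```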
Robust Discriminative Response Map Fitting with Constrained Local Models | We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameter updates. The experiments, conducted on the Multi-PIE, XM2VTS and LFPW databases, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.
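The sketch below conveys only the general idea of regressing shape-parameter updates from response-map features, iterated over a cascade of regressors; the feature extractor and the "pre-trained" regressors here are random placeholders, not the authors' method or released models.

```python
import numpy as np

# Minimal sketch of iterative fitting by regressing shape-parameter updates
# from response-map features. Everything below is a placeholder illustration.

def response_features(image, params, dim=200):
    # Stand-in for concatenated local response maps sampled around the
    # landmarks implied by the current shape parameters.
    return np.tanh(image[:dim] + params.sum())

def fit_shape(image, params, regressors):
    for W, b in regressors:            # one regressor per fitting iteration
        phi = response_features(image, params)
        params = params + W @ phi + b  # regressed parameter update
    return params

rng = np.random.default_rng(0)
image = rng.normal(size=1000)          # placeholder "image" data
regressors = [(rng.normal(scale=0.01, size=(10, 200)), np.zeros(10))
              for _ in range(3)]       # placeholder "pre-trained" cascade
print(fit_shape(image, np.zeros(10), regressors).shape)  # (10,)
```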
Improving Communication between Physicians and Their Patients through Mindfulness and Compassion-Based Strategies: A Narrative Review | Communication between physicians and patients is a key pillar of psychosocial support for enhancing the healing process of patients and for increasing their well-being and quality of life. Physicians and other health professionals might benefit from interventions that increase their self-care, awareness, compassion, and other-focused concern, and reduce the chances of distress and burnout. There is substantial evidence for the contribution of different management strategies to achieve these aims. The goal of this article is to review the potential effect of mindfulness and compassion-based strategies for the improvement of physician-patient interactions. The acquisition of the necessary skills by physicians requires continuous education. Future research will be useful for identifying more evidence on the cost-effectiveness of this type of intervention. |
A Novel Bayesian Similarity Measure for Recommender Systems | Collaborative filtering, a widely used user-centric recommendation technique, predicts an item's rating by aggregating its ratings from similar users. User similarity is usually calculated by cosine similarity or the Pearson correlation coefficient. However, both consider only the direction of rating vectors and suffer from a range of drawbacks. To solve these issues, we propose a novel Bayesian similarity measure based on the Dirichlet distribution, taking into consideration both the direction and length of rating vectors. Further, our principled method reduces correlation due to chance. Experimental results on six real-world data sets show that our method achieves superior accuracy.
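To make the criticized drawback concrete, the sketch below computes the two baseline measures named in the abstract and shows that both are insensitive to the overall rating level of a user; the proposed Dirichlet-based measure itself is not reproduced here, and the rating vectors are made up.

```python
import numpy as np

# Minimal sketch of the two baseline similarity measures the abstract
# criticizes. Both depend only on the direction of the rating vectors,
# so a user who rates everything twice as high looks identical.

def cosine_sim(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def pearson_sim(u, v):
    return cosine_sim(u - u.mean(), v - v.mean())

u = np.array([1.0, 2.0, 1.0])   # ratings of user u on co-rated items
v = np.array([2.0, 4.0, 2.0])   # user v rates everything twice as high
print(cosine_sim(u, v))          # 1.0 despite very different rating levels
print(pearson_sim(u, v))         # also 1.0
```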