title | abstract
---|---
Vision meets robotics: The KITTI dataset | We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways through rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide. |
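As an illustration of the raw data format described here, a minimal Python sketch for loading one Velodyne scan is shown below; it assumes the common packaging of raw KITTI scans as flat float32 binaries (x, y, z, reflectance per point), and the example path is hypothetical.

```python
import numpy as np

def load_velodyne_scan(bin_path):
    """Load one raw KITTI-style Velodyne scan.

    Assumes a flat float32 binary with four values per point:
    x, y, z (metres, sensor frame) and reflectance.
    """
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    return points[:, :3], points[:, 3]  # xyz coordinates, reflectance

# Hypothetical path into an unpacked raw recording:
# xyz, refl = load_velodyne_scan(
#     "2011_09_26_drive_0001_sync/velodyne_points/data/0000000000.bin")
```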
Low operating voltage and short settling time CMOS charge pump for MEMS applications | A 16-stage constant-threshold-voltage type charge pump was designed and fabricated in a standard CMOS 0.18-µm process. With an input clock voltage of less than 1.2 V, the maximum possible output voltage is 14.8 V, with a 65 µsec rise time. The charge pump employs metal-insulator-metal (MIM) capacitors in order to minimize the chip area (0.8 × 0.9 mm²). This circuit is intended for generating on-chip actuation voltages for micro-electro-mechanical devices and systems (MEMS). |
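For context, the textbook output expression for an N-stage Dickson-type charge pump is shown below; this is the standard formula, not taken from the paper, and the constant-threshold-voltage topology described above is specifically intended to suppress the threshold-loss terms.

```latex
V_{\mathrm{out}} \;\approx\; V_{\mathrm{in}}
  + N\left(\frac{C}{C+C_{s}}\,V_{\phi}
  - \frac{I_{\mathrm{out}}}{f\,(C+C_{s})}\right)
  - (N+1)\,V_{t}
```

Here C is the per-stage pumping capacitance, C_s the parasitic node capacitance, V_phi the clock amplitude, f the clock frequency, I_out the load current, and V_t the transfer-device threshold voltage.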
Asymmetric Information : Theory and Applications | This paper discusses asymmetric information theory as presented in economics literature. We present the theory’s implications for market behavior and the market institutions that are created to mitigate the adverse effects implied by the theory. Furthermore, we present some applications of the theory found in the literature and propose a new application of the theory. |
LexPageRank: Prestige in Multi-Document Text Summarization | Multidocument extractive summarization relies on the concept of sentence centrality to identify the most important sentences in a document. Centrality is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We are now considering an approach for computing sentence importance based on the concept of eigenvector centrality (prestige) that we call LexPageRank. In this model, a sentence connectivity matrix is constructed based on cosine similarity. If the cosine similarity between two sentences exceeds a particular predefined threshold, a corresponding edge is added to the connectivity matrix. We provide an evaluation of our method on DUC 2004 data. The results show that our approach outperforms centroid-based summarization and is quite successful compared to other summarization systems. |
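The abstract above fully specifies the core computation (thresholded cosine-similarity graph plus eigenvector centrality), so a compact sketch is given below. The threshold, damping factor and toy sentences are illustrative choices, not the paper's DUC 2004 settings.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexpagerank(sentences, threshold=0.1, d=0.85, iters=100):
    """Score sentences by eigenvector centrality (PageRank-style power
    iteration) on a thresholded cosine-similarity graph."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    # Row-stochastic transition matrix; sentences with no edges get uniform rows.
    n = len(sentences)
    row_sums = adj.sum(axis=1, keepdims=True)
    trans = np.where(row_sums > 0, adj / np.maximum(row_sums, 1e-12), 1.0 / n)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - d) / n + d * trans.T @ scores
    return scores

docs = ["The senate passed the bill on Tuesday.",
        "Lawmakers approved the legislation this week.",
        "Rain is expected over the weekend."]
print(lexpagerank(docs))  # higher score = more central sentence
```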
Multi-layer dependability: From microarchitecture to application level | We show in this paper that multi-layer dependability is an indispensable way to cope with the increasing amount of technology-induced dependability problems that threaten to impede further scaling. We introduce the definition of multi-layer dependability and present our design flow within this paradigm, which seamlessly integrates techniques from the circuit layer all the way up to the application layer, thereby accounting for ASIC-based architectures as well as for reconfigurable architectures. Finally, we give evidence that the paradigm of multi-layer dependability bears a large potential for significantly increasing dependability at reasonable effort. |
A brief review on upper extremity robotic exoskeleton systems | Robotic exoskeleton systems are one of the most active areas in recent robotics research. These systems have been developed significantly to be used for human power augmentation, robotic rehabilitation, human power assist, and haptic interaction in virtual reality. Unlike robots used in industry, robotic exoskeleton systems must be designed with special consideration since they directly interact with a human user. In the mechanical design of these systems, movable ranges, safety, wearing comfort, low inertia, and adaptability should especially be considered. Controllability, responsiveness, flexible and smooth motion generation, and safety should especially be considered in the controllers of exoskeleton systems. Furthermore, the controller should generate motions in accordance with the human motion intention. This paper briefly reviews upper extremity robotic exoskeleton systems. The review focuses on identifying the brief history, basic concepts, challenges, and future development of robotic exoskeleton systems. Furthermore, key technologies of upper extremity exoskeleton systems are reviewed by taking state-of-the-art robots as examples. |
Characterizing structural relationships in scenes using graph kernels | Modeling virtual environments is a time consuming and expensive task that is becoming increasingly popular for both professional and casual artists. The model density and complexity of the scenes representing these virtual environments is rising rapidly. This trend suggests that data-mining a 3D scene corpus could be a very powerful tool enabling more efficient scene design. In this paper, we show how to represent scenes as graphs that encode models and their semantic relationships. We then define a kernel between these relationship graphs that compares common virtual substructures in two graphs and captures the similarity between their corresponding scenes. We apply this framework to several scene modeling problems, such as finding similar scenes, relevance feedback, and context-based model search. We show that incorporating structural relationships allows our method to provide a more relevant set of results when compared against previous approaches to model context search. |
The Transport Layer Security (TLS) Protocol Version 1.2 | This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. This document specifies Version 1.2 of the Transport Layer Security (TLS) protocol. The TLS protocol provides communications security over the Internet. The protocol allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. |
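A minimal client-side sketch showing a connection pinned to TLS 1.2 with Python's standard ssl module is given below; it assumes Python 3.7+ (for the TLSVersion enum) and uses example.com purely as a placeholder server.

```python
import socket
import ssl

# Pin the handshake to TLS 1.2 on both ends of the allowed range.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.maximum_version = ssl.TLSVersion.TLSv1_2

host = "example.com"  # placeholder server supporting TLS 1.2
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # expected: "TLSv1.2"
        print(tls.cipher())   # negotiated cipher suite
```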
Deep RNNs Encode Soft Hierarchical Syntax | We present a set of experiments to demonstrate that deep recurrent neural networks (RNNs) learn internal representations that capture soft hierarchical notions of syntax from highly varied supervision. We consider four syntax tasks at different depths of the parse tree; for each word, we predict its part of speech as well as the first (parent), second (grandparent) and third level (great-grandparent) constituent labels that appear above it. These predictions are made from representations produced at different depths in networks that are pretrained with one of four objectives: dependency parsing, semantic role labeling, machine translation, or language modeling. In every case, we find a correspondence between network depth and syntactic depth, suggesting that a soft syntactic hierarchy emerges. This effect is robust across all conditions, indicating that the models encode significant amounts of syntax even in the absence of explicit syntactic training supervision. |
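The experimental setup is a classic probing design, sketched below with placeholder data: a linear classifier trained on frozen per-word hidden states to predict constituent labels. The random features and dummy labels only stand in for the pretrained network activations and treebank annotations the paper uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data standing in for per-word RNN hidden states and their
# parent-constituent labels (e.g. NP, VP, PP); a real probe would take
# these from a pretrained network and a treebank.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2000, 256))   # one 256-d vector per word
labels = rng.integers(0, 3, size=2000)         # 3 dummy constituent classes

X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))  # ~chance on random data
```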
KnowNER: Incremental Multilingual Knowledge in Named Entity Recognition | KnowNER is a multilingual Named Entity Recognition (NER) system that leverages different degrees of external knowledge. A novel modular framework divides the knowledge into four categories according to the depth of knowledge they convey. Each category consists of a set of features automatically generated from different information sources (such as a knowledge-base, a list of names or document-specific semantic annotations) and is used to train a conditional random field (CRF). Since those information sources are usually multilingual, KnowNER can be easily trained for a wide range of languages. In this paper, we show that the incorporation of deeper knowledge systematically boosts accuracy and compare KnowNER with state-of-the-art NER approaches across three languages (English, German and Spanish), performing among the best systems in all of them. |
Verapamil causes decreased diaphragm endurance but no decrease of nocturnal O2 saturation in patients with chronic obstructive pulmonary disease | Objective: In animal studies, it has been shown that verapamil reduces strength and endurance of the diaphragm and inhibits the beneficial effects of theophylline. We examined whether the use of verapamil in patients with severe chronic obstructive pulmonary disease (COPD) who use theophylline leads to a deterioration of diaphragmatic function resulting in a decrease of nocturnal O2 saturation. Methods: A double-blind, placebo-controlled crossover study was designed in eight stable severe COPD patients [forced expiratory volume in 1 s (FEV1) 0.9 ± 0.1 l] taking theophylline. The doses of theophylline ranged from 600 mg daily to 1200 mg daily (7.0 mg/kg daily to 16.9 mg/kg daily). Nocturnal recordings, maximal respiratory muscle strength and endurance tests, lung function, blood pressure, electrocardiogram and arterial blood gas analysis were performed after 6 days of verapamil and after placebo. Results: A significant decrease of the endurance time from 7.7 min to 6.4 min was found in the threshold loading test. However, the mean nocturnal saturation values did not change significantly: 89.8% and 89.6%, respectively. Results of pulmonary function tests, arterial blood gas analysis and routine blood samples also did not change. Conclusion: The decrease of the respiratory muscle endurance after the use of verapamil is in line with experiments in animal diaphragms. However, the nocturnal saturation did not change. This finding suggests that the effect found on diaphragm endurance is of no clinical significance and that verapamil can be given to COPD patients without risk of worsening nocturnal saturation. However, this must be confirmed by future larger scale studies. |
Impact of Soil Salinity on the Relation Between Soil Moisture and Dielectric Permittivity | The complex dielectric permittivity spectrum of soil depends mainly on soil volumetric water content and salinity, but other soil properties such as temperature, density and clay content can also significantly impact its dielectric properties. In particular, the influence of soil salinity and texture can have a significant impact on the accuracy of moisture determination with the use of soil moisture-dielectric permittivity calibration curves. The paper deals with the impact of soil salinity on the real part of the complex dielectric permittivity of soils of various texture. Soil samples of various moisture content and salinity were measured with the use of a coaxial transmission-line cell connected to a one-port vector-network-analyzer in the 0.01–3 GHz frequency range. The relations between soil moisture and the real part of dielectric permittivity at frequencies of 30, 70, 150, 200, 500 and 1000 MHz were examined and the frequency bands with the smallest impact of soil salinity and particle-size distribution were obtained. For the tested soils, the optimal frequency bands were found to lie between 150 and 500 MHz. |
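For context on the kind of moisture-permittivity calibration curve whose salinity sensitivity is examined here, one widely used empirical relation (Topp's 1980 calibration, quoted from the standard literature and not from this paper) is:

```latex
\theta_v \;=\; -5.3\times10^{-2}
  \;+\; 2.92\times10^{-2}\,\varepsilon'
  \;-\; 5.5\times10^{-4}\,\varepsilon'^{2}
  \;+\; 4.3\times10^{-6}\,\varepsilon'^{3}
```

where theta_v is the volumetric water content and epsilon' the real part of the relative permittivity; salinity and texture effects shift measurements away from such a single curve, which motivates the search for frequency bands where those effects are smallest.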
Real-time multi-stereo depth estimation on GPU with approximative discontinuity handling | This paper describes a system for dense depth estimation for multiple images in real-time. The algorithm runs almost entirely on standard graphics hardware, leaving the main CPU free for other tasks such as image capture, compression and storage during scene capture. We follow a plane-sweep approach extended by truncated SSD scores, shiftable windows and best camera selection. We do not need specialized hardware and exploit the computational power of freely programmable PC graphics hardware. Dense depth maps are computed with up to 20 fps. |
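A minimal CPU sketch of the per-plane truncated SSD cost used in plane-sweep stereo is shown below; it omits the shiftable windows, best-camera selection and GPU implementation described above, and warp_to_plane is a hypothetical helper that projects the other view onto one depth hypothesis.

```python
import numpy as np

def truncated_ssd_cost(ref, warped, window=5, trunc=1000.0):
    """Window-aggregated truncated SSD between a reference image and another
    view warped onto one depth-plane hypothesis (simplified sketch)."""
    diff = np.minimum((ref.astype(np.float64) - warped.astype(np.float64)) ** 2, trunc)
    # Box-filter aggregation via a summed-area table.
    pad = np.pad(diff, ((1, 0), (1, 0)))
    sat = pad.cumsum(0).cumsum(1)
    r = window // 2
    h, w = diff.shape
    cost = np.full((h, w), np.inf)
    for y in range(r, h - r):
        for x in range(r, w - r):
            cost[y, x] = (sat[y + r + 1, x + r + 1] - sat[y - r, x + r + 1]
                          - sat[y + r + 1, x - r] + sat[y - r, x - r])
    return cost

# Per pixel, the selected depth is the plane with the minimum cost:
# depth = argmin over planes of truncated_ssd_cost(ref, warp_to_plane(other, plane))
```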
A pose graph-based localization system for long-term navigation in CAD floor plans | Accurate localization is an essential technology for flexible automation. Industrial applications require mobile platforms to be precisely localized in complex environments, often subject to continuous changes and reconfiguration. Most of the approaches use precomputed maps both for localization and for interfacing robots with workers and operators. This results in increased deployment time and costs as mapping experts are required to setup the robotic systems in factory facilities. Moreover, such maps need to be updated whenever significant changes in the environment occur in order to be usable within commanding tools. To overcome those limitations, in this work we present a robust and highly accurate method for long-term LiDAR-based indoor localization that uses CAD-based architectural floor plans. The system leverages a combination of graph-based mapping techniques and Bayes filtering to maintain a sparse and up-to-date globally consistent map that represents the latest configuration of the environment. This map is aligned to the CAD drawing using prior constraints and is exploited for relative localization, thus allowing the robot to estimate its current pose with respect to the global reference frame of the floor plan. Furthermore, the map helps in limiting the disturbances caused by structures and clutter not represented in the drawing. Several long-term experiments in changing real-world environments show that our system outperforms common state-of-the-art localization methods in terms of accuracy and robustness while remaining memory and computationally efficient. |
A generalized accurate modelling method for automotive bulk current injection (BCI) test setups up to 1 GHz | Development of accurate system models of immunity test setups might be extremely time consuming or even impossible. Here a new generalized approach to develop accurate component-based models of different system-level EMC test setups is proposed on the example of a BCI test setup. An equivalent circuit modelling of the components in LF range is combined with measurement-based macromodelling in HF range. The developed models show high accuracy up to 1 GHz. The issues of floating PCB configurations and incorporation of low frequency behaviour could be solved. Both frequency and time-domain simulations are possible. Arbitrary system configurations can be assembled quickly using the proposed component models. Any kind of system simulation like parametric variation and worst-case analysis can be performed with high accuracy. |
Advanced Imaging for the Early Diagnosis of Local Recurrence Prostate Cancer after Radical Prostatectomy | Currently, the diagnosis of local recurrence of prostate cancer (PCa) after radical prostatectomy (RP) is based on the onset of biochemical failure, which is defined by two consecutive values of prostate-specific antigen (PSA) higher than 0.2 ng/mL. The aim of this paper was to review the current roles of advanced imaging in the detection of locoregional recurrence. A nonsystematic literature search using the Medline and Cochrane Library databases was performed up to November 2013. Bibliographies of retrieved and review articles were also examined. Only those articles reporting complete data with clinical relevance for the present review were selected. This review article is divided into two major parts: the first one considers the role of PET/CT in the restaging of PCa after RP; the second part assesses the impact of multiparametric-MRI (mp-MRI) in the depiction of locoregional recurrence. Published data indicate an emerging role for mp-MRI in the depiction of locoregional recurrence, while the performance of PET/CT still remains unclear. Moreover, mp-MRI, thanks to functional techniques, makes it possible to distinguish between residual glandular healthy tissue, scar/fibrotic tissue, granulation tissue, and tumour recurrence, and it may also be able to assess the aggressiveness of nodule recurrence. |
A SUMMARY REVIEW OF VIBRATION-BASED DAMAGE IDENTIFICATION METHODS | This paper provides an overview of methods to detect, locate, and characterize damage in structural and mechanical systems by examining changes in measured vibration response. Research in vibration-based damage identification has been rapidly expanding over the last few years. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is presented. The methods are categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods and linear vs. nonlinear methods. The methods are also described in general terms including difficulties associated with their implementation and their fidelity. Past, current and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of vibration-based damage identification. |
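The premise stated above, that modal parameters are functions of mass, damping and stiffness, follows directly from the standard undamped eigenvalue problem for a discretized structure:

```latex
\left( \mathbf{K} - \omega_i^{2}\,\mathbf{M} \right)\boldsymbol{\phi}_i = \mathbf{0},
\qquad i = 1, 2, \dots
```

Here K and M are the global stiffness and mass matrices, and omega_i and phi_i the i-th natural frequency and mode shape. Damage that locally reduces stiffness perturbs K and therefore shifts the measurable frequencies and mode shapes, which is the detectable signature these methods exploit.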
Against Ambiguity | This paper argues that the widespread belief that ambiguity is beneficial in design communication stems from conceptual confusion. Communicating imprecise, uncertain and provisional ideas is a vital part of design teamwork, but what is uncertain and provisional needs to be expressed as clearly as possible. Understanding what uncertainty information designers can and should communicate, and how, is an urgent task for research. Viewing design communication as conveying permitted spaces for further designing is a useful rationalisation for understanding what designers need from their notations and computer tools, to achieve clear communication of uncertain ideas. The paper presents a typology of ways that designs can be uncertain. It discusses how sketches and other representations of designs can be both intrinsically ambiguous, and ambiguous or misleading by failing to convey information about uncertainty and provisionality, with reference to knitwear design, where communication using inadequate representations causes severe problems. It concludes that systematic use of meta-notations for conveying provisionality and uncertainty can reduce these problems. |
Poly(vinyl chloride) processes and products. | Poly(vinyl chloride) resins are produced by four basic processes: suspension, emulsion, bulk and solution polymerization. PVC suspension resins are usually relatively dust-free and granular with varying degrees of particle porosity. PVC emulsion resins are small particle powders containing very little free monomer. Bulk PVC resins are similar to suspension PVC resins, though the particles tend to be more porous. Solution PVC resins are smaller in particle size than suspension PVC with high porosity particles containing essentially no free monomer. The variety of PVC resin products does not lend itself to broad generalizations concerning health hazards. In studying occupational hazards the particular PVC process and the product must be considered and identified in the study. |
A social-spam detection framework | Social networks such as Facebook, MySpace, and Twitter have become increasingly important for reaching millions of users. Consequently, spammers are increasingly using such networks for propagating spam. Although existing filtering techniques such as collaborative filters and behavioral analysis filters are able to significantly reduce spam, each social network needs to build its own independent spam filter and support a spam team to keep spam prevention techniques current. We propose a framework for spam detection which can be used across all social network sites. There are numerous benefits of the framework including: 1) new spam detected on one social network can quickly be identified across social networks; 2) accuracy of spam detection will improve with a large amount of data from across social networks; 3) other techniques (such as blacklists and message shingling) can be integrated and centralized; 4) new social networks can plug into the system easily, preventing spam at an early stage. We provide an experimental study of real datasets from social networks to demonstrate the flexibility and feasibility of our framework. |
Self-supervised Relation Extraction from the Web | Web extraction systems attempt to use the immense amount of unlabeled text on the Web in order to create large lists of entities and relations. Unlike traditional Information Extraction methods, the Web extraction systems do not label every mention of the target entity or relation, instead focusing on extracting as many different instances as possible while keeping the precision of the resulting list reasonably high. SRES is a self-supervised Web relation extraction system that learns powerful extraction patterns from unlabeled text, using short descriptions of the target relations and their attributes. SRES automatically generates the training data needed for its pattern-learning component. The performance of SRES is further enhanced by classifying its output instances using the properties of the instances and the patterns. The features we use for classification and the trained classification model are independent of the target relation, which we demonstrate in a series of experiments. We also compare the performance of SRES to the performance of the state-of-the-art KnowItAll system, and to the performance of its pattern learning component, which learns a simpler pattern language than SRES. |
Procedural Arrangement of Furniture for Real-Time Walkthroughs | This paper presents a procedural approach to generate furniture arrangements for large virtual indoor scenes. The interiors of buildings in 3D city scenes are often omitted. Our solution creates rich furniture arrangements for all rooms of complex buildings and even for entire cities. The key idea is to only furnish the rooms in the vicinity of the viewer while the user explores a building in real time. In order to compute the object layout we introduce an agent-based solution and demonstrate the flexibility and effectiveness of the agent approach. Furthermore, we describe advanced features of the system, like procedural furniture geometry, persistent room layouts, and styles for high-level control. |
Boundary and inertia effects on flow and heat transfer in porous media | The present work analyzes the effects of a solid boundary and the inertial forces on flow and heat transfer in porous media. Specific attention is given to flow through a porous medium in the vicinity of an impermeable boundary. The local volume-averaging technique has been utilized to establish the governing equations, along with an indication of physical limitations and assumptions made in the course of this development. A numerical scheme for the governing equations has been developed to investigate the velocity and temperature fields inside a porous medium near an impermeable boundary, and a new concept of the momentum boundary layer central to the numerical routine is presented. The boundary and inertial effects are characterized in terms of three dimensionless groups, and these effects are shown to be more pronounced in highly permeable media, high Prandtl-number fluids, large pressure gradients, and in the region close to the leading edge of the flow boundary layer. |
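For orientation, one commonly quoted form of the volume-averaged momentum balance that combines the Darcy, Forchheimer (inertial) and Brinkman (boundary viscous) terms is shown below; this is the generic textbook form, and the paper's exact notation and averaging conventions may differ.

```latex
\rho\,(\mathbf{u}\cdot\nabla)\mathbf{u}
  \;=\; -\nabla p
  \;+\; \mu\,\nabla^{2}\mathbf{u}
  \;-\; \frac{\mu}{K}\,\mathbf{u}
  \;-\; \frac{\rho\,F\,\varepsilon}{\sqrt{K}}\,\lvert\mathbf{u}\rvert\,\mathbf{u}
```

Here K is the permeability, epsilon the porosity and F a geometric (Forchheimer) function. The mu/K Darcy term dominates in the bulk, the quadratic term captures inertial effects at higher velocities, and the Brinkman viscous term is what allows a no-slip condition, and hence a momentum boundary layer, at the impermeable wall.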
Building Enterprise Systems Infrastructure Flexibility as Enabler of Organisational Agility: Empirical Evidence | Enterprise systems (ES) that capture the most advanced developments of information technology are becoming common fixtures in most organisations. However, how ES affect organizational agility (OA) has been less researched and the existing research remains equivocal. From the perspective that ES can positively contribute to OA, this research via theory-based model development and rigorous empirical investigation of the proposed model, has bridged significant research gaps and provided empirical evidence for, and insights into, the effect of ES on OA. The empirical results based on data collected from 179 large organizations in Australia and New Zealand which have implemented and used ES for at least one year show that organisations can achieve agility out of their ES in two ways: by developing ES technical competences to build ES-enabled capabilities that digitise their key sensing and responding processes; and when ES-enabled sensing and responding capabilities are aligned in relatively turbulent environment. |
DETAILED CHEMISTRY SPRAY COMBUSTION MODEL FOR THE KIVA CODE | Until recently, the application of the detailed combustion chemistry approach as a predictive tool for engine modeling has been a sort of ”taboo” motivated by different reasons, but, mainly, by an exaggerated rigor to the chemistry/turbulence interaction modeling. The situation has drastically changed only recently, when STAR-CD and Reaction Design declared in the Newsletter of Computational Dynamics (2000/1) the aim to combine a multi-dimensional flow solver with detailed chemistry analysis based on the CHEMKIN and SURFACE CHEMKIN packages. Relying on their future developments, we present here a methodology based on the KIVA code. The basic novelty of the proposed methodology is the coupling of a generalized partially stirred reactor (PaSR) model with high-efficiency numerics based on a sparse matrix algebra technique to treat detailed oxidation kinetics of hydrocarbon fuels, assuming that chemical processes proceed in two successive steps: the reaction act follows after the micro-mixing resolved on a sub-grid scale. In a completed form, the technique represents a detailed-chemistry extension of the classic EDC turbulent combustion model. The model application is illustrated by results of numerical simulation of spray combustion and emission formation in the Volvo D12C DI Diesel engine. The results of the 3-D engine modeling on a sector mesh are in reasonable agreement with video data obtained using an endoscopic technique. INTRODUCTION As pollutant emission regulations become more stringent, it becomes increasingly difficult to reconcile emission requirements with engine economy and thermal efficiency. Soot formation in DI Diesel engines is the key environmental problem whose solution will define the future of these engines: will they survive or are they doomed to disappear? To achieve the design goals, an understanding of the salient features of spray combustion and emission formation processes is required. Diesel spray combustion is a nonstationary, three-dimensional, multi-phase process that proceeds in a high-pressure and high-temperature environment. Recent attempts to develop a ”conceptual model” of diesel spray combustion, see Dec (1997), represent it as a relatively well organized process in which events take place in a logical sequence as the fuel evolves along the jet, undergoing the various stages: spray atomization, droplet ballistics and evaporation, reactant mixing (macro- and micro-mixing), and, finally, heat release and emissions formation. This opens new perspectives for modeling based on the realization of idealized patterns well confirmed by optical diagnostics data. The success of engine CFD simulations depends on submodels of the physical processes incorporated into the main solver. The KIVA-3v computer code developed by Amsden (1993, July 1997) has been selected for the reason that the code source is available, thus representing an ideal platform for modification, validation and evaluation. For Diesel engine applications, the KIVA codes solve the conservation equations for evaporating fuel sprays coupled with the three-dimensional turbulent fluid dynamics of compressible, multicomponent, reactive gases in engine cylinders with arbitrarily shaped piston geometries.
The code treats in different ways ”fast” chemical reactions, which are assumed to be in equilibrium, and ”slow” reactions proceeding kinetically, albeit the general trimolecular processes with different third body efficiencies are not incorporated in the mechanism. The turbulent combustion is realized in the form of Magnussen-Hjertager approach not accounting for chemistry/turbulence interaction. This is why the chemical routines in the original code were replaced with our specialized sub-models. The code fuel library has been also updated using modern property data compiled in Daubert and Danner (1989-1994). The detailed mechanism integrating the n-heptane oxidation chemistry with the kinetics of aromatics (up to four aromatic rings) formation for rich acetylene flames developed by Wang and Frenklach (1997) consisting of 117 species and 602 reactions has been validated in conventional kinetic analysis, and a reduced mechanism (60 species, including soot forming agents and N2O and NOx species, 237 reactions) has been incorporated into the KIVA-3v code. This extends capabilities of the code to predict spray combustion of hydrocarbon fuels with particulate emission. |
Light propagation and large-scale inhomogeneities | We consider the effect on the propagation of light of inhomogeneities with sizes of order 10 Mpc or larger. The Universe is approximated through a variation of the Swiss-cheese model. The spherical inhomogeneities are void-like, with central underdensities surrounded by compensating overdense shells. We study the propagation of light in this background, assuming that the source and the observer occupy random positions, so that each beam travels through several inhomogeneities at random angles. The distribution of luminosity distances for sources with the same redshift is asymmetric, with a peak at a value larger than the average one. The width of the distribution and the location of the maximum increase with increasing redshift and length scale of the inhomogeneities. We compute the induced dispersion and bias on cosmological parameters derived from the supernova data. They are too small to explain the perceived acceleration without dark energy, even when the length scale of the inhomogeneities is comparable to the horizon distance. Moreover, the dispersion and bias induced by gravitational lensing at the scales of galaxies or clusters of galaxies are larger by at least an order of magnitude. |
Transforming Time Series into Complex Networks | We introduce transformations from time series data to the domain of complex networks which allow us to characterise the dynamics underlying the time series in terms of topological features of the complex network. We show that specific types of dynamics can be characterised by a specific prevalence in the complex network motifs. For example, low-dimensional chaotic flows with one positive Lyapunov exponent form a single family while noisy non-chaotic dynamics and hyper-chaos are both distinct. We find that the same phenomenon also holds for discrete map-like data. These algorithms provide a new way of studying chaotic time series and equip us with a wide range of statistical measures previously not available in the field of nonlinear time series analysis. |
Online social networks and offline protest | Large-scale protests occur frequently and sometimes overthrow entire political systems. Meanwhile, online social networks have become an increasingly common component of people’s lives. We present a large-scale longitudinal study that connects online social media behaviors to offline protest. Using almost 14 million geolocated tweets and data on protests from 16 countries during the Arab Spring, we show that increased coordination of messages on Twitter using specific hashtags is associated with increased protests the following day. The results also show that traditional actors like the media and elites are not driving the results. These results indicate social media activity correlates with subsequent large-scale decentralized coordination of protests, with important implications for the future balance of power between citizens and their states. |
MetaStyle: Three-Way Trade-Off Among Speed, Flexibility, and Quality in Neural Style Transfer | An unprecedented boom has been witnessed in the research area of artistic style transfer ever since Gatys et al. introduced the neural method. One of the remaining challenges is to balance a trade-off among three critical aspects—speed, flexibility, and quality: (i) the vanilla optimization-based algorithm produces impressive results for arbitrary styles, but is unsatisfyingly slow due to its iterative nature, (ii) the fast approximation methods based on feed-forward neural networks generate satisfactory artistic effects but are bound to only a limited number of styles, and (iii) feature-matching methods like AdaIN achieve arbitrary style transfer in a real-time manner but at the cost of compromised quality. We find it considerably difficult to balance the trade-off well merely using a single feed-forward step and ask, instead, whether there exists an algorithm that could adapt quickly to any style, while the adapted model maintains high efficiency and good image quality. Motivated by this idea, we propose a novel method, coined MetaStyle, which formulates the neural style transfer as a bilevel optimization problem and combines learning with only a few post-processing update steps to adapt to a fast approximation model with satisfying artistic effects, comparable to the optimization-based methods for an arbitrary style. The qualitative and quantitative analysis in the experiments demonstrates that the proposed approach achieves high-quality arbitrary artistic style transfer effectively, with a good trade-off among speed, flexibility, and quality. |
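A schematic rendering of the bilevel idea, written in a MAML-like form, is shown below; the symbols (adaptation rate alpha, per-style loss L_s, single inner gradient step) are generic illustrations rather than the paper's exact formulation, which uses a few adaptation steps per style.

```latex
\min_{\theta}\ \mathbb{E}_{s \sim \mathcal{S}}
  \big[\, \mathcal{L}_{s}\!\left(\theta_{s}^{*}\right) \big]
\qquad \text{where} \qquad
\theta_{s}^{*} \;=\; \theta - \alpha\,\nabla_{\theta}\,\mathcal{L}_{s}(\theta)
```

The outer problem seeks shared parameters theta that are a good starting point for every style s, while the inner problem performs the cheap per-style adaptation that yields the fast, style-specific feed-forward model.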
Response to letter regarding article, "postprocedural aortic regurgitation in balloon-expandable and self-expandable transcatheter aortic valve replacement procedures: analysis of predictors and impact on long-term mortality: insights from the FRANCE2 registry". | Transcatheter aortic valve replacement (TAVR) is a technique to treat patients with symptomatic aortic stenosis and contraindications or at high risk for conventional surgical valve replacement. Two revalving devices based on different technical concepts, self-expandable (SE) or balloon-expandable (BE), have been developed and are currently available. Background—Significant postprocedural aortic regurgitation (AR) is observed in 10% to 20% of cases after transcatheter aortic valve replacement (TAVR). The prognostic value and the predictors of such a complication in balloon-expandable (BE) and self-expandable (SE) TAVR remain unclear. Methods and Results—TAVR was performed in 3195 consecutive patients at 34 hospitals. Postprocedural transthoracic echocardiography was performed in 2769 (92%) patients of the eligible population, and these patients constituted the study group. Median follow-up was 306 days (Q1–Q3=178–490). BE and SE devices were implanted in 67.6% (n=1872) and 32.4% (n=897). Delivery was femoral (75.3%) or nonfemoral (24.7%). A postprocedural AR≥grade 2 was observed in 15.8% and was more frequent in SE (21.5%) than in BE-TAVR (13.0%, P=0.0001). Extensive multivariable analysis confirmed that the use of a SE device was one of the most powerful independent predictors of postprocedural AR≥grade 2. For BE-TAVR, 8 independent predictors of postprocedural AR≥grade 2 were identified including femoral delivery (P=0.04), larger aortic annulus (P=0.0004), and smaller prosthesis diameter (P=0.0001). For SE-TAVR, 2 independent predictors were identified including femoral delivery (P=0.0001). Aortic annulus and prosthesis diameter were not predictors of postprocedural AR for SE-TAVR. A postprocedural AR≥grade 2, but not a postprocedural AR=grade 1, was a strong independent predictor of 1-year mortality for BE (hazard ratio=2.50; P=0.0001) and SE-TAVR (hazard ratio=2.11; P=0.0001). Although postprocedural AR≥grade 2 was well tolerated in patients with AR≥grade 2 at baseline (1-year mortality=7%), it was associated with a very high mortality in other subgroups: renal failure (43%), AR<grade 2 at baseline (31%), low transaortic gradient (35%), or nonfemoral delivery (45%). Conclusions—Postprocedural AR≥grade 2 was observed in 15.8% of successful TAVR and was the strongest independent predictor of 1-year mortality. The use of the SE device was a powerful independent predictor of postprocedural AR≥grade 2. |
Origins of the Cold War: New Evidence | Conservative Western historians have been hustling to the press with scoops drawing upon privileged, selective, and purchased access to the archives and spymasters of the former Soviet Union. Not surprisingly, the greatest notoriety has gone to "smoking guns" that purportedly corroborate the basic cold war dogma of the U.S. right, particularly the McCarthyite spymania. But there is other recently available material from the Eastern archives given little or no attention. These documents call into question a fundamental premise of the Official American History of the second half of the twentieth century: Soviet responsibility for the origins of the Cold War. |
Automatically Extracting Procedural Knowledge from Instructional Texts using Natural Language Processing | Procedural knowledge is the knowledge required to perform certain tasks, and forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While these readable instructions have been useful learning resources for humans, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine-interpretable formats from instructions has become an increasingly popular research topic due to its potential applications in process automation. However, it has been insufficiently addressed. This paper presents an approach and an implemented system to help users automatically acquire procedural knowledge in structured forms from instructions. We introduce a generic semantic representation of procedures for analysing instructions, with which natural language techniques are applied to automatically extract structured procedures from instructions. The method is evaluated in three domains to justify the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system. |
Neural Decision Forests for Semantic Image Labelling | In this work we present Neural Decision Forests, a novel approach to jointly tackle data representation- and discriminative learning within randomized decision trees. Recent advances in deep learning architectures demonstrate the power of embedding representation learning within the classifier -- an idea that is intuitively supported by the hierarchical nature of the decision forest model, where the input space is typically left unchanged during training and testing. We bridge this gap by introducing randomized Multi-Layer Perceptrons (rMLP) as new split nodes, which are capable of learning non-linear, data-specific representations and taking advantage of them by finding optimal predictions for the emerging child nodes. To prevent overfitting, we i) randomly select the image data fed to the input layer, ii) automatically adapt the rMLP topology to meet the complexity of the data arriving at the node and iii) introduce an l1-norm based regularization that additionally sparsifies the network. The key findings in our experiments on three different semantic image labelling datasets are consistently improved results and significantly compressed trees compared to conventional classification trees. |
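A toy sketch of the central structural idea, a split node realized as a small MLP whose sigmoid output routes a sample toward a child, is given below. It deliberately omits the paper's topology adaptation, l1 sparsification and training procedure, and uses random parameters purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_split_node(dim, hidden=8):
    """A toy randomized MLP split node: one hidden layer, sigmoid routing output."""
    return {"W1": rng.normal(scale=0.1, size=(dim, hidden)),
            "b1": np.zeros(hidden),
            "w2": rng.normal(scale=0.1, size=hidden),
            "b2": 0.0}

def route_left_prob(node, x):
    """Probability that sample x is sent to the left child of this node."""
    h = np.tanh(x @ node["W1"] + node["b1"])
    z = h @ node["w2"] + node["b2"]
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=16)              # stand-in for a 16-d image feature vector
node = make_split_node(dim=16)
print("left-routing probability:", route_left_prob(node, x))
```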
Environment Safety of Oil Resources, Exploitation In Offshore Russian Arctic Regions | Technical particularities of oil deposit development on the shelf make it difficult to estimate and prevent possible impacts on fragile Arctic ecosystems during natural resource exploitation. Environmental safety in the Arctic regions is assured by legal environment protection. The paper shows that the legal base for natural resource development in offshore Russian regions is insufficient to promote environmental safety in the Arctic. The authors present measures for developing the legal base that will provide environmental safety of the Arctic Zone of the Russian Federation in the future. The existing state system of ecological monitoring and experience from Russian United Monitoring System programs are presented. It is very important to estimate ecological damage and remediate it in time. Responsibility for environmental pollution must be strictly defined by special normative legal documents. The results of the study can be used for maintaining environmental safety in prospective regions of oil resource development and for improving regional and international oil spill response systems in the Arctic. |
The mineralocorticoid receptor agonist, fludrocortisone, differentially inhibits pituitary–adrenal activity in humans with psychotic major depression | INTRODUCTION
Hypothalamic-pituitary-adrenal (HPA) axis dysregulation has been linked with major depression, particularly psychotic major depression (PMD), with mineralocorticoid receptors (MRs) playing a role in HPA-axis regulation and the pathophysiology of depression. Herein we hypothesize that the MR agonist fludrocortisone differentially inhibits the HPA axis of psychotic major depression subjects (PMDs), non-psychotic major depression subjects (NPMDs), and healthy control subjects (HCs).
METHODS
Fourteen PMDs, 16 NPMDs, and 19 HCs were admitted to the Stanford University Hospital General Clinical Research Center. Serum cortisol levels were sampled at baseline and every hour from 18:00 to 23:00h, when greatest MR activity is expected, on two consecutive nights. On the second afternoon at 16:00h all subjects were given 0.5mg fludrocortisone. Mean cortisol levels pre- and post-fludrocortisone and percent change in cortisol levels were computed.
RESULTS
There were no significant group differences for cortisol at baseline: F(2,47)=.19, p=.83. There were significant group differences for post-fludrocortisone cortisol, F(2,47)=5.13, p=.01; levels were significantly higher in PMDs compared to HCs (p=.007), but not compared to NPMDs (p=.18). There were no differences between NPMDs and HCs (p=.61). Also, PMDs had a lower percent change from baseline in cortisol levels at 22:00h than NPMDs (p=.01) or HCs (p=.009).
CONCLUSIONS
Individuals with psychotic major depression compared to healthy control subjects have diminished feedback inhibition of the hypothalamic-pituitary-adrenal (HPA) axis in response to the mineralocorticoid receptor agonist fludrocortisone. To our knowledge, this is the first study to examine HPA axis response to MR stimulation in major depression (with and without psychosis), and only the third study to demonstrate that exogenously administered fludrocortisone can down-regulate the HPA axis in humans. |
Predicting Pre-click Quality for Native Advertisements | Native advertising is a specific form of online advertising where ads replicate the look-and-feel of their serving platform. In such context, providing a good user experience with the served ads is crucial to ensure long-term user engagement. In this work, we explore the notion of ad quality, namely the effectiveness of advertising from a user experience perspective. We design a learning framework to predict the pre-click quality of native ads. More specifically, we look at detecting offensive native ads, showing that, to quantify ad quality, ad offensive user feedback rates are more reliable than the commonly used click-through rate metrics. We then conduct a crowd-sourcing study to identify which criteria drive user preferences in native advertising. We translate these criteria into a set of ad quality features that we extract from the ad text, image and advertiser, and then use them to train a model able to identify offensive ads. We show that our model is very effective in detecting offensive ads, and provide in-depth insights on how different features affect ad quality. Finally, we deploy a preliminary version of such model and show its effectiveness in the reduction of the offensive ad feedback rate. |
"Put-that-there": Voice and gesture at the graphics interface | Recent technological advances in connected-speech recognition and position sensing in space have encouraged the notion that voice and gesture inputs at the graphics interface can converge to provide a concerted, natural user modality.
The work described herein involves the user commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference. |
A Potentially Implementable FPGA for Quantum-Dot Cellular Automata | While still relatively “new”, the quantum-dot cellular automata (QCA) appears to be able to provide many of the properties and functionalities that have made CMOS successful over the past several decades. Early experiments have demonstrated and realized most, if not all, of the “fundamentals” needed for a computational circuit – devices, logic gates, wires, etc. This study introduces the beginning of a next step in experimental work: designing a computationally useful – yet simple and fabricatable – circuit for QCA. The design target is a QCA Field Programmable Gate Array (FPGA). |
Impact of high-throughput screening in biomedical research | High-throughput screening (HTS) has been postulated in several quarters to be a contributory factor to the decline in productivity in the pharmaceutical industry. Moreover, it has been blamed for stifling the creativity that drug discovery demands. In this article, we aim to dispel these myths and present the case for the use of HTS as part of a proven scientific tool kit, the wider use of which is essential for the discovery of new chemotypes. |
Near-Surface to Deeper Burial Cementation Patterns and Foreland Basin Evolution, Middle Ordovician Ramp Carbonates, Virginia: ABSTRACT | Middle Ordovician ramp carbonates, Virginia, were deposited in a subsiding foreland basin bordered by developing tectonic highlands. Ramp carbonates are largely occluded by nonferroan, clear rim, and equant cements which contain cathodoluminescent zones consisting of nonluminescent (oldest), bright and dull (youngest) cements. The zonation largely relates to increasingly reducing conditions of pore waters. Zoned cements in peritidal beds have complex zonations, pendant to pore-rimming fabrics, and are associated with vadose silt (which abuts all cement zones); these cements are vadose to shallow phreatic. Major cementation of subtidal facies occurred under burial conditions. Zoned burial cements have a simple zonation reflecting progressive burial (up to 3,000 m) of carbonates. Shallow burial nonluminescent cement formed from oxidizing, meteoric waters which expelled anoxic, connate marine waters; meteoric waters were carried by aquifers from tectonic upland recharge areas. Deeper burial, bright and dull cements formed at depths (2,000 to 3,000 m) and temperatures (75 to 135°C) associated with hydrocarbon emplacement during the Late Devonian or Mississippian. Final, clear dull cement fills tectonic fractures and was emplaced during late Paleozoic deformation. Deeper burial diagenesis appears to be genetically linked to late Paleozoic, Mississippi Valley-type mineralization. Zoned peritidal and burial cements are mainly confined to southeastern parts of the ramp, where cementation was influenced by meteoric waters from developing uplands on the southeastern margin of the foreland basin and carried northwest by aquifers. Cements in northwestern peritidal and subtidal ramp facies are dominated by nonzoned dull cements, where cementation was little influenced by upland-source meteoric waters. The close association of zoned cements and regional uplands in the Middle Ordovician sequence indicates the importance of assessing regional geology, geologic history, and tectonics in understanding regional cementation patterns and cementation processes of ancient carbonate platforms. |
Pediatric Pain Syndromes and Noninflammatory Musculoskeletal Pain. | Chronic musculoskeletal pain (CMP) is one of the main reasons for referral to a pediatric rheumatologist and is the third most common cause of chronic pain in children and adolescents. Causes of CMP include amplified musculoskeletal pain, benign limb pain of childhood, hypermobility, overuse syndromes, and back pain. CMP can negatively affect physical, social, academic, and psychological function so it is essential that clinicians know how to diagnose and treat these conditions. This article provides an overview of the epidemiology and impact of CMP, the steps in a comprehensive pain assessment, and the management of the most common CMPs. |
Combining EEG Data with Place and Plausibility Responses as an Approach to Measuring Presence in Outdoor Virtual Environments | Outdoor virtual environments (OVEs) are becoming increasingly popular, as they allow a sense of presence in places that are inaccessible or protected from human intervention. These virtual environments (VEs) need to address physical modalities other than vision and hearing. We analyze the influence of four different physical modalities (vision, hearing, haptics, and olfaction) on the sense of presence on a virtual journey through the sea and the Laurissilva Forest of Funchal, Portugal. We applied Slater et al.'s (2010) method together with data gathered by the Emotiv EPOC EEG in an OVE setting. In such a setting, the combination of haptics and hearing are more important than the typical virtual environment (vision and hearing) in terms of place and plausibility illusions. Our analysis is particularly important for designers interested in crafting similar VEs because we classify different physical modalities according to their importance in enhancing presence. |
Learning Text Similarity with Siamese Recurrent Networks | This paper presents a deep architecture for learning a similarity metric on variable-length character sequences. The model combines a stack of character-level bidirectional LSTMs with a Siamese architecture. It learns to project variable-length strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”). |
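A compact PyTorch sketch of the encoder half of such a model, a character-level BiLSTM shared between the two branches and compared by cosine similarity, is shown below. The pooling choice, dimensions and byte-level character mapping are illustrative assumptions; the paper trains the shared encoder with a contrastive objective on labeled string pairs, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharBiLSTMEncoder(nn.Module):
    """Character-level BiLSTM mapping a string to a fixed-size embedding."""
    def __init__(self, vocab_size=128, char_dim=32, hidden=64, out_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, out_dim)

    def forward(self, char_ids):                      # (batch, seq_len) int64
        h, _ = self.lstm(self.embed(char_ids))        # (batch, seq_len, 2*hidden)
        return torch.tanh(self.proj(h.mean(dim=1)))   # mean-pool over characters

def encode(enc, s, max_len=32):
    ids = [min(ord(c), 127) for c in s[:max_len]] + [0] * max(0, max_len - len(s))
    return enc(torch.tensor([ids]))

enc = CharBiLSTMEncoder()                              # shared by both branches
a, b = encode(enc, "Java developer"), encode(enc, "Java programmer")
print("cosine similarity:", F.cosine_similarity(a, b).item())  # untrained, so arbitrary
```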
SLAM-based Pseudo-GNSS/INS localization system for indoor LiDAR mobile mapping systems | The emergence of Mobile Mapping Systems (MMS) has set a marked paradigm in the photogrammetric and mapping community that has not only facilitated comprehensive 3D mapping of different environments but has also paved way for new aspects of applied research in this direction. Out of the many essential blocks that make these MMS a viable tool for mapping, the positioning and orientation module is considered to be a crucial yet an expensive component. The integration of such a module with mapping sensors has allowed for the extensive implementation of such systems to provide high-quality maps. However, while such systems do not lack in system robustness and performance in general, the deployment of these systems is restricted to applications and environments where a consistent availability of GNSS signals is assured. Extending these MMS to GNSS-denied areas, such as indoor environments, is therefore quite challenging and necessitates the development of an alternative module that can act as a viable substitute to GNSS/INS for system operation without having to resort to an exhaustive modification of the same to function in GNSS-denied locations. In this research, such a case has been considered for the implementation of an indoor MMS using an Unmanned Ground Vehicle (UGV) and a 3D laser scanner for the task of generating high density maps of GNSS-denied indoor areas. To mitigate the absence of GNSS data, this paper proposes a Pseudo-GNSS/INS module integrated framework which utilizes probabilistic Simultaneous Localization and Mapping (SLAM) techniques to estimate the platform pose and heading from 3D laser scanner data. This proposed framework has been implemented based on three major notions: (i) using geometric methods for sparse point cloud extraction to carry out real-time SLAM, (ii) generating position data and geo-referencing signals from these real-time SLAM pose estimates, and (iii) carrying out the entire operation through use of a single 3D mapping sensor. The final geo-referenced point cloud can then be generated through post-processing by the Iterative Closest Projected Point (ICPP) registration technique which also diminishes the effect of sensor measurement noise. The implementation, performance and results of the proposed MMS framework for an indoor mapping system have been presented in this paper that demonstrate the ability of this Pseudo-GNSS/INS framework to operate flexibly in GNSS-denied areas. |
Knowledge and Common Knowledge in a Distributed Environment | We argue that the right way to understand distributed protocols is by considering how messages change the state of knowledge of a system. We present a hierarchy of knowledge states that a system may be in, and discuss how communication can move the system's state of knowledge of a fact up the hierarchy. Of special interest is the notion of common knowledge. Common knowledge is an essential state of knowledge for reaching agreements and coordinating action. We show that in practical distributed systems, common knowledge is not attainable. We introduce various relaxations of common knowledge that are attainable in many cases of interest. We describe in what sense these notions are appropriate, and discuss their relationship to each other. We conclude with a discussion of the role of knowledge in distributed systems. |
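For reference, the standard epistemic-logic definitions behind the hierarchy discussed above are: "everyone in group G knows" as the conjunction of individual knowledge, and common knowledge as the infinite conjunction of iterated group knowledge. The paper's unattainability result concerns the latter in systems without guaranteed simultaneity.

```latex
E\varphi \;\equiv\; \bigwedge_{i \in G} K_i\varphi,
\qquad
C\varphi \;\equiv\; \bigwedge_{k \ge 1} E^{k}\varphi
```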
A phase I study of oxaliplatin in combination with gemcitabine: correlation of clinical outcome with gene expression | Oxaliplatin has in vitro activity similar to or higher than that of other platinum agents. Preclinically, gemcitabine has demonstrated synergy when combined with platinum compounds. These facts formed the rationale for determining the maximum tolerated dose (MTD) of gemcitabine in combination with oxaliplatin. Eligible patients with advanced incurable solid tumors were given oxaliplatin 130 mg/m2 as a 2-h infusion on day 1, followed by escalating doses of gemcitabine given over 30 min on days 1 and 8 of a 21-day cycle. A total of 43 patients were enrolled, including 30 patients at the MTD in an expanded cohort. At a gemcitabine dose of 800 mg/m2, 1/6 patients had a dose limiting toxicity (DLT) (grade 3 blurred vision and memory loss). At 1,000 mg/m2, 1/6 patients had a DLT (grade 3 increase in AST). At 1,200 mg/m2, 2/3 patients had a DLT (grade 4 thrombocytopenia and grade 3 confusion). The MTD of gemcitabine with 130 mg/m2 of oxaliplatin was therefore 1,000 mg/m2. The clearances of gemcitabine and ultrafilterable platinum are within the ranges previously reported for single agents. A patient with colon cancer had a partial response, and 21 patients had a best response of stable disease. In patients with tumor biopsies treated at the MTD, decreased ribonucleotide reductase M2 expression correlated with response. Treatment with gemcitabine and oxaliplatin was well tolerated with primarily hematologic toxicity at the MTD. Study of biochemical correlates of response remains of interest, although current results remain exploratory. |
Broadband printed slot antenna for the fifth generation (5G) mobile and wireless communications | In this paper, a broadband elliptical-shaped slot antenna for future fifth generation (5G) wireless applications is proposed. The antenna has a compact size of 0.5λ0 × 0.5λ0 at 30 GHz. It consists of a circular-shaped radiating patch fed by a 50-Ω microstrip line via a proximity-feed technique. An elliptically shaped slot is etched in the ground plane to enhance the antenna bandwidth. A stub has been added to the microstrip feed line to achieve a better impedance-matching bandwidth. Simulated results indicate that the proposed 5G antenna yields a broadband impedance bandwidth larger than 67% (from 20 GHz to beyond 40 GHz) for S11 less than -10 dB. The achieved bandwidth covers both future 5G bands (28/38 GHz). The proposed antenna provides almost omni-directional patterns, relatively flat gain, and high radiation efficiency across the frequency band, excluding the rejected band. |
Cost of open versus laparoscopically assisted right hemicolectomy for cancer | The aim of this study was to estimate and compare the costs of open right hemicolectomy (ORHC) versus laparoscopically assisted right hemicolectomy (LARHC) performed for cancer. A retrospective cost analysis of 61 consecutive patients operated on between January 1992 and August 1994 for right-sided colonic cancer by either LARHC (n = 28) or ORHC (n = 33) was performed. The analysis focused on the cost (in Australian dollars) incurred from the date of operation to the date of discharge. LARHC was significantly more expensive than ORHC (total cost LARHC $9064, ORHC $7881; p < 0.001). LARHC was associated with a significantly longer operating room utilization time (LARHC 261 minutes, ORHC 203 minutes; p < 0.001) and a greater cost of disposables (LARHC $854, ORHC $189; p < 0.001). This study demonstrates no cost benefit for LARHC compared to ORHC when performed for cancer. |
Building Conference Proceedings Requires Adaptable Workflow and Content Management | ProceedingsBuilder is a system that helps the proceedings chair of a scientific conference carry out their chores. It has features of both workflow management systems (WFMS) and content management systems (CMS) in order to collect the material for the printed proceedings and other products. ProceedingsBuilder has been operational at several conferences, including VLDB 2005. When using ProceedingsBuilder, we learned first-hand which kinds of workflow adaptations may become necessary. Existing WFMS do not offer support for most of them. The concern of this article is to describe and classify these various requirements regarding adaptation. ProceedingsBuilder is an example of a broad class of systems, namely editorial systems that collect content in order to publish it. Our findings are therefore of interest to a broader audience, not only to conference organizers. |
The History of Ideas and the Study of Politics | IN RECENT YEARS, a cloud of methodological confusion has been cast over the study of politics. The traditional procedure, studying political institutions and selected texts from the classics of political philosophy, had previously existed in uneasy compromise in most politics departments. The application of sociological techniques to the study of politics has made that compromise even more difficult to maintain, and many observers have argued that politics, far from being a unified discipline, is simply a naive application of different disciplines to a loosely defined and notoriously ambiguous subject-matter. Politics, in short, is seen to possess no more unity than such a subject as European studies. A voice from America, perceiving the difficulties of the old world, has attempted to unify the study of politics by the application of new methods. Closely adhering to the methods of the natural sciences, strongly influenced by the techniques developed in cultural anthropology, the voice has called us to follow along the road to positivism to a haven of rigorously empirical political science. But if we are of a historical frame of mind, we will appreciate that revolutionary apostles have been quite a normal feature in the history of political thought. We need only recall the claims made by Hobbes about his science of politics. And the claim to base the study of politics upon the natural sciences is not itself a new phenomenon. It has been made with monotonous regularity since the |
FOSTERING CRITICAL REFLECTION IN ADULTHOOD A Guide to Transformative and Emancipatory Learning ‘ How Critical Reflection triggers Transformative Learning ’ | 1. Epistemic Distortions: involve treating an abstraction as though it were an existing object, objectifying it (Whitehead's "fallacy of misplaced concreteness"). Interpreting reality concretely when what is required is interpreting it abstractly is a familiar epistemic distortion. Still another is the early positivist supposition that only those propositions are meaningful that are empirically verifiable. 2. Socio-cultural Distortions: involve taking for granted belief systems that pertain to power and social relationships, especially those currently prevailing and legitimized and enforced by institutions. A common sociocultural distortion is mistaking self-fulfilling and self-validating beliefs for beliefs that are not self-fulfilling or self-validating. If we believe that members of a subgroup are lazy, unintelligent, and unreliable and treat them accordingly, they may become lazy, unintelligent, and unreliable. We have created a 'self-fulfilling prophecy'. When based on mistaken premises in the first place, such a belief becomes a distorted meaning perspective. Another distortion of this type is assuming that the particular interest of a subgroup is the general interest of the group as a whole (Geuss, 1981, p. 14). When people refer to ideology as a distorted belief system, they usually refer to what is understood here as sociocultural distortion. As critical social theorists have emphasized, ideology can become a form of false consciousness in that it supports, stabilizes, or legitimates dependency-producing social institutions, unjust social practices, and relations of exploitation, exclusion, and domination. It reflects the hegemony of the collective, mainstream meaning perspective and existing power relationships that actively support the status quo. Ideology is a form of prereflexive consciousness, which does not question the validity of existing social norms and resists critique of presuppositions. Such social amnesia is manifested in every facet of our lives: the economic, political, social, health, religious, educational, occupational, and familial. Television has become a major force in perpetuating and extending the hegemony of mainstream ideology as, increasingly, will the Internet. The work of Paulo Freire (1970) in traditional village cultures has demonstrated how an adult educator can precipitate as well as facilitate learning that is critically reflective on long-established and oppressive social norms. 3. Psychic Distortions: Psychological distortions have to do with presuppositions generating unwarranted anxiety that impedes taking action. Psychiatrist Roger Gould's 'epigenetic' theory of adult development (1978, 1988) suggests that traumatic events in childhood can result in parental prohibitions that, though submerged from consciousness, continue to inhibit adult action by generating anxiety feelings when there is a risk of breaching them. This dynamic results in a lost function, such as the ability to confront, to feel sexual, or to take risks, that must be regained if one is to become a fully functional adult. Adulthood is a time of regaining such lost functions. The learner must be helped to identify both the particular action that they feel blocked about taking and the source and nature of stress in making a decision to act.
The learner is assisted in identifying the source of this inhibition and differentiating between the anxiety that is a function of childhood trauma and the anxiety that is warranted by their immediate adult life situation. With guidance, the adult can learn to distinguish between past and present pressures and between irrational and rational feelings and to challenge distorting assumptions (such as "If I confront, I may lose all control and violently assault") that inhibit taking the needed action and regaining the lost function. The psychoeducational process of helping adults learn to overcome such ordinary existential psychological distortions can be facilitated by skilled adult counsellors and educators as well as by therapists. It is crucially important that they do so, inasmuch as the most significant adult learning occurs in connection with life transitions. While psychotherapists make transference inferences in a treatment modality, educators do not, but they can provide skilful emotional support and collaborate as co-learners in an educational context. Recent advances in counselling technology greatly enhance their potential for providing this kind of help. For example, Roger Gould's therapeutic learning programme represents an extraordinary resource for counsellors and educators working with adults who are having trouble dealing with such stressful existential life transitions as divorce, retirement, returning to school or the work force, or a change in job status. This interactive, computerized programme of guided self-study provides the learner with the clinical insights and many of the benefits associated with short-term psychotherapy. The counsellor or educator provides emotional support, helps the learner think through choices posed by the programme, explains its theoretical context, provides supplementary information relevant to the life transition, makes referrals, and leads group discussion as required. This extract briefly adumbrates an emerging transformation theory of adult learning in which the construing of meaning is of central importance. Following Habermas (1984), I make a fundamental distinction between instrumental and communicative learning. I have identified the central function of reflection as that of validating what is known. Reflection, in the context of problem solving, commonly focuses on procedures or methods. It may also focus on premises. Reflection on premises involves a critical view of distorted presuppositions that may be epistemic, sociocultural or psychic. Meaning schemes and perspectives that are not viable are transformed through reflection. Uncritically assimilated meaning perspectives, which determine what, how, and why we learn, may be transformed through critical reflection. Reflection on one's own premises can lead to transformative learning. In communicative learning, meaning is validated through critical discourse. The nature of discourse suggests ideal conditions for participation in a consensual assessment of the justification for an expressed or implied idea when its validity is in doubt. These ideal conditions of human communication provide a firm philosophical foundation for adult education. Transformative learning involves a particular function of reflection: reassessing the presuppositions on which our beliefs are based and acting on insights derived from the transformed meaning perspective that results from such reassessments. This learning may occur in the domains of either instrumental or communicative learning.
It may involve correcting distorted assumptions (epistemic, sociocultural, or psychic) from prior learning. This extract constitutes the framework in adult learning theory for understanding the efforts of other authors who suggest specific approaches to emancipatory adult education. Emancipatory education is an organized effort to help the learner challenge presuppositions, explore alternative perspectives, transform old ways of understanding, and act on new perspectives. |
Code vectors: understanding programs through embedded abstracted symbolic traces | With the rise of machine learning, there is a great deal of interest in treating programs as data to be fed to learning algorithms. However, programs do not start off in a form that is immediately amenable to most off-the-shelf learning techniques. Instead, it is necessary to transform the program to a suitable representation before a learning technique can be applied.
In this paper, we use abstractions of traces obtained from symbolic execution of a program as a representation for learning word embeddings. We trained a variety of word embeddings under hundreds of parameterizations, and evaluated each learned embedding on a suite of different tasks. In our evaluation, we obtain 93% top-1 accuracy on a benchmark consisting of over 19,000 API-usage analogies extracted from the Linux kernel. In addition, we show that embeddings learned from (mainly) semantic abstractions provide nearly triple the accuracy of those learned from (mainly) syntactic abstractions. |
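The abstract above describes learning word embeddings over abstracted symbolic-execution traces and evaluating them with API-usage analogies. As a hedged illustration of that general recipe, the sketch below trains a skip-gram model with gensim's Word2Vec on a few invented trace sequences; the toy tokens, corpus, and hyperparameters are assumptions for illustration and are not the paper's abstraction functions or data.

```python
# Hedged sketch: learning embeddings from abstracted symbolic-execution traces.
# The trace tokens below are invented for illustration only.
from gensim.models import Word2Vec  # gensim 4.x API

# Each "sentence" is one abstracted trace: a sequence of symbolic tokens
# (e.g., called APIs, abstracted return values, error codes).
abstracted_traces = [
    ["kmalloc", "RET_NULL", "return_ENOMEM"],
    ["kmalloc", "RET_OK", "memcpy", "kfree"],
    ["mutex_lock", "kmalloc", "RET_OK", "kfree", "mutex_unlock"],
]

model = Word2Vec(
    sentences=abstracted_traces,
    vector_size=100,   # embedding dimensionality
    window=5,          # context window over the trace
    min_count=1,
    sg=1,              # skip-gram; one of many parameterizations one could sweep
)

# Analogy-style query in the spirit of the API-usage analogy benchmark:
# "kmalloc is to kfree as mutex_lock is to ...?"
print(model.wv.most_similar(positive=["kfree", "mutex_lock"],
                            negative=["kmalloc"], topn=1))
```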
Students ’ Approaches to Learning and Teachers ’ Approaches to Teaching in Higher Education | Research into learning and teaching in higher education over the last 25 years has provided a variety of concepts, methods, and findings that are of both theoretical interest and practical relevance. It has revealed the relationships between students’ approaches to studying, their conceptions of learning, and their perceptions of their academic context. It has revealed the relationships between teachers’ approaches to teaching, their conceptions of teaching, and their perceptions of the teaching environment. And it has provided a range of tools that can be exploited for developing our understanding of learning and teaching in particular contexts and for assessing and enhancing the student experience on specific courses and programs. |
The evaluation of children in the primary care setting when sexual abuse is suspected. | This clinical report updates a 2005 report from the American Academy of Pediatrics on the evaluation of sexual abuse in children. The medical assessment of suspected child sexual abuse should include obtaining a history, performing a physical examination, and obtaining appropriate laboratory tests. The role of the physician includes determining the need to report suspected sexual abuse; assessing the physical, emotional, and behavioral consequences of sexual abuse; providing information to parents about how to support their child; and coordinating with other professionals to provide comprehensive treatment and follow-up of children exposed to child sexual abuse. |
Organizing integrity: American science and the creation of public interest organizations, 1955-1975. | The power and prestige of science is typically thought to be grounded in the ability of scientists to draw strong distinctions between scientific and nonscientific interests. This article shows that it is also grounded in a contradictory act: the demonstration of the compatibility between scientific and nonscientific interests. Between 1955 and 1975, American political protest forced scientists to find ways to reconcile these contradictions. One way in which this reconciliation was accomplished was through the formation of public interest science organizations, which permitted the preservation of organizational representations of pure, unified science, while simultaneously assuming responsibilities to serve the public good. |
Study on co-pyrolysis characteristics of rice straw and Shenfu bituminous coal blends in a fixed bed reactor. | Co-pyrolysis behaviors of rice straw and Shenfu bituminous coal were studied in a fixed bed reactor under a nitrogen atmosphere. The pyrolysis temperatures were 700°C, 800°C and 900°C. Six different biomass ratios were used. Gas and tar components were analyzed by gas chromatography and gas chromatography-mass spectrometry, respectively. Under co-pyrolysis conditions, the gas volume yields are higher than the calculated values. Co-pyrolysis tar contains more phenolics and fewer oxygenate compounds than the calculated values. The addition of biomass changes the atmosphere during the pyrolysis process and promotes tar decomposition. The SEM results show that the differences between the blended chars and their parent chars are not significant. The results of char yields and ultimate analysis also show that no significant interactions exist between the two kinds of particles. The changes in gas yield and components are caused by the secondary reactions and tar decomposition. |
Class Label Enhancement via Related Instances | Class-instance label propagation algorithms have been successfully used to fuse information from multiple sources in order to enrich a set of unlabeled instances with class labels. Yet, the relationships between the instances themselves have not been exploited to enhance an initial set of class-instance pairs. We propose two graph-theoretic methods (centrality and regularization), which start with a small set of labeled class-instance pairs and use the instance-instance network to extend the class labels to all instances in the network. We carry out a comparative study with a state-of-the-art knowledge harvesting algorithm and show that our approach can learn additional class labels while maintaining high accuracy. We conduct a comparative study between class-instance and instance-instance graphs used to propagate the class labels and show that the latter achieves higher accuracy. |
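As a rough illustration of propagating class labels over an instance-instance graph, the snippet below implements a generic graph-regularization iteration in the style of Zhou et al.'s label spreading; it is a hedged sketch under assumed notation, not necessarily the exact centrality or regularization formulation used in the paper.

```python
# Hedged sketch: label propagation over an instance-instance similarity graph.
import numpy as np

def propagate_labels(W, Y, alpha=0.85, iters=50):
    """W: (n, n) nonnegative instance-instance similarity matrix.
    Y: (n, k) initial class-label scores (rows of zeros for unlabeled instances)."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / np.sqrt(np.outer(d, d))          # symmetric normalization D^-1/2 W D^-1/2
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y  # spread labels, anchored to the seed pairs
    return F

# Toy example: 4 instances on a chain, 2 classes, instances 0 and 3 are seeds.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)
print(propagate_labels(W, Y).argmax(axis=1))   # predicted class per instance
```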
Fatal truck-bicycle accident involving dragging for 45 km | Vehicle-bicycle accidents with subsequent dragging of the rider over long distances are extremely rare. The case reported here is that of a 16-year-old mentally retarded bike rider who was run over by a truck whose driver failed to notice the accident. The legs of the victim became trapped by the rear axle of the trailer and the body was dragged over 45 km before being discovered under the parked truck. The autopsy revealed that the boy had died from the initial impact and not from the dragging injuries which had caused extensive mutilation. The reports of the technical expert and the forensic pathologist led the prosecutor to drop the case against the truck driver for manslaughter. |
Dynamic programming and influence diagrams | The concept of a super value node is developed to extend the theory of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessary to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by representing value function separability in the structure of the graph of the influence diagram, formulation is simplified and operations on the model can take advantage of the separability. From the decision analysis perspective, this allows simple exploitation of separability in the value function of a decision problem, which can significantly reduce memory and computation requirements. Importantly, this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunity for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They also allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages. |
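To make the notion of value-function separability concrete, the following equations (in assumed, generic notation rather than the paper's) show an additively separable value function over stages and the nested maximization that dynamic programming exploits when such structure is represented by super value nodes.

```latex
% Illustrative only: additive separability over stage sub-values and the
% resulting nested (rollback) optimization; notation is assumed.
\begin{align}
  V(d_{1:T}, x_{1:T}) &= \sum_{t=1}^{T} v_t(x_t, d_t), \\
  \max_{d_{1:T}} \mathrm{E}\left[ V \right]
    &= \max_{d_1} \mathrm{E}\Big[ v_1(x_1, d_1)
       + \max_{d_2} \mathrm{E}\big[ v_2(x_2, d_2) + \cdots \big] \Big].
\end{align}
```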
Optimisation of machining parameters in interrupted cylindrical grinding using the Grey-based Taguchi method |
A Probabilistic Address Parser Using Conditional Random Fields and Stochastic Regular Grammar | Automatic semantic annotation of data from databases or the web is an important pre-process for data cleansing and record linkage. It can be used to resolve the problem of imperfect field alignment in a database or to identify comparable fields for matching records from multiple sources. The annotation process is not trivial because data values may be noisy, containing abbreviations, variations or misspellings. In particular, overlapping features usually exist in a lexicon-based approach. In this work, we present a probabilistic address parser based on linear-chain conditional random fields (CRFs), which allow more expressive token-level features compared to hidden Markov models (HMMs). In addition, we propose two general enhancement techniques to improve the performance. One is taking the original semi-structure of the data into account. The other is post-processing the output sequences of the parser by combining their conditional probability with a score function based on a learned stochastic regular grammar (SRG) that captures segment-level dependencies. Experiments were conducted by comparing the CRF parser to an HMM parser and a semi-Markov CRF parser on two real-world datasets. The CRF parser outperformed the HMM parser and the semi-Markov CRF on both datasets in terms of classification accuracy. Leveraging the structure of the data and combining the linear-chain CRF with the SRG further improved the parser to achieve an accuracy of 97% on a postal dataset and 96% on a company dataset. |
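A minimal sketch of a linear-chain CRF address parser using the sklearn-crfsuite package is given below; the feature template, labels, and toy training example are illustrative assumptions, and the stochastic-regular-grammar re-ranking stage described in the abstract is not shown.

```python
# Hedged sketch: linear-chain CRF over address tokens with hand-crafted features.
import sklearn_crfsuite

def token_features(tokens, i):
    t = tokens[i]
    return {
        "lower": t.lower(),
        "is_digit": t.isdigit(),
        "prefix2": t[:2].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Toy labeled address (labels are illustrative, not the paper's tag set).
train_tokens = [["12", "Main", "St", "Springfield", "62704"]]
train_labels = [["number", "street", "street_type", "city", "postcode"]]

X = [[token_features(seq, i) for i in range(len(seq))] for seq in train_tokens]
y = train_labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))   # predicted label sequence per token sequence
```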
RFID based attendance system | Most educational institutions' administrators are concerned about students' irregular attendance. Truancy can affect students' overall academic performance. The conventional method of taking attendance by calling names or signing on paper is very time consuming and insecure, and hence inefficient. A Radio Frequency Identification (RFID) based attendance system is one of the solutions to address this problem. This system can be used to take attendance of students in schools, colleges, and universities, as well as of workers in workplaces. Its ability to uniquely identify each person through an RFID-tagged ID card makes the process of taking attendance easier, faster and more secure compared to the conventional method. Students or workers only need to place their ID card on the reader and their attendance is taken immediately. With the real-time clock capability of the system, the recorded attendance is more accurate since the time at which attendance is taken is also logged. The system can be connected to a computer through an RS232 or Universal Serial Bus (USB) port and store the attendance records in a database. An alternative way of viewing the recorded attendance is by using HyperTerminal software. A prototype of the system has been successfully fabricated. |
Hyperbolic Embedding and Routing for Dynamic Graphs | We propose an embedding and routing scheme for arbitrary network connectivity graphs, based on greedy routing and utilizing virtual node coordinates. In dynamic multihop packet-switching communication networks, routing elements can join or leave during network operation or exhibit intermittent failures. We present an algorithm for online greedy graph embedding in the hyperbolic plane that enables incremental embedding of network nodes as they join the network, without disturbing the global embedding. Even a single link or node removal may invalidate the greedy routing success guarantees in network embeddings based on an embedded spanning tree subgraph. As an alternative to frequent reembedding of temporally dynamic network graphs in order to retain the greedy embedding property, we propose a simple but robust generalization of greedy distance routing called Gravity–Pressure (GP) routing. Our routing method always succeeds in finding a route to the destination provided that a path exists, even if a significant fraction of links or nodes is removed subsequent to the embedding. GP routing does not require precomputation or maintenance of special spanning subgraphs and, as demonstrated by our numerical evaluation, is particularly suitable for operation in tandem with our proposed algorithm for online graph embedding. |
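The snippet below sketches greedy forwarding over virtual coordinates in the Poincaré disk model of the hyperbolic plane, assuming the embedding is already available; the paper's online embedding algorithm and the Gravity-Pressure recovery mode are only indicated by a comment, and the toy graph and coordinates are assumptions.

```python
# Hedged sketch: greedy routing on assumed hyperbolic (Poincare-disk) coordinates.
import numpy as np

def poincare_dist(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + num / den)

def greedy_route(adj, coords, src, dst):
    """adj: node -> list of neighbors; coords: node -> 2D point inside the unit disk."""
    path, cur = [src], src
    while cur != dst:
        nxt = min(adj[cur], key=lambda n: poincare_dist(coords[n], coords[dst]))
        if poincare_dist(coords[nxt], coords[dst]) >= poincare_dist(coords[cur], coords[dst]):
            return path, False  # greedy dead end; GP routing would switch to pressure mode here
        path.append(nxt)
        cur = nxt
    return path, True

# Toy chain graph with hand-picked coordinates (illustrative only).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
coords = {0: (0.0, 0.0), 1: (0.3, 0.0), 2: (0.5, 0.2), 3: (0.7, 0.3)}
print(greedy_route(adj, coords, 0, 3))
```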
Fingerprint Based ATM Security by using ARM 7 | The purpose of this project is to increase the security with which customers use ATM machines. Once a user's bank card is lost and the password is stolen, a criminal can withdraw all the cash in a very short time, which brings enormous financial losses to the customer; this project is implemented to rectify that problem. The LPC2148 chip, based on the ARM7 core, is used as the microprocessor, and an improved fingerprint image enhancement algorithm further increases the security with which customers use the ATM machine. |
An Over-110-GHz-Bandwidth 2:1 Analog Multiplexer in 0.25-μm InP DHBT Technology | This paper presents an over-110-GHz-bandwidth 2:1 analog multiplexer (AMUX) for ultra-broadband digital-to-analog (D/A) conversion subsystems. The AMUX was designed and fabricated using newly developed 0.25-μm-emitter-width InP double heterojunction bipolar transistors (DHBTs), which have a peak fT and fmax of 460 and 480 GHz, respectively. The AMUX IC consists of lumped building blocks, including data-input linear buffers, a clock-input limiting buffer, an AMUX core, and an output linear buffer. The measured 3-dB bandwidths for the data and clock paths are both over 110 GHz. In addition, time-domain large-signal sampling operation at up to 180 GS/s was measured. A 224-Gb/s (112-GBaud) four-level pulse-amplitude modulation (PAM4) signal was successfully generated by using this AMUX. To the best of our knowledge, this AMUX IC has the broadest bandwidth and the fastest sampling rate compared with any other previously reported AMUXes. |
On Linear Variational Surface Deformation Methods | This survey reviews the recent advances in linear variational mesh deformation techniques. These methods were developed for editing detailed high-resolution meshes like those produced by scanning real-world objects. The challenge of manipulating such complex surfaces is threefold: The deformation technique has to be sufficiently fast, robust, intuitive, and easy to control to be useful for interactive applications. An intuitive and, thus, predictable deformation tool should provide physically plausible and aesthetically pleasing surface deformations, which, in particular, requires its geometric details to be preserved. The methods that we survey generally formulate surface deformation as a global variational optimization problem that addresses the differential properties of the edited surface. Efficiency and robustness are achieved by linearizing the underlying objective functional such that the global optimization amounts to solving a sparse linear system of equations. We review the different deformation energies and detail preservation techniques that were proposed in recent years, together with the various techniques to rectify the linearization artifacts. Our goal is to provide the reader with a systematic classification and comparative description of the different techniques, revealing the strengths and weaknesses of each approach in common editing scenarios. |
An Ultra-Low-Power Long Range Battery/Passive RFID Tag for UHF and Microwave Bands With a Current Consumption of 700 nA at 1.5 V | We present for the first time, a fully integrated battery powered RFID integrated circuit (IC) for operation at ultrahigh frequency (UHF) and microwave bands. The battery powered RFID IC can also work as a passive RFID tag without a battery or when the battery has died (i.e., voltage has dropped below 1.3 V); this novel dual passive and battery operation allays one of the major drawbacks of currently available active tags, namely that the tag cannot be used once the battery has died. When powered by a battery, the current consumption is 700 nA at 1.5 V (400 nA if internal signals are not brought out on test pads). This ultra-low-power consumption permits the use of a very small capacity battery of 100 mA-hr for lifetimes exceeding ten years; as a result a battery tag that is very close to a passive tag both in form factor and cost is made possible. The chip is built on a 1-μm digital CMOS process with dual poly layers, EEPROM and Schottky diodes. The RF threshold power at 2.45 GHz is -19 dBm which is the lowest ever reported threshold power for RFID tags and has a range exceeding 3.5 m under FCC unlicensed operation at the 2.4-GHz microwave band. The low threshold is achieved with architectural choices and low-power circuit design techniques. At 915 MHz, based on the experimentally measured tag impedance (92-j837) and the threshold spec of the tag (200 mV), the theoretical minimum range is 24 m. The tag initially is in a "low-power" mode to conserve power and when issued the appropriate command, it operates in "full-power" mode. The chip has on-chip voltage regulators, clock and data recovery circuits, EEPROM and a digital state machine that implements the ISO 18000-4 B protocol in the "full-power" mode. We provide detailed explanation of the clock recovery circuits and the implementation of the binary sort algorithm, which includes a pseudorandom number generator. Other than the antenna board and a battery, no external components are used. |
Examining the Technology Acceptance Model Using Physician Acceptance of Telemedicine Technology | The rapid growth of investment in information technology (IT) by organizations worldwide has made user acceptance an increasingly critical technology implementation and management issue. While such acceptance has received fairly extensive attention from previous research, additional efforts are needed to examine or validate existing research results, particularly those involving different technologies, user populations, and/or organizational contexts. In response, this paper reports a research work that examined the applicability of the Technology Acceptance Model (TAM) in explaining physicians' decisions to accept telemedicine technology in the health-care context. The technology, the user group, and the organizational context are all new to IT acceptance/adoption research. The study also addressed a pragmatic technology management need resulting from millions of dollars invested by healthcare organizations in developing and implementing telemedicine programs in recent years. The model's overall fit, explanatory power, and the individual causal links that it postulates were evaluated by examining the acceptance of telemedicine technology among physicians practicing at public tertiary hospitals in Hong Kong. Our results suggested that TAM was able to provide a reasonable depiction of physicians' intention to use telemedicine technology. Perceived usefulness was found to be a significant determinant of attitude and intention, but perceived ease of use was not. The relatively low R-square of the model suggests both the limitations of the parsimonious model and the need for incorporating additional factors or integrating with other IT acceptance models in order to improve its specificity and explanatory utility in a health-care context. Based on the study findings, implications for user technology acceptance research and telemedicine management are discussed. |
Kinetic study of chemoselective acylation of amino-alditol by immobilized lipase in organic solvent: effect of substrate ionization. | The kinetics of the lipase-catalyzed synthesis of oleoyl-N-methylglucamide and 6-O-oleoyl-N-methylglucamine in organic systems were investigated. We have shown that in apolar media, the ionic state of the substrates and the ionic state of the enzyme microenvironment play an important role in immobilized Candida antarctica lipase activity and in the chemoselectivity of the reaction. In order to define the optimal reaction conditions for obtaining the highest initial rate of amide formation, the influence of the acid/N-methylglucamine molar ratio was studied. This ratio determines the protonation states of the substrates and of the ionizable groups of the catalytic site, on which the enzyme activity depends. To confirm our hypothesis, we added to the medium a non-reactive base which is not a substrate of the enzyme. We observed that when the acid/base ratio is higher than 1, the initial rate of ester synthesis increases whereas that of amide synthesis decreases. Conversely, when the acid/base ratio is lower than 1, the initial rate of amide synthesis becomes preponderant. |
Modeling Conditional Volatility of the Indian Stock Markets | Traditional econometric models assume a constant one-period forecast variance. However, many financial time series display volatility clustering, that is, autoregressive conditional heteroskedasticity (ARCH). The aim of this paper is to estimate conditional volatility models in an effort to capture the salient features of stock market volatility in India and evaluate the models in terms of out-of-sample forecast accuracy. The paper also investigates whether there is any leverage effect in Indian companies. The estimation of volatility is made at the macro level on two major market indices, namely, S&P CNX Nifty and BSE Sensex. The fitted model is then evaluated in terms of its forecasting accuracy on these two indices. In addition, 50 individual companies' share prices currently included in S&P CNX Nifty are used to examine the heteroskedastic behaviour of the Indian stock market at the micro level. The vanilla GARCH (1, 1) model has been fitted to both the market indices. We find strong evidence of time-varying volatility, a tendency of periods of high and low volatility to cluster, and high persistence and predictability of volatility. Conditional volatility of the market return series from January 1991 to June 2003 shows clear evidence of volatility shifting over the period, where violent changes in share prices cluster around the boom of 1992. Though the higher price movement started in response to strong economic fundamentals, the real cause for the abrupt movement appears to be the imperfection of the market. The forecasting ability of the fitted GARCH (1, 1) model has been evaluated by estimating parameters initially over the trading days of the in-sample period and then applying the estimated parameters to later data, thus forming out-of-sample forecasts on the two market indices. These out-of-sample volatility forecasts have been compared to true realized volatility. Three alternative methods have been followed to measure three pairs of forecast and realized volatility. In each method, the volatility forecasts are evaluated and compared through popular measures. To examine the information content of forecasts, a regression-based efficiency test has also been performed. It is observed that the GARCH (1, 1) model provides reasonably good forecasts of market volatility. Turning to the 50 individual underlying shares, it is observed that the GARCH (1, 1) model can be fitted for almost all companies. Only for four companies may GARCH models of higher order be more successful. In general, volatility seems to be of a persistent nature. Only eight out of 50 shares show significant leverage effects and really need an asymmetric GARCH model such as EGARCH to capture their volatility clustering, which is left for future research. The implications of the study are as follows: the various GARCH models provide good forecasts of volatility and are useful for portfolio allocation, performance measurement, option valuation, etc. Given the anticipated high growth of the economy and the increasing interest of foreign investors in the country, it is important to understand the pattern of stock market volatility in India, which is time-varying, persistent, and predictable. This may help diversify international portfolios and formulate hedging strategies. |
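For reference, the GARCH(1,1) specification estimated in the paper can be written in standard textbook notation as follows (a generic formulation, not a reproduction of the paper's own equations):

```latex
% GARCH(1,1): return equation and conditional-variance recursion (standard notation).
\begin{align}
  r_t &= \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \quad z_t \sim \mathcal{N}(0,1), \\
  \sigma_t^2 &= \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2,
  \qquad \omega > 0,\ \alpha, \beta \ge 0,\ \alpha + \beta < 1 .
\end{align}
```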
Scrum of scrums solution for large size teams using scrum methodology | Scrum is a structured framework to support complex product development. However, Scrum methodology faces a challenge of managing large teams. To address this challenge, in this paper we propose a solution called Scrum of Scrums. In Scrum of Scrums, we divide the Scrum team into teams of the right size, and then organize them hierarchically into a Scrum of Scrums. The main goals of the proposed solution are to optimize communication between teams in Scrum of Scrums; to make the system work after integration of all parts; to reduce the dependencies between the parts of system; and to prevent the duplication of parts in the system. |
Dual-Resonance NFC Antenna System Based on NFC Chip Antenna | In this letter, to enhance the performance of the near-field communication (NFC) antenna, an antenna system that has dual resonance is proposed. The antenna is based on NFC chip antenna, and the dual resonance comes from the chip antenna itself, the eddy current on printed circuit board, and a nearby loop that has strong coupling with the chip antenna. The performance of the proposed antenna system is confirmed by measurement. |
Gesture recognition for smart home applications using portable radar sensors | In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time and frequency domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest neighbor based classifier can achieve greater than 95% accuracy for multi-class classification using 10-fold cross validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high accuracy smart home and health monitoring purposes. |
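The evaluation protocol described above (a nearest neighbor classifier with 10-fold cross-validation) can be reproduced in outline with scikit-learn; the random feature matrix below merely stands in for the magnitude-difference and Doppler features extracted from real radar returns, so the numbers it prints are meaningless.

```python
# Hedged sketch: 1-NN gesture classification with 10-fold cross-validation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # 200 gesture instances, 12 hand-crafted features (placeholder)
y = rng.integers(0, 4, size=200)    # 4 gesture classes (illustrative)

clf = KNeighborsClassifier(n_neighbors=1)
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```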
Visual Abstraction of Large Scale Geospatial Origin-Destination Movement Data | A variety of human movement datasets are represented in an Origin-Destination (OD) form, such as taxi trips, mobile phone locations, etc. As a commonly used method to visualize OD data, the flow map often fails to reveal patterns of human mobility due to massive intersections and occlusions of lines on a 2D geographical map. A large number of techniques have been proposed to reduce the visual clutter of flow maps, such as filtering, clustering and edge bundling, but the correlations of OD flows are often neglected, which leaves the simplified OD flow map presenting little semantic information. In this paper, a characterization of OD flows is established based on an analogy between OD flows and natural language processing (NLP) terms. Then, an iterative multi-objective sampling scheme is designed to select OD flows in a vectorized representation space. To enhance the readability of sampled OD flows, a set of meaningful visual encodings is designed to present the interactions of OD flows. We design and implement a visual exploration system that supports visual inspection and quantitative evaluation from a variety of perspectives. Case studies based on real-world datasets and interviews with domain experts have demonstrated the effectiveness of our system in reducing the visual clutter and enhancing the correlations of OD flows. |
Red meat enhances the colonic formation of the DNA adduct O6-carboxymethyl guanine: implications for colorectal cancer risk. | Red meat is associated with increased risk of colorectal cancer and increases the endogenous formation of N-nitrosocompounds (NOC). To investigate the genotoxic effects of NOC arising from red meat consumption, human volunteers were fed high (420 g) red meat, vegetarian, and high red meat, high-fiber diets for 15 days in a randomized crossover design while living in a volunteer suite, where food was carefully controlled and all specimens were collected. In 21 volunteers, there was a consistent and significant (P < 0.0001) increase in endogenous formation of NOC with the red meat diet compared with the vegetarian diet as measured by apparent total NOC (ATNC) in feces. In colonic exfoliated cells, the percentage staining positive for the NOC-specific DNA adduct, O(6)-carboxymethyl guanine (O(6)CMG) was significantly (P < 0.001) higher on the high red meat diet. In 13 volunteers, levels were intermediate on the high-fiber, high red meat diet. Fecal ATNC were positively correlated with the percentage of cells staining positive for O(6)CMG (r(2) = 0.56, P = 0.011). The presence of O(6)CMG was also shown in intact small intestine from rats treated with the N-nitrosopeptide N-acetyl-N'-prolyl-N'-nitrosoglycine and in HT-29 cells treated with diazoacetate. This study has shown that fecal NOC arising from red meat include direct acting diazopeptides or N-nitrosopeptides able to form alkylating DNA adducts in the colon. As these O(6)CMG adducts are not repaired, and if other related adducts are formed and not repaired, this may explain the association of red meat with colorectal cancer. |
Boosting for transfer learning with multiple sources | Transfer learning allows leveraging the knowledge of source domains, available a priori, to help train a classifier for a target domain, where the available data is scarce. The effectiveness of the transfer is affected by the relationship between source and target. Rather than improving the learning, brute-force leveraging of a source poorly related to the target may decrease the classifier's performance. One strategy to reduce this negative transfer is to import knowledge from multiple sources to increase the chance of finding one source closely related to the target. This work extends the boosting framework for transferring knowledge from multiple sources. Two new algorithms, MultiSource-TrAdaBoost and TaskTrAdaBoost, are introduced, analyzed, and applied to object category recognition and specific object detection. The experiments demonstrate their improved performance by greatly reducing the negative transfer as the number of sources increases. TaskTrAdaBoost is a fast algorithm enabling rapid retraining over new targets. |
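For orientation, the sketch below shows a single-source TrAdaBoost-style weight update, the building block that MultiSource-TrAdaBoost and TaskTrAdaBoost generalize; it is a hedged simplification (binary 0/1 labels, decision stumps, one source) rather than the paper's algorithms.

```python
# Hedged sketch: TrAdaBoost-style instance reweighting with one source domain.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(Xs, ys, Xt, yt, n_rounds=10):
    """Xs, ys: source data/labels (0/1); Xt, yt: target data/labels (0/1)."""
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    n_s = len(ys)
    w = np.ones(len(y))
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
        err = np.abs(h.predict(X) - y)                       # 0 if correct, 1 if wrong
        eps = np.sum(p[n_s:] * err[n_s:]) / p[n_s:].sum()    # weighted error on the target only
        eps = min(max(eps, 1e-10), 0.49)
        beta_t = eps / (1.0 - eps)
        w[:n_s] *= beta_src ** err[:n_s]    # down-weight misleading source instances
        w[n_s:] *= beta_t ** (-err[n_s:])   # up-weight hard target instances
        learners.append(h)
        betas.append(beta_t)
    return learners, betas
```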
Beneficial effects of hypnosis and adverse effects of empathic attention during percutaneous tumor treatment: when being nice does not suffice. | PURPOSE
To determine how hypnosis and empathic attention during percutaneous tumor treatments affect pain, anxiety, drug use, and adverse events.
MATERIALS AND METHODS
For their tumor embolization or radiofrequency ablation, 201 patients were randomized to receive standard care, empathic attention with defined behaviors displayed by an additional provider, or self-hypnotic relaxation including the defined empathic attention behaviors. All had local anesthesia and access to intravenous medication. Main outcome measures were pain and anxiety assessed every 15 minutes by patient self-report, medication use (with 50 μg fentanyl or 1 mg midazolam counted as one unit), and adverse events, defined as occurrences requiring extra medical attention, including systolic blood pressure fluctuations (≥50 mm Hg change to >180 mm Hg or <105 mm Hg), vasovagal episodes, cardiac events, and respiratory impairment.
RESULTS
Patients treated with hypnosis experienced significantly less pain and anxiety than those in the standard care and empathy groups at several time intervals and received significantly fewer median drug units (mean, 2.0; interquartile range [IQR], 1-4) than patients in the standard (mean, 3.0; IQR, 1.5-5.0; P = .0147) and empathy groups (mean, 3.50; IQR, 2.0-5.9; P = .0026). Thirty-one of 65 patients (48%) in the empathy group had adverse events, which was significantly more than in the hypnosis group (eight of 66; 12%; P = .0001) and standard care group (18 of 70; 26%; P = .0118).
CONCLUSIONS
Procedural hypnosis including empathic attention reduces pain, anxiety, and medication use. Conversely, empathic approaches without hypnosis that provide an external focus of attention and do not enhance patients' self-coping can result in more adverse events. These findings should have major implications in the education of procedural personnel. |
Catch Me If You Can: A Cloud-Enabled DDoS Defense | We introduce a cloud-enabled defense mechanism for Internet services against network and computational Distributed Denial-of-Service (DDoS) attacks. Our approach performs selective server replication and intelligent client re-assignment, turning victim servers into moving targets for attack isolation. We introduce a novel system architecture that leverages a "shuffling" mechanism to compute the optimal re-assignment strategy for clients on attacked servers, effectively separating benign clients from even sophisticated adversaries that persistently follow the moving targets. We introduce a family of algorithms to optimize the runtime client-to-server re-assignment plans and minimize the number of shuffles to achieve attack mitigation. The proposed shuffling-based moving target mechanism enables effective attack containment using fewer resources than attack dilution strategies using pure server expansion. Our simulations and proof-of-concept prototype using Amazon EC2 [1] demonstrate that we can successfully mitigate large-scale DDoS attacks in a small number of shuffles, each of which incurs a few seconds of user-perceived latency. |
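The following toy simulation illustrates only the basic shuffling mechanic described above, re-assigning the clients of attacked replicas at random; the paper's optimized assignment algorithms and attacker-isolation analysis are not reproduced here, and all quantities are invented for illustration.

```python
# Hedged sketch: one random shuffle round for clients on attacked replicas.
import random

def shuffle_round(assignment, attackers, n_replicas):
    """assignment: client -> replica id. Replicas hosting an attacker count as attacked."""
    attacked = {assignment[c] for c in attackers}
    for client, replica in assignment.items():
        if replica in attacked:
            assignment[client] = random.randrange(n_replicas)  # re-assign affected clients
    return assignment

random.seed(1)
clients = list(range(100))
attackers = set(random.sample(clients, 5))                 # persistent adversaries (unknown to the defender)
assignment = {c: random.randrange(10) for c in clients}    # 10 replica servers

for r in range(5):
    assignment = shuffle_round(assignment, attackers, n_replicas=10)
    attacked = {assignment[a] for a in attackers}
    saved = sum(1 for c in clients if c not in attackers and assignment[c] not in attacked)
    print(f"round {r}: benign clients currently on clean replicas = {saved}")
```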
Causally motivated attribution for online advertising | In many online advertising campaigns, multiple vendors, publishers or search engines (herein called channels) are contracted to serve advertisements to internet users on behalf of a client seeking specific types of conversion. In such campaigns, individual users are often served advertisements by more than one channel. The process of assigning conversion credit to the various channels is called "attribution," and is a subject of intense interest in the industry. This paper presents a causally motivated methodology for conversion attribution in online advertising campaigns. We discuss the need for the standardization of attribution measurement and offer three guiding principles to contribute to this standardization. Stemming from these principles, we position attribution as a causal estimation problem and then propose two approximation methods as alternatives for when the full causal estimation cannot be done. These approximate methods derive from our causal approach and incorporate prior attribution work in cooperative game theory. We argue that in cases where causal assumptions are violated, these approximate methods can be interpreted as variable importance measures. Finally, we show examples of attribution measurement on several online advertising campaign data sets. |
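One concrete instance of the cooperative-game-theoretic attribution the paper builds on is Shapley-value credit assignment; the sketch below computes exact Shapley values for a toy two-channel campaign and illustrates that baseline only, not the paper's causal estimators or approximations.

```python
# Hedged sketch: Shapley-value attribution over advertising channels.
from itertools import combinations
from math import factorial

def shapley(channels, value):
    """value: function from a frozenset of channels to expected conversion value."""
    n = len(channels)
    credit = {}
    for ch in channels:
        others = [c for c in channels if c != ch]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(S | {ch}) - value(S))  # marginal contribution of ch
        credit[ch] = total
    return credit

# Toy coalition values (e.g., conversion rates per set of exposed channels).
v = {frozenset(): 0.00,
     frozenset({"search"}): 0.05,
     frozenset({"display"}): 0.02,
     frozenset({"search", "display"}): 0.08}
print(shapley(["search", "display"], lambda S: v[frozenset(S)]))
```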
Scaper: A library for soundscape synthesis and augmentation | Sound event detection (SED) in environmental recordings is a key topic of research in machine listening, with applications in noise monitoring for smart cities, self-driving cars, surveillance, bioacoustic monitoring, and indexing of large multimedia collections. Developing new solutions for SED often relies on the availability of strongly labeled audio recordings, where the annotation includes the onset, offset and source of every event. Generating such precise annotations manually is very time consuming, and as a result existing datasets for SED with strong labels are scarce and limited in size. To address this issue, we present Scaper, an open-source library for soundscape synthesis and augmentation. Given a collection of isolated sound events, Scaper acts as a high-level sequencer that can generate multiple soundscapes from a single, probabilistically defined, "specification". To increase the variability of the output, Scaper supports the application of audio transformations such as pitch shifting and time stretching individually to every event. To illustrate the potential of the library, we generate a dataset of 10,000 soundscapes and use it to compare the performance of two state-of-the-art algorithms, including a breakdown by soundscape characteristics. We also describe how Scaper was used to generate audio stimuli for an audio labeling crowdsourcing experiment, and conclude with a discussion of Scaper's limitations and potential applications. |
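Typical usage of Scaper follows the pattern sketched below, written from memory of the library's documented API (a Scaper object with add_background, add_event, and generate, where parameters are given as distribution tuples); the exact argument names, the example paths 'fg/' and 'bg/', and the labels are assumptions and should be checked against the current documentation.

```python
# Hedged sketch of Scaper usage; argument details may differ from the released API.
import scaper

sc = scaper.Scaper(duration=10.0, fg_path="fg/", bg_path="bg/")
sc.ref_db = -20  # reference loudness against which event SNRs are set

sc.add_background(label=("const", "park"),
                  source_file=("choose", []),
                  source_time=("const", 0))

# Each distribution tuple says how the parameter is sampled, so one
# probabilistic "specification" can yield many distinct soundscapes.
sc.add_event(label=("choose", []),
             source_file=("choose", []),
             source_time=("const", 0),
             event_time=("uniform", 0, 8),
             event_duration=("truncnorm", 2.0, 0.5, 0.5, 4.0),
             snr=("uniform", 6, 30),
             pitch_shift=("uniform", -2, 2),
             time_stretch=("uniform", 0.8, 1.2))

sc.generate("soundscape.wav", "soundscape.jams")  # audio plus strong (onset/offset) annotations
```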
Enterprise Knowledge Management and Emerging Technologies | Improving management of information and knowledge in organizations has long been a major objective, but efforts to address it often foundered. Knowledge typically resides in structured documents, informal discussions that may or may not persist online, and in tacit form. Terminology differences and dispersed contextual information hinder efforts to use formal representations. Features of dynamic emerging technologies — unstructured tagging, web-logs, and search — show strong promise in overcoming past obstacles. They exploit digital representations of less formal language and could greatly increase the value of such representations. |
System-level performance analysis of embedded system using behavioral C/C++ model | Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools. |
ISSARS: An integrated software environment for structure-specific earthquake ground motion selection | Current practice enables the design and assessment of structures in earthquake prone areas by performing time history analysis with the use of appropriately selected strong ground motions. This study presents a Matlab-based software environment, which is integrated with a finite element analysis package, and aims to improve the efficiency of earthquake ground motion selection by accounting for the variability of critical structural response quantities. This additional selection criterion, which is tailored to the specific structure studied, leads to more reliable estimates of the mean structural response quantities used in design, while fulfilling the criteria already prescribed by the European and US seismic codes and guidelines. To demonstrate the applicability of the software environment developed, an existing irregular, multi-storey, reinforced concrete building is studied for a wide range of seismic scenarios. The results highlight the applicability of the software developed and the benefits of applying a structure-specific criterion in the process of selecting suites of earthquake motions for seismic design and assessment. |
A topology for three-stage Solid State Transformer | Solid State Transformer (SST) is a new type of power transformer based on power electronic converters and high frequency transformers. The SST realizes voltage transformation, galvanic isolation, and power quality improvements in a single device. In the literature, a number of topologies have been introduced for the SST. In this work, employing a modular multilevel converter, a new SST topology is introduced which provides not only high-voltage AC (HVAC), low-voltage AC (LVAC) and low-voltage DC (LVDC) ports, but also high-voltage DC (HVDC) port. Besides, the proposed topology is easily scalable to higher voltage levels. |
The Use of Mobile Money Application and Smallholder Farmer Market Participation: Evidence from Côte d'Ivoire and Tanzania | With growing food security concerns, market participation of smallholder farmers has regained the attention of policy makers and the agricultural development community. The widespread adoption of information and communication technologies in Sub-Saharan Africa over the last decade has paved the way for the introduction of digital solutions such as mobile money that have the potential to enhance access to input and output markets. Using a conceptual framework based on Transaction Cost Economics theory, we propose the hypothesis that the ability to make quick and low-cost money transfers through a mobile money application can lower the transaction costs associated with hold-up risks of participating in distant markets. This hypothesis is tested using data from the CGAP survey in Côte d'Ivoire and Tanzania. The methods include a Heckman probit model to account for sample selection bias. The findings indicate that smallholder farmers who use mobile money for receiving payments from buyers are more likely to sell their product in city and regional markets versus farm-gate options such as middlemen and village markets. Key words: market participation, mobile money, transaction costs, Sub-Saharan Africa |
Perspectives on technology mediated learning in secondary school mathematics classrooms | The introduction of technology resources into mathematics classrooms promises to create opportunities for enhancing students’ learning through active engagement with mathematical ideas; however, little consideration has been given to the pedagogical implications of technology as a mediator of mathematics learning. This paper draws on data from a three year longitudinal study of senior secondary school classrooms to examine pedagogical issues in using technology in mathematics teaching – where “technology” includes not only computers and graphics calculators but also projection devices that allow screen output to be viewed by the whole class. We theorise and illustrate four roles for technology in relation to such teaching and learning interactions – master, servant, partner, and extension of self. Our research shows how technology can facilitate collaborative inquiry, during both small group interactions and whole class discussions where students use the computer or calculator and screen projection to share and test their mathematical understanding. |
Exploiting Software: How to Break Code | To be useful, software must respond to events in a predictable manner. The results can then be used as a window to the interior workings of the code, revealing some of the mechanisms of operations, which may be used to find ways to make it fail in a dangerous way. To some, the window is as clear as a six inch thick pane of lead, but to those with a high level of understanding it can be clear, or at the very least serve as a keyhole. This is an allusion to the old detective stories where someone looks through the keyhole to see what is behind the door. For these reasons, no software that interacts with humans can ever be considered completely secure, and human error in the development of the software can leave the equivalent of keyholes throughout the code. |
A Fast and Accurate Dependency Parser using Neural Networks | Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses only a small number of dense features, it can work very fast, while achieving an improvement of about 2% in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank. |
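A minimal sketch of the core idea: a small feed-forward network over dense embeddings of a handful of parser-state features, scoring the transition actions of a greedy transition-based parser. The feature set, dimensions, and other details below are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, N_FEATURES, HIDDEN, N_ACTIONS = 10_000, 50, 18, 200, 3

class TransitionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)       # dense word features
        self.hidden = nn.Linear(N_FEATURES * EMB_DIM, HIDDEN)
        self.out = nn.Linear(HIDDEN, N_ACTIONS)              # SHIFT / LEFT-ARC / RIGHT-ARC

    def forward(self, feature_ids):                          # (batch, N_FEATURES) token ids
        x = self.embed(feature_ids).flatten(1)               # concatenate embeddings
        h = torch.pow(self.hidden(x), 3)                     # cube activation (as reported in the paper)
        return self.out(h)                                   # scores over transition actions

model = TransitionClassifier()
fake_batch = torch.randint(0, VOCAB_SIZE, (4, N_FEATURES))   # ids of stack/buffer words
print(model(fake_batch).shape)                               # torch.Size([4, 3])
```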
Enhanced differential super class-AB OTA | A fully differential super class-AB operational transconductance amplifier (OTA) is presented. To provide additional dynamic current boosting and increased gain-bandwidth product (GBW), not only adaptive biasing techniques but also local common-mode feedback (LCMFB) have been applied at the differential input stage. Additionally, quasi-floating-gate (QFG) transistors are employed, further enhancing the performance of the amplifier. The OTA has been fabricated in a standard 0.5-μm CMOS process. Simulation results yield a slew-rate improvement by a factor of 86 and a GBW enhancement by a factor of 16 compared with the class-A OTA driving the same 70-pF load. Supply voltages are ±1 V and the quiescent current is 10 μA. The overhead in terms of quiescent power, silicon area, and noise level is small. |
Systemic Immune Suppression Predicts Diminished Merkel Cell Carcinoma – Specific Survival Independent of Stage | Merkel cell carcinoma (MCC) is an aggressive cutaneous malignancy linked to a contributory virus (Merkel cell polyomavirus). Multiple epidemiologic studies have established an increased incidence of MCC among persons with systemic immune suppression. Several forms of immune suppression are associated with increased MCC incidence, including hematologic malignancies, HIV/AIDS, and immunosuppressive medications for autoimmune disease or transplant. Indeed, immune-suppressed individuals represent ~10% of MCC patients, a significant overrepresentation relative to the general population. We hypothesized that immune-suppressed patients may have a poorer MCC-specific prognosis and examined a cohort of 471 patients with a combined follow-up of 1,427 years (median 2.1 years). Immune-suppressed patients (n = 41) demonstrated reduced MCC-specific survival (40% at 3 years) compared with patients with no known systemic immune suppression (n = 430; 74% MCC-specific survival at 3 years). By competing risk regression analysis, immune suppression was a stage-independent predictor of worsened MCC-specific survival (hazard ratio 3.8, P < 0.01). Thus, immune-suppressed individuals have both an increased chance of developing MCC and poorer MCC-specific survival. It may be appropriate to follow these higher-risk individuals more closely, and, when clinically feasible, there may be a benefit of diminishing iatrogenic systemic immune suppression. |
Micturitional dryness and attitude of parents towards enuresis in children attending outpatient unit of a tertiary hospital in Abeokuta, Southwest Nigeria. | BACKGROUND
There is significant variability in the age at which children achieve dryness.
OBJECTIVES
To determine the age at achievement of micturitional dryness and the attitude of parents towards enuresis among urban Nigerian children.
METHOD
A total of 346 questionnaires were administered to parents of children aged 12-180 months who came for routine paediatric care at the outpatient unit of Federal Medical Centre, Abeokuta.
RESULTS
At age 36 months, 86 (51.8%) and 34 (20.5%) of 166 children had achieved daytime and night-time dryness, respectively. Achievement of dryness was significantly related to low maternal education (p = 0.022) and low social class (p = 0.009). Twenty-four (26.7%) children had nocturnal enuresis; four (4.4%) of these children also had diurnal enuresis. All the parents/guardians were aware of enuresis, but only 9.8% correctly identified it as a health problem. Even though none of the children with enuresis had ever visited a health facility for the problem, a statistically significant proportion of the parents desired to discuss it with health practitioners (p = 0.015).
CONCLUSIONS
The proportion of children achieving dryness by age 36 months is very small compared with children from developed parts of the world. There is also a high prevalence of enuresis that goes unreported. Therefore, health workers in the tropics should routinely enquire about enuresis in their daily paediatric care, particularly for children from polygamous homes and of high social class. |
Role of Innovation in the Development of ICT Software Enterprises in Palestine | The ICT
sector in Palestine is growing in quantity and scope of works endorsed by the
governmental institutions. ICT enterprises working on software development
related works have started two decades ago. Some of these enterprises have
scored success on national and regional levels. The ICT software development
enterprises are so much important in the national innovation system as they are
not only delivering goods but providing diversity of services and tools used in the
process of knowledge generation and implementation and the
knowledge-based-economy. In an effort of assessing the innovation levels in the
Palestinian software development enterprises, the community innovation survey
questionnaire has been translated, tailored and used on a representative
sample. Analysis of the innovation survey questionnaire brings promising results that need to be
carefully studied. It is
found that most enterprises are innovators and having high potentials. However, potentials are found to
be fragmented as national directive policies are not yet developed. For the
development of the sector, public-private-academic partnership is needed. |
Ripple: Overview and Outlook | Ripple is a payment system and a digital currency which evolved completely independently of Bitcoin. Although Ripple holds the second highest market cap after Bitcoin, there are surprisingly no studies which analyze the provisions of Ripple. In this paper, we study the current deployment of the Ripple payment system. For that purpose, we overview the Ripple protocol and outline its security and privacy provisions in relation to the Bitcoin system. We also discuss the consensus protocol of Ripple. Contrary to the statement of the Ripple designers, we show that the current choice of parameters does not prevent the occurrence of forks in the system. To remedy this problem, we give a necessary and sufficient condition to prevent any fork in the system. Finally, we analyze the current usage patterns and trade dynamics in Ripple by extracting information from the Ripple global ledger. As far as we are aware, this is the first contribution which sheds light on the current deployment of the Ripple system. |
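A minimal sketch of checking a pairwise UNL-overlap condition of the kind discussed above. The concrete threshold used here (overlap greater than 40% of the larger of the two UNLs) is an illustrative assumption; the exact necessary and sufficient condition is the one derived in the paper.

```python
def fork_possible(unls, threshold=0.4):
    """Return True if some pair of validators' UNLs overlaps too little,
    i.e. the assumed condition is violated and a fork cannot be ruled out."""
    nodes = list(unls)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            overlap = len(unls[a] & unls[b])
            if overlap <= threshold * max(len(unls[a]), len(unls[b])):
                return True
    return False

# Hypothetical UNLs (sets of trusted validators) for three nodes.
unls = {
    "v1": {"a", "b", "c", "d", "e"},
    "v2": {"c", "d", "e", "f", "g"},
    "v3": {"e", "f", "g", "h", "i"},
}
print(fork_possible(unls))   # True: v1 and v3 share only one validator
```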
Association of out-of-hospital advanced airway management with outcomes after traumatic brain injury and hemorrhagic shock in the ROC hypertonic saline trial. | OBJECTIVE
Prior studies suggest adverse associations between out-of-hospital advanced airway management (AAM) and patient outcomes after major trauma. This secondary analysis of data from the Resuscitation Outcomes Consortium Hypertonic Saline Trial evaluated associations between out-of-hospital AAM and outcomes in patients suffering isolated severe traumatic brain injury (TBI) or haemorrhagic shock.
METHODS
This multicentre study included adults with severe TBI (GCS ≤8) or haemorrhagic shock (SBP ≤70 mm Hg, or SBP 71-90 mm Hg with heart rate ≥108 bpm). We compared patients receiving out-of-hospital AAM with those receiving emergency department AAM. We evaluated the associations between airway strategy and patient outcomes (28-day mortality, and poor 6-month neurologic or functional outcome), adjusting for confounders. Analysis was stratified by (1) patients with isolated severe TBI and (2) patients with haemorrhagic shock with or without severe TBI.
RESULTS
Of 2135 patients, we studied 1116 with TBI and 528 with shock, after excluding 491 who died in the field, did not receive AAM, or had missing data. In the shock cohort, out-of-hospital AAM was associated with increased 28-day mortality (adjusted OR 5.14; 95% CI 2.42 to 10.90). In TBI, out-of-hospital AAM showed a tendency towards increased 28-day mortality (adjusted OR 1.57; 95% CI 0.93 to 2.64) and 6-month poor functional outcome (1.63; 1.00 to 2.68), but these differences were not statistically significant. Out-of-hospital AAM was associated with poorer 6-month TBI neurologic outcome (1.80; 1.09 to 2.96).
CONCLUSIONS
Out-of-hospital AAM was associated with increased mortality after haemorrhagic shock. The adverse association between out-of-hospital AAM and injury outcome is most pronounced in patients with haemorrhagic shock. |
Psychological Predictors of Problem Mobile Phone Use | Mobile phone use is banned or illegal under certain circumstances and in some jurisdictions. Nevertheless, some people still use their mobile phones despite recognized safety concerns, legislation, and informal bans. Drawing potential predictors from the addiction literature, this study sought to predict usage and, specifically, problematic mobile phone use from extraversion, self-esteem, neuroticism, gender, and age. To measure problem use, the Mobile Phone Problem Use Scale was devised and validated as a reliable self-report instrument, against the Addiction Potential Scale and overall mobile phone usage levels. Problem use was a function of age, extraversion, and low self-esteem, but not neuroticism. As extraverts are more likely to take risks, and young drivers feature prominently in automobile accidents, this study supports community concerns about mobile phone use, and identifies groups that should be targeted in any intervention campaigns. |
An efficient approach to estimate fractal dimension of textural images | Fractal dimension is an interesting parameter for characterizing roughness in an image. It can be used in texture segmentation, estimation of three-dimensional (3D) shape, and extraction of other information. A new method is proposed to estimate the fractal dimension of a two-dimensional (2D) image, which can readily be extended to a 3D image as well. The method has been compared with other existing methods to show that it is both efficient and accurate. Keywords: fractal dimension, texture analysis, image roughness measure, image segmentation, computer vision |
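A generic differential box-counting estimator is one common way to compute the fractal dimension of a grayscale image from the slope of log N(s) versus log(1/s); the sketch below illustrates that general idea, not necessarily the paper's exact algorithm. The grid sizes and synthetic test image are illustrative assumptions.

```python
import numpy as np

def fractal_dimension(gray, sizes=(2, 4, 8, 16, 32)):
    """Differential box-counting estimate of the fractal dimension of a
    square grayscale image, via a log-log least-squares fit."""
    M = gray.shape[0]                       # assume a square M x M image
    G = int(gray.max()) + 1                 # number of gray levels
    log_n, log_inv_s = [], []
    for s in sizes:
        h = max(1, int(G * s / M))          # box height in gray-level units
        n_r = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = gray[i:i + s, j:j + s]
                # boxes of height h needed to cover the block's intensity range
                n_r += int(block.max()) // h - int(block.min()) // h + 1
        log_n.append(np.log(n_r))
        log_inv_s.append(np.log(M / s))
    slope, _ = np.polyfit(log_inv_s, log_n, 1)  # FD is the slope of the fit
    return slope

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128))   # synthetic rough texture
print(round(fractal_dimension(image), 2))
```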
Mammalian Rho GTPases: new insights into their functions from in vivo studies | Rho GTPases are key regulators of cytoskeletal dynamics and affect many cellular processes, including cell polarity, migration, vesicle trafficking and cytokinesis. These proteins are conserved from plants and yeast to mammals, and function by interacting with and stimulating various downstream targets, including actin nucleators, protein kinases and phospholipases. The roles of Rho GTPases have been extensively studied in different mammalian cell types using mainly dominant negative and constitutively active mutants. The recent availability of knockout mice for several members of the Rho family reveals new information about their roles in signalling to the cytoskeleton and in development. |
Neuro-ophthalmology of pupillary function – practical guidelines | An overview of how to examine pupillary function and handle pupillary abnormalities is presented. The following issues are discussed: the swinging flashlight test, the clinical relevance of a relative afferent pupillary defect, anisocoria with a normal light reaction, diagnosis and evaluation of Horner's syndrome, differential diagnosis of an impaired light reaction, tonic pupil, third nerve palsy, supranuclear pupillary disorders, iris problems, systemic disease, measurement of sleepiness, and pupillography. |
A Computational Exploration of Problem-Solving Strategies and Gaze Behaviors on the Block Design Task | The block design task, a standardized test of nonverbal reasoning, is often used to characterize atypical patterns of cognition in individuals with developmental or neurological conditions. Many studies suggest that, in addition to looking at quantitative differences in block design speed or accuracy, observing qualitative differences in individuals’ problem-solving strategies can provide valuable information about a person’s cognition. However, it can be difficult to tie theories at the level of problem-solving strategy to predictions at the level of externally observable behaviors such as gaze shifts and patterns of errors. We present a computational architecture that is used to compare different models of problem-solving on the block design task and to generate detailed behavioral predictions for each different strategy. We describe the results of three different modeling experiments and discuss how these results provide greater insight into the analysis of gaze behavior and error patterns on the block design task. |
An Anomaly Detection Method for Medicare Fraud Detection | With the improvement of the medical insurance system, the coverage of Medicare has increased substantially. However, while the expenditure of this system is continuously rising, Medicare fraud is causing it huge losses. Traditional Medicare fraud detection depends greatly on the experience of domain experts, which is not accurate enough and costs much time and labor. In this study, we propose a Medicare fraud detection framework based on anomaly detection techniques. Our method consists of two parts. The first part is a spatial density-based algorithm, called improved local outlier factor (imLOF), which is more applicable to medical insurance data than the simple local outlier factor. The second part is robust regression to depict the linear dependence between variables. Experiments are conducted on real-world data to measure the efficiency of our method. |
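A minimal sketch of the two-part structure described above, built from off-the-shelf components: scikit-learn's standard Local Outlier Factor (not the paper's improved imLOF variant) for the density-based part, and a robust Huber regression for the linear dependence between claim variables. The synthetic claim data is illustrative.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(42)
n_days = rng.integers(1, 30, size=500).astype(float)           # hospitalization days
cost = 800 * n_days + rng.normal(0, 500, size=500)              # claimed cost
cost[:5] *= 6                                                   # a few inflated claims

X = np.column_stack([n_days, cost])

# Part 1: density-based outliers in (days, cost) space; -1 marks outliers.
lof_labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)

# Part 2: robust regression of cost on days; large residuals flag suspect claims.
huber = HuberRegressor().fit(n_days.reshape(-1, 1), cost)
residuals = np.abs(cost - huber.predict(n_days.reshape(-1, 1)))
suspect = (lof_labels == -1) | (residuals > 3 * residuals.std())
print(f"{suspect.sum()} suspicious claims out of {len(cost)}")
```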