Dataset schema (field: type, observed range across rows):
- query_id: string, 32 characters
- query: string, 6 to 3.9k characters
- positive_passages: list of passages, 1 to 21 items
- negative_passages: list of passages, 10 to 100 items
- subset: string, one of 7 values
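Each data row below gives these five fields in order: the query_id, the query text, the positive_passages array, the negative_passages array, and the subset label; each passage is an object with "docid", "text", and "title" keys. The following is a minimal sketch of how rows with this schema could be loaded and inspected with the Hugging Face datasets library; the file name reranking_data.jsonl is a placeholder assumption, since this preview does not state where the data is published.

```python
from datasets import load_dataset  # Hugging Face `datasets` library

# Placeholder source file: swap in the real dataset path or hub identifier.
ds = load_dataset("json", data_files="reranking_data.jsonl", split="train")

row = ds[0]
print(row["query_id"])                 # 32-character identifier
print(row["query"])                    # query text, often a paper title
print(row["subset"])                   # one of 7 subset labels, e.g. "scidocsrr"
print(len(row["positive_passages"]))   # 1 to 21 relevant passages
print(len(row["negative_passages"]))   # 10 to 100 non-relevant passages

# Each passage object carries a docid, the passage text, and an (often empty) title.
first_pos = row["positive_passages"][0]
print(first_pos["docid"], first_pos["title"], first_pos["text"][:120])
```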
885f22f6056d2d4976c17d9b882a4856
Safe Exploration in Finite Markov Decision Processes with Gaussian Processes
[ { "docid": "24bb26da0ce658ff075fc89b73cad5af", "text": "Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather lowdimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction.", "title": "" } ]
[ { "docid": "ac0b5ac968fafdb06625ecefef7ae002", "text": "With Centers for Disease Control and Prevention prevalence estimates for children with autism spectrum disorder (ASD) at 9.1 per 1,000 (1 in 110), identification and effective treatment of ASD is often characterized as a public health emergency. Emerging technology, especially robotic technology, has been shown to be appealing to these children and such interest can be harnessed to address the limitations while providing intervention services to young children with ASD. Generally the spectrum nature of autism calls for intensive, individualized intervention. However, existing robot-mediated systems tend to have limited adaptive capability that limits individualization. Our current work seeks to bridge this gap by developing a novel adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision being augmented by several wall-mounted cameras for real-time head tracking using a distributed architecture. Based on the cues from the child's head movement, the robot intelligently adapts itself in an individualized manner to promote joint attention. The developed system is validated with two typically developing children. The validation results of the head tracker and the closed-loop nature of interaction are presented.", "title": "" }, { "docid": "a24b4546eb2da7ce6ce70f45cd16e07d", "text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. 
The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.", "title": "" }, { "docid": "667a457dcb1f379abd4e355e429dc40d", "text": "BACKGROUND\nViolent death is a serious problem in the United States. Previous research showing US rates of violent death compared with other high-income countries used data that are more than a decade old.\n\n\nMETHODS\nWe examined 2010 mortality data obtained from the World Health Organization for populous, high-income countries (n = 23). Death rates per 100,000 population were calculated for each country and for the aggregation of all non-US countries overall and by age and sex. Tests of significance were performed using Poisson and negative binomial regressions.\n\n\nRESULTS\nUS homicide rates were 7.0 times higher than in other high-income countries, driven by a gun homicide rate that was 25.2 times higher. For 15- to 24-year-olds, the gun homicide rate in the United States was 49.0 times higher. Firearm-related suicide rates were 8.0 times higher in the United States, but the overall suicide rates were average. Unintentional firearm deaths were 6.2 times higher in the United States. The overall firearm death rate in the United States from all causes was 10.0 times higher. Ninety percent of women, 91% of children aged 0 to 14 years, 92% of youth aged 15 to 24 years, and 82% of all people killed by firearms were from the United States.\n\n\nCONCLUSIONS\nThe United States has an enormous firearm problem compared with other high-income countries, with higher rates of homicide and firearm-related suicide. Compared with 2003 estimates, the US firearm death rate remains unchanged while firearm death rates in other countries decreased. Thus, the already high relative rates of firearm homicide, firearm suicide, and unintentional firearm death in the United States compared with other high-income countries increased between 2003 and 2010.", "title": "" }, { "docid": "55d9baff56af24e1b5651a70c1c16d4d", "text": "Robotic orthoses, or exoskeletons, have the potential to provide effective rehabilitation while overcoming the availability and cost constraints of therapists. However, current orthosis actuation systems use components designed for industrial applications, not specifically for interacting with humans. This can limit orthoses' capabilities and, if their users' needs are not adequately considered, contribute to their abandonment. Here, a user centered review is presented on: requirements for orthosis actuators; the electric, hydraulic, and pneumatic actuators currently used in orthoses and their advantages and limitations; the potential of new actuator technologies, including smart materials, to actuate orthoses; and the future of orthosis actuator research.", "title": "" }, { "docid": "476d6ba19c68e10cad80874b8f0e99a2", "text": "Synthetic aperture radar (SAR) is a general method for generating high-resolution radar maps from low-resolution aperture data which is based on using the relative motion between the radar antenna and the imaged scene. Originally conceived in the early 1950s [1], it is extensively used to image objects on the surface of the Earth and the planets [2]. A synthetic aperture is formed using electromagnetic signals from a physical aperture located at different space-time positions. The synthetic aperture may therefore observe the scene over a large angular sector by moving the physical aperture. 
Hence, the technique can give a significant improvement in resolution, in principle limited only by the stability of the wave field and other restrictions imposed on the movement of the physical aperture. A physical aperture, on the other hand, provides angular resolution inversely proportional to aperture size such that the spatial resolution degrades with increasing distance to the scene. SAR images of the ground are often generated from pulse echo data acquired by an antenna moving along a nominally linear track. It is well known that the spatial resolution can be made independent of distance to the ground since the antenna can be moved along correspondingly longer tracks [2]. It is therefore possible to produce radar maps with meter- or decimeter-resolution from aircraft or spacecraft at very large distances. The resolution in these systems is limited by antenna illumination and system bandwidth but also by other factors, e.g. accuracy of antenna positioning, propagation perturbations, transmitter power, receiver sensitivity, clock stability, and dynamic range. The ultimate limit of SAR spatial resolution is proportional to the wavelength. The finest resolution is determined by the classical uncertainty principle applied to a band-limited wave packet. The area of a resolution cell can be shown to be related to radar system bandwidth B (= fmax − fmin, where fmax and fmin are the maximum and minimum electromagnetic frequency, respectively) and aperture angle θ2 − θ1 (the angle over which the antenna is moved and radiating as seen from the imaged ground) according to [3] ∆ASAR = λc/(2(θ2 − θ1)) · c", "title": "" }, { "docid": "882f463d187854967709c95ecd1d2fc1", "text": "In this paper, we propose a zoom-out-and-in network for generating object proposals. We utilize different resolutions of feature maps in the network to detect object instances of various sizes. Specifically, we divide the anchor candidates into three clusters based on the scale size and place them on feature maps of distinct strides to detect small, medium and large objects, respectively. Deeper feature maps contain region-level semantics which can help shallow counterparts to identify small objects. Therefore we design a zoom-in sub-network to increase the resolution of high level features via a deconvolution operation. The high-level features with high resolution are then combined and merged with low-level features to detect objects. Furthermore, we devise a recursive training pipeline to consecutively regress region proposals at the training stage in order to match the iterative regression at the testing stage. We demonstrate the effectiveness of the proposed method on ILSVRC DET and MS COCO datasets, where our algorithm performs better than the state-of-the-arts in various evaluation metrics. It also increases average precision by around 2% in the detection system.", "title": "" }, { "docid": "fbc0784d94e09cab75ee5a970786c30b", "text": "Adequate conservation and management of shark populations is becoming increasingly important on a global scale, especially because many species are exceptionally vulnerable to overfishing. Yet, reported catch statistics for sharks are incomplete, and mortality estimates have not been available for sharks as a group. Here, the global catch and mortality of sharks from reported and unreported landings, discards, and shark finning are being estimated at 1.44 million metric tons for the year 2000, and at only slightly less in 2010 (1.41 million tons).
Based on an analysis of average shark weights, this translates into a total annual mortality estimate of about 100 million sharks in 2000, and about 97 million sharks in 2010, with a total range of possible values between 63 and 273 million sharks per year. Further, the exploitation rate for sharks as a group was calculated by dividing two independent mortality estimates by an estimate of total global biomass. As an alternative approach, exploitation rates for individual shark populations were compiled and averaged from stock assessments and other published sources. The resulting three independent estimates of the average exploitation rate ranged between 6.4% and 7.9% of sharks killed per year. This exceeds the average rebound rate for many shark populations, estimated from the life history information on 62 shark species (rebound rates averaged 4.9% per year), and explains the ongoing declines in most populations for which data exist. The consequences of these unsustainable catch and mortality rates for marine ecosystems could be substantial. Global total shark mortality, therefore, needs to be reduced drastically in order to rebuild depleted populations and restore marine ecosystems with functional top predators. & 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "410afe59802f6c875b7ee90692ab95a7", "text": "A Bayesian consumer who is uncertain about the quality of an information source will infer that the source is of higher quality when its reports conform to the consumer’s prior expectations. We use this fact to build a model of media bias in which firms slant their reports toward the prior beliefs of their customers in order to build a reputation for quality. Bias emerges in our model even though it can make all market participants worse off. The model predicts that bias will be less severe when consumers receive independent evidence on the true state of the world and that competition between independently owned news outlets can reduce bias. We present a variety of empirical evidence consistent with these predictions.", "title": "" }, { "docid": "efc4af51a92facff03e1009b039139fe", "text": "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.", "title": "" }, { "docid": "cf61f1ecc010e5c021ebbfcf5cbfecf6", "text": "Arachidonic acid plays a central role in a biological control system where such oxygenated derivatives as prostaglandins, thromboxanes, and leukotrienes are mediators. The leukotrienes are formed by transformation of arachidonic acid into an unstable epoxide intermediate, leukotriene A4, which can be converted enzymatically by hydration to leukotriene B4, and by addition of glutathione to leukotriene C4. This last compound is metabolized to leukotrienes D4 and E4 by successive elimination of a gamma-glutamyl residue and glycine. 
Slow-reacting substance of anaphylaxis consists of leukotrienes C4, D4, and E4. The cysteinyl-containing leukotrienes are potent bronchoconstrictors, increase vascular permeability in postcapillary venules, and stimulate mucus secretion. Leukotriene B4 causes adhesion and chemotactic movement of leukocytes and stimulates aggregation, enzyme release, and generation of superoxide in neutrophils. Leukotrienes C4, D4, and E4, which are released from the lung tissue of asthmatic subjects exposed to specific allergens, seem to play a pathophysiological role in immediate hypersensitivity reactions. These leukotrienes, as well as leukotriene B4, have pro-inflammatory effects.", "title": "" }, { "docid": "ccee5411cefccf0f9db35fead317e6b5", "text": "In recent years, deep learning algorithms have become increasingly more prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyographybased gesture recognition, deep learning algorithms are seldom employed as they require an unreasonable amount of effort from a single person, to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets comprised of 19 and 17 able-bodied participants respectively (the first one is employed for pre-training) were recorded for this work, using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and is comprised of 10 able-bodied participants. Three different deep learning networks employing three different modalities as input (raw EMG, Spectrograms and Continuous Wavelet Transform (CWT)) are tested on the second and third dataset. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance for all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy which reduces the degradation in accuracy normally experienced over time.", "title": "" }, { "docid": "345e6a4f17eeaca196559ed55df3862e", "text": "Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. 
Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.", "title": "" }, { "docid": "254f437f82e14d889fe6ba15df8369ad", "text": "In academia, scientific research achievements would be inconceivable without academic collaboration and cooperation among researchers. Previous studies have discovered that productive scholars tend to be more collaborative. However, it is often difficult and time-consuming for researchers to find the most valuable collaborators (MVCs) from a large volume of big scholarly data. In this paper, we present MVCWalker, an innovative method that stands on the shoulders of random walk with restart (RWR) for recommending collaborators to scholars. Three academic factors, i.e., coauthor order, latest collaboration time, and times of collaboration, are exploited to define link importance in academic social networks for the sake of recommendation quality. We conducted extensive experiments on DBLP data set in order to compare MVCWalker to the basic model of RWR and the common neighbor-based model friend of friends in various aspects, including, e.g., the impact of critical parameters and academic factors. Our experimental results show that incorporating the above factors into random walk model can improve the precision, recall rate, and coverage rate of academic collaboration recommendations.", "title": "" }, { "docid": "89f6dc2f5c9517ba92ea55f2b693cf0b", "text": "ResNets have recently achieved state-of-the-art results on challenging computer vision tasks. In this paper, we create a novel architecture that improves ResNets by adding the ability to forget and by making the residuals more expressive, yielding excellent results. ResNet in ResNet outperforms architectures with similar amounts of augmentation on CIFAR-10 and establishes a new state-of-the-art on CIFAR-100.", "title": "" }, { "docid": "225b834e820b616e0ccfed7259499fd6", "text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. 
No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.", "title": "" }, { "docid": "cd5a1c3b3dd0de3571132404ec7646a9", "text": "The use of plant metabolites for medicinal and cosmetic purpose is today gaining popularity. The most important step in this exploitation of metabolites is extraction and isolation of compound of interest. These day we can identified two group of extraction technique called conventional technique using cheaper equipment, high amount of solvent and takes long extracting time, and new or green technique using costly equipment, elevated pressure and / or temperatures with short extracting time. After extracting secondary metabolites a step of purification and isolation are required using Chromatographic or NonChromatographic techniques. This paper reviews the different technique of extraction and identification of plant metabolites.", "title": "" }, { "docid": "1823b440ea6c3bd6232084db51d115a1", "text": "Cloud computing is a set of IT services that are provided to a customer over a network on a leased basis and with the ability to scale up or down their service requirements. Usually Cloud Computing services are delivered by a third party provider who owns the infrastructure.Cloud Computing holds the potential to eliminate the requirements for setting up of high-cost computing infrastructure for IT-based solutions and services that theindustry uses. It promises to provide a flexible IT architecture, accessible through internet from lightweight portable devices.This would allow multi-fold increase in the capacity and capabilities of the existing and new software.This new economic model for computing has found fertile ground and is attracting massive global investment. Many industries, such as banking, healthcare and education are moving towards the cloud due to the efficiency of services provided by the pay-per-use pattern based on the resources such as processing power used, transactions carried out, bandwidth consumed, data transferred, or storage space occupied etc.In a cloud computing environment, the entire data resides over a set of networked resources, enabling the data to be accessed through virtual machines. Despite the potential gains achieved from the cloud computing, the organizations are slow in accepting it due to security issues and challenges associated with it. Security is one of the major issues which hamper the growth of cloud. There are various research challenges also there for adopting cloud computing such as well managed service level agreement (SLA), privacy, interoperability and reliability.This research paper presents what cloud computing is, the various cloud models and the overview of the cloud computing architecture. 
This research paper also analyzes the key research challenges present in cloud computing and offers best practices to service providers as well as enterprises hoping to leverage cloud service to improve their bottom line in this severe economic climate.", "title": "" }, { "docid": "3510bcd9d52729766e2abe2111f8be95", "text": "Metaphors are common elements of language that allow us to creatively stretch the limits of word meaning. However, metaphors vary in their degree of novelty, which determines whether people must create new meanings on-line or retrieve previously known metaphorical meanings from memory. Such variations affect the degree to which general cognitive capacities such as executive control are required for successful comprehension. We investigated whether individual differences in executive control relate to metaphor processing using eye movement measures of reading. Thirty-nine participants read sentences including metaphors or idioms, another form of figurative language that is more likely to rely on meaning retrieval. They also completed the AX-CPT, a domain-general executive control task. In Experiment 1, we examined sentences containing metaphorical or literal uses of verbs, presented with or without prior context. In Experiment 2, we examined sentences containing idioms or literal phrases for the same participants to determine whether the link to executive control was qualitatively similar or different to Experiment 1. When metaphors were low familiar, all people read verbs used as metaphors more slowly than verbs used literally (this difference was smaller for high familiar metaphors). Executive control capacity modulated this pattern in that high executive control readers spent more time reading verbs when a prior context forced a particular interpretation (metaphorical or literal), and they had faster total metaphor reading times when there was a prior context. Interestingly, executive control did not relate to idiom processing for the same readers. Here, all readers had faster total reading times for high familiar idioms than literal phrases. Thus, executive control relates to metaphor but not idiom processing for these readers, and for the particular metaphor and idiom reading manipulations presented.", "title": "" }, { "docid": "6116c5432f18631b078bd568b05907cc", "text": "Standardized criteria for diagnosis and response assessment are needed to interpret and compare clinical trials and for approval of new therapeutic agents by regulatory agencies. Therefore, a National Cancer Institute-sponsored Working Group (NCI-WG) on chronic lymphocytic leukemia (CLL) published guidelines for the design and conduct of clinical trials for patients with CLL in 1988, which were updated in 1996. During the past decade, considerable progress has been achieved in defining new prognostic markers, diagnostic parameters, and treatment options. This prompted the International Workshop on Chronic Lymphocytic Leukemia (IWCLL) to provide updated recommendations for the management of CLL in clinical trials and general practice.", "title": "" }, { "docid": "c35341d3b82dd4921e752b4b774cd501", "text": "The initial concept of a piezoelectric transformer (PT) was proposed by C.A. Rosen, K. Fish, and H.C. Rothenberg and is described in the U.S. Patent 2,830,274, applied for in 1954. Fifty years later, this technology has become one of the most promising alternatives for replacing the magnetic transformers in a wide range of applications. 
Piezoelectric transformers convert electrical energy into electrical energy by using acoustic energy. These devices are typically manufactured using piezoelectric ceramic materials that vibrate in resonance. With appropriate designs it is possible to step-up and step-down the voltage between the input and output of the piezoelectric transformer, without making use of wires or any magnetic materials. This technology did not reach commercial success until early the 90s. During this period, several companies, mainly in Japan, decided to introduce PTs for applications requiring small size, high step-up voltages, and low electromagnetic interference (EMI) signature. These PTs were developed based on optimizations of the initial Rosen concept, and thus typically referred to as “Rosen-type PTs”. Today’s, PTs are used for backlighting LCD displays in notebook computers, PDAs, and other handheld devices. The PT yearly sales estimate was about over 20 millions in 2000 and industry sources report that production of piezoelectric transformers in Japan is growing steadily at a rate of 10% annually. The reliability achieved in LCD applications and the advances in the related technologies (materials, driving circuitry, housing and manufacturing) have currently spurred enormous interest and confidence in expanding this technology to other fields of application. This, consequently, is expanding the business opportunities for PTs. Currently, the industry trend is moving in two directions: low-cost product market and valueadded product market. Prices of PTs have been declining in recent years, and this trend is expected to continue. Soon (if not already), this technology will become a serious candidate for replacing the magnetic transformers in cost-sensitive applications. Currently, leading makers are reportedly focusing on more value-added products. Two of the key value-added areas are miniaturization and higher output power. Piezoelectric transformers for power applications require lower output impedances, high power capabilities and high efficiency under step-down conditions. Among the different PT designs proposed as alternatives to the classical Rosen configuration, Transoner laminated radial PT has been demonstrated as the most promising technology for achieving high power levels. Higher powers than 100W, with power densities in the range of 30-40 W/cm2 have been demonstrated. Micro-PTs are currently being developed with sizes of less than 5mm diameter and 1mm thickness allowing up to 0.5W power transfer and up to 50 times gain. Smaller sizes could be in the future integrated to power MEMs systems. This paper summarizes the state of the art on the PT technology and introduces the current trends of this industry. HISTORICAL INTRODUCTION It has been 50 years since the development of piezoelectric ceramic transformers began. The first invention on piezoelectric transformers (PTs) has been traditionally associated with the patent of Charles A. Rosen et al., which was disclosed on January 4, 1954 and finally granted on April 8, 1958 [1]. Briefly after this first application, on September 17, 1956, H.Jaffe and Don A. Berlincourt, on behalf of the Clevite Companies, applied for the second patent on PT technology, which was granted on Jan. 24, 1961 [2]. Since then, the PT technology has been growing simultaneously with the progress in piezoceramic technology as well as with the electronics in general. 
Currently, it is estimated that 25-30 millions of PTs are annually sold commercially for different applications. Thus, the growth of the technology is promising and is expected to expand to many other areas as an alternative to magnetic transformers. In attempt to be historically accurate, it is required to mention that the first studies on PTs initially took place in the late 20s and early 30s. Based on the research of the author of this paper, Alexander McLean Nicolson has the honor of being the first researcher to consider the idea of a piezoelectric transformer. In his patent US1829234 titled “Piezo-electric crystal transformer” [3], Nicolson describes the first research in this field. The work of Nicolson on piezoelectric transformers, recognized in several other patents [4], was limited to the use of piezoelectric crystals with obvious limitations in performance, design and applicability as compared to the later developed piezoceramic materials. Piezoelectric transformers (from now on referred to as piezoelectric ceramic transformers), like magnetic devices, are basically energy converters. A magnetic transformer operates by converting electrical input to magnetic energy and then reconverting that magnetic energy back to electrical output. A PT has an analogous operating mechanism. It converts an electrical input into mechanical energy and subsequently reconverts this mechanical energy back to an electrical output. This mechanical conversion is achieved by a standing wave vibrating at a frequency equal to a multiple of the mechanical resonance frequency of the transformer body, which is typically in the range of 50 to 150 kHz. Recently, PTs operating at 1MHz and higher have also been proposed. Piezoelectric transformers were initially considered as high voltage transformer devices. Two different designs driving the initial steps in the development on these “conventional” PTs were, the so-called Rosen-type PT designs and the contour extensional mode uni-poled PTs. Until early in 90s, the technology evolution was based on improvements in these two basic designs. Although Rosen proposed several types of PT embodiments in his patents and publications, the name of “Rosen-type PT” currently refers to those PTs representing an evolution on the initial rectangular design idea proposed by C. Rosen in 1954, as shown in Figure 1.", "title": "" } ]
scidocsrr
53fd00c1572cfe102ea0ed242e0e6172
Road-Condition Recognition Using 24-GHz Automotive Radar
[ { "docid": "ba6b016ace0c098ab345cd5a01af470d", "text": "This paper describes a vehicle detection system fusing radar and vision data. Radar data are used to locate areas of interest on images. Vehicle search in these areas is mainly based on vertical symmetry. All the vehicles found in different image areas are mixed together, and a series of filters is applied in order to delete false detections. In order to speed up and improve system performance, guard rail detection and a method to manage overlapping areas are also included. Both methods are explained and justified in this paper. The current algorithm analyzes images on a frame-by-frame basis without any temporal correlation. Two different statistics, namely: 1) frame based and 2) event based, are computed to evaluate vehicle detection efficiency, while guard rail detection efficiency is computed in terms of time savings and correct detection rates. Results and problems are discussed, and directions for future enhancements are provided", "title": "" } ]
[ { "docid": "38382c04e7dc46f5db7f2383dcae11fb", "text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.", "title": "" }, { "docid": "d18cd051902c5c0e12db489af439c5d5", "text": "Face detection has achieved great success using the region-based methods. In this report, we propose a region-based face detector applying deep networks in a fully convolutional fashion, named Face R-FCN. Based on Region-based Fully Convolutional Networks (R-FCN), our face detector is more accurate and computationally efficient compared with the previous R-CNN based face detectors. In our approach, we adopt the fully convolutional Residual Network (ResNet) as the backbone network. Particularly, we exploit several new techniques including position-sensitive average pooling, multi-scale training and testing and on-line hard example mining strategy to improve the detection accuracy. Over two most popular and challenging face detection benchmarks, FDDB and WIDER FACE, Face R-FCN achieves superior performance over state-of-the-arts.", "title": "" }, { "docid": "0186c053103d06a8ddd054c3c05c021b", "text": "The brain-gut axis is a bidirectional communication system between the central nervous system and the gastrointestinal tract. Serotonin functions as a key neurotransmitter at both terminals of this network. Accumulating evidence points to a critical role for the gut microbiome in regulating normal functioning of this axis. In particular, it is becoming clear that the microbial influence on tryptophan metabolism and the serotonergic system may be an important node in such regulation. There is also substantial overlap between behaviours influenced by the gut microbiota and those which rely on intact serotonergic neurotransmission. The developing serotonergic system may be vulnerable to differential microbial colonisation patterns prior to the emergence of a stable adult-like gut microbiota. At the other extreme of life, the decreased diversity and stability of the gut microbiota may dictate serotonin-related health problems in the elderly. The mechanisms underpinning this crosstalk require further elaboration but may be related to the ability of the gut microbiota to control host tryptophan metabolism along the kynurenine pathway, thereby simultaneously reducing the fraction available for serotonin synthesis and increasing the production of neuroactive metabolites. The enzymes of this pathway are immune and stress-responsive, both systems which buttress the brain-gut axis. In addition, there are neural processes in the gastrointestinal tract which can be influenced by local alterations in serotonin concentrations with subsequent relay of signals along the scaffolding of the brain-gut axis to influence CNS neurotransmission. 
Therapeutic targeting of the gut microbiota might be a viable treatment strategy for serotonin-related brain-gut axis disorders.", "title": "" }, { "docid": "f3f441c2cf1224746c0bfbb6ce02706d", "text": "This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.", "title": "" }, { "docid": "e6dba9e9ad2db632caed6b19b9f5a010", "text": "Efficient and accurate similarity searching on a large time series data set is an important but non- trivial problem. In this work, we propose a new approach to improve the quality of similarity search on time series data by combining symbolic aggregate approximation (SAX) and piecewise linear approximation. The approach consists of three steps: transforming real valued time series sequences to symbolic strings via SAX, pattern matching on the symbolic strings and a post-processing via Piecewise Linear Approximation.", "title": "" }, { "docid": "12519f0131b8d451654ea790c977acd0", "text": "In the early 1980s, Scandinavian software designers who sought to make systems design more participatory and democratic turned to prototyping. The \"Scandinavian challenge\" of making computers more democratic inspired others who became interested in user-centered design; information designers on both sides of the Atlantic began to employ prototyping as a way to encourage user participation and feedback in various design approaches. But, as European and North American researchers have pointed out, prototyping is seen as meeting very different needs in Scandinavia and in the US. Thus design approaches that originate on either side of the Atlantic have implemented prototyping quite differently, have deployed it to meet quite different goals, and have tended to understand prototyping results in different ways.These differences are typically glossed over in technical communication research. Technical communicators have lately become quite excited about prototyping's potential to help design documentation, but the technical communication literature shows little critical awareness of the methodological differences between Scandinavian and US prototyping. In this presentation, I map out some of these differences by comparing prototyping in a variety of design approaches originating in Scandinavia and the US, such as mock-ups, cooperative prototyping, CARD, PICTIVE, and contextual design. Finally, I discuss implications for future technical communication research involving prototyping.", "title": "" }, { "docid": "224ff3cafa187e246150c9bdb9aecd2e", "text": "We present a novel method for recovering 6D object pose in RGB-D images. By contrast with recent holistic or local patch-based method, we combine holistic patches and local patches together to fulfil this task. 
Our method has three stages, including holistic patch classification, local patch regression and fine 6D pose estimation. In the first stage, we apply a simple Convolutional Neural Network (CNN) to classify all the sampled holistic patches from the scene image. After that, the candidate region of target object can be segmented. In the second stage, as proposed in Doumanoglou et al. [16] and Kehl et al. [17], a Convolutional Autoencoder (CAE) is employed to extract condensed local patch feature, and coarse 6D object pose can be estimated by the regression of feature voting. Finally, we apply Particle Swarm Optimization (PSO) to refine 6D object pose. Our method is evaluated on the LINEMOD dataset [5] and the Occlusion dataset [10, 5], and compared with the state-of-the-art on the same sequences. Experimental results show that our method has high precision and good performance under foreground occlusion and background clutter conditions.", "title": "" }, { "docid": "8397bdb99c650ea07feeb3301698dd79", "text": "This section gives a short survey of the principles and the terminology of phased array radar. Beamforming, radar detection and parameter estimation are described. The concept of subarrays and monopulse estimation with arbitrary subarrays is developed. As a preparation to adaptive beam forming, which is treated in several other sections, the topic of pattern shaping by deterministic weighting is presented in more detail. 1.0 INTRODUCTION Arrays are today used for many applications and the view and terminology is quite different. We give here an introduction to the specific features of radar phased array antennas and the associated signal processing. First the radar principle and the terminology is explained. Beamforming with a large number of array elements is the typical radar feature and the problems with such antennas are in other applications not known. We discuss therefore the special problems of fully filled arrays, large apertures and bandwidth. To reduce cost and space the antenna outputs are usually summed up into subarrays. Digital processing is done only with the subarray outputs. The problems of such partial analogue and digital beamforming, in particular the grating problems are discussed. This topic will be reconsidered for adaptive beamforming, space-time adaptive processing (STAP), and SAR. Radar detection, range and direction estimation is derived from statistical hypotheses testing and parameter estimation theory. The main application of this theory is the derivation of adaptive beamforming to be considered in the following lectures. In this lecture we present as an application the derivation of the monopulse estimator which is in the following lectures extended to monopulse estimators for adaptive arrays or STAP. As beamforming plays a central role in phased arrays and as a preparation to all kinds of adaptive beamforming, a detailed presentation of deterministic antenna pattern shaping and the associated channel accuracy requirements is given. 2.0 FUNDAMENTALS OF RADAR AND ARRAYS 2.1 Nomenclature The radar principle is sketched in Figure 1. A pulse of length τ is transmitted, is reflected at the target and is received again at time t0 at the radar. From this signal travelling time the range is calculated R0= ct0 /2. The process is repeated at the pulse repetition interval (PRI) T. The maximum unambiguous range is Nickel, U. (2006) Fundamentals of Signal Processing for Phased Array Radar. In Advanced Radar Signal and Data Processing (pp. 1-1 – 1-22). 
Educational Notes RTO-EN-SET-086, Paper 1. Neuilly-sur-Seine, France: RTO. Available from: http://www.rto.nato.int/abstracts.asp. RTO-EN-SET-086 1 1 Report Documentation Page Form Approved OMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. 1. REPORT DATE 01 SEP 2006 2. REPORT TYPE N/A 3. DATES COVERED 4. TITLE AND SUBTITLE Fundamentals of Signal Processing for Phased Array Radar 5a. CONTRACT NUMBER", "title": "" }, { "docid": "678dd457ead0cccc8ad735aeec3fdcad", "text": "A general model of decentralized stochastic control called partial history sharing information structure is presented. In this model, at each step the controllers share part of their observation and control history with each other. This general model subsumes several existing models of information sharing as special cases. Based on the information commonly known to all the controllers, the decentralized problem is reformulated as an equivalent centralized problem from the perspective of a coordinator. The coordinator knows the common information and selects prescriptions that map each controller's local information to its control actions. The optimal control problem at the coordinator is shown to be a partially observable Markov decision process (POMDP) which is solved using techniques from Markov decision theory. This approach provides 1) structural results for optimal strategies and 2) a dynamic program for obtaining optimal strategies for all controllers in the original decentralized problem. Thus, this approach unifies the various ad-hoc approaches taken in the literature. In addition, the structural results on optimal control strategies obtained by the proposed approach cannot be obtained by the existing generic approach (the person-by-person approach) for obtaining structural results in decentralized problems; and the dynamic program obtained by the proposed approach is simpler than that obtained by the existing generic approach (the designer's approach) for obtaining dynamic programs in decentralized problems.", "title": "" }, { "docid": "a794be8dfed40c3de9e15715aa64cc79", "text": "In the winter of 1991 I (GR) sent to Nature a report on a surprising set of neurons that we (Giuseppe Di Pellegrino, Luciano Fadiga, Leonardo Fogassi, Vittorio Gallese) had found in the ventral premotor cortex of the monkey. The fundamental characteristic of these neurons was that they discharged both when the monkey performed a certain motor act (e.g., grasping an object) and when it observed another individual (monkey or human) performing that or a similar motor act (Di Pellegrino et al. 1992). These neurons are now known as mirror neurons (Fig. 1). Nature rejected our paper for its “lack of general interest” and suggested publication in a specialized journal. 
At this point I called Prof. Otto Creutzfeld, the then Coordinating Editor of Experimental Brain Research. I told him that I thought we found something really interesting and asked him to read our manuscript before sending it to the referees. After a few days he called me back saying that indeed our findings were, according to him, of extraordinary interest. Our article appeared in Experimental Brain Research a few months later. The idea of sending our report on mirror neurons to Experimental Brain Research, rather than to another neuroscience journal, was motivated by a previous positive experience with that journal. A few years earlier, Experimental Brain Research accepted an article in which we presented (Rizzolatti et al. 1988) a new view (something that typically referees did not like) on the organization of the ventral premotor cortex of the monkey and reported the findings that paved the way for the discovery of mirror neurons. In that article we described how, in the ventral premotor cortex (area F5) of the monkey, there are neurons that respond both when the monkey performs a motor act (e.g., grasping or holding) and when it observes an object whose physical features fit the type of grip coded by that neuron (e.g., precision grip/small objects; whole hand/large objects). These neurons (now known as “canonical neurons”, Murata et al. 1997) and neurons with similar properties, described by Sakata et al. (1995) in the parietal cortex are now universally considered the neural substrate of the mechanism through which object affordances are translated into motor acts (see Jeannerod et al. 1995). We performed the experiments on the motor properties of F5 in 1988 using an approach that should almost necessarily lead to the discovery of mirror neurons if these neurons existed in area F5. In order to test the F5 neurons with objects that may interest the monkeys, we used pieces of food of different size and shape. To give the monkey some food, we had, of course, to grasp it. To our surprise we found that some F5 neurons discharged not when the monkey looked at the food, but when the experimenter grasped it. The mirror mechanism was discovered. The next important role of Experimental Brain Research in the discovery of mirror neurons was its acceptance in G. Rizzolatti (&) · M. Fabbri-Destro Dipartimento di Neuroscienze, Sezione Fisiologia, Università di Parma, via Volturno, 39, 43100 Parma, Italy e-mail: [email protected]", "title": "" }, { "docid": "980d771f582372785214fd133fd58db2", "text": "With the increasing interest in deeper understanding of the loss surface of many non-convex deep models, this paper presents a unifying framework to study the local/global optima equivalence of the optimization problems arising from training of such non-convex models. Using the local openness property of the underlying training models, we provide simple sufficient conditions under which any local optimum of the resulting optimization problem is globally optimal. We first completely characterize the local openness of matrix multiplication mapping in its range. Then we use our characterization to: 1) show that every local optimum of two layer linear networks is globally optimal. Unlike many existing results in the literature, our result requires no assumption on the target data matrix Y, and input data matrix X. 2) develop almost complete characterization of the local/global optima equivalence of multi-layer linear neural networks.
We provide various counterexamples to show the necessity of each of our assumptions. 3) show global/local optima equivalence of non-linear deep models having certain pyramidal structure. Unlike some existing works, our result requires no assumption on the differentiability of the activation functions and can go beyond “full-rank” cases.", "title": "" }, { "docid": "5d614e53ddd6675acf9f8d3931b3dc20", "text": "3D reconstruction methods based on active stereo technique have been widely used for many practical systems. Many of these systems are configured with a single camera and a single projector. Since such systems can only capture one side of the target object, several attempts have been conducted to enlarge the captured area, especially multi-projector systems attract many researchers. For multi-projector based systems, overlap between multiple pattern projections is a serious problem. Even if different color channels are used for each projector, complete separation is not possible because of color crosstalks. Another open problem is decoding errors of the projected patterns, which causes a failure on extracting positional information of the projected pattern form the captured image. Among several reasons for such errors, color crosstalks are crucial because their features are similar to the main signal and difficult to be decomposed. In this paper, we solve these problems by utilizing machine learning techniques where a convolutional neural network is trained to extract low dimensional pattern features for each projector. In addition, it is trained to suppress the color crosstalks from different projectors. Using this new technique, we succeeded in reconstructing 3D shapes from images where multiple patterns are overlapped.", "title": "" }, { "docid": "fb0b06eb6238c008bef7d3b2e9a80792", "text": "An N-dimensional image is divided into “object” and “background” segments using a graph cut approach. A graph is formed by connecting all pairs of neighboring image pixels (voxels) by weighted edges. Certain pixels (voxels) have to be a priori identified as object or background seeds providing necessary clues about the image content. Our objective is to find the cheapest way to cut the edges in the graph so that the object seeds are completely separated from the background seeds. If the edge cost is a decreasing function of the local intensity gradient then the minimum cost cut should produce an object/background segmentation with compact boundaries along the high intensity gradient values in the image. An efficient, globally optimal solution is possible via standard min-cut/max-flow algorithms for graphs with two terminals. We applied this technique to interactively segment organs in various 2D and 3D medical images.", "title": "" }, { "docid": "948e65673f679fe37027f4dc496397f8", "text": "Online courses are growing at a tremendous rate, and although we have discovered a great deal about teaching and learning in the online environment, there is much left to learn. One variable that needs to be explored further is procrastination in online coursework. In this mixed methods study, quantitative methods were utilized to evaluate the influence of online graduate students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Additionally, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. 
Collectively, results indicated that ability, effort, context, and luck influenced procrastination in this sample of graduate students. A discussion of these findings, implications for instructors, and recommendations for future research ensues. Online course offerings and degree programs have recently increased at a rapid rate and have gained in popularity among students (Allen & Seaman, 2010, 2011). Garrett (2007) reported that half of prospective students surveyed about postsecondary programs expressed a preference for online and hybrid programs, typically because of the flexibility and convenience (Daymont, Blau, & Campbell, 2011). Advances in learning management systems such as Blackboard have facilitated the dramatic increase in asynchronous programs. Although the research literature concerning online learning has blossomed over the past decade, much is left to learn about important variables that impact student learning and achievement. The purpose of this mixed methods study was to better understand the relationship between online graduate students’ attributional beliefs and their tendency to procrastinate. The approach to this objective was twofold. First, quantitative methods were utilized to evaluate the influence of students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Second, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Journal of Interactive Online Learning Rakes, Dunn, and Rakes", "title": "" }, { "docid": "16e03a9071e84f20236aa84dca70a56c", "text": "In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.", "title": "" }, { "docid": "129a85f7e611459cf98dc7635b44fc56", "text": "Pain in the oral and craniofacial system represents a major medical and social problem. Indeed, a U.S. Surgeon General’s report on orofacial health concludes that, ‘‘. . .oral health means much more than healthy teeth. It means being free of chronic oral-facial pain conditions. . .’’ [172]. Community-based surveys indicate that many subjects commonly report pain in the orofacial region, with estimates of >39 million, or 22% of Americans older than 18 years of age, in the United States alone [108]. Other population-based surveys conducted in the United Kingdom [111,112], Germany [91], or regional pain care centers in the United States [54] report similar occurrence rates [135]. Importantly, chronic widespread body pain, patient sex and age, and psychosocial factors appear to serve as risk factors for chronic orofacial pain [1,2,92,99,138]. 
In addition to its high degree of prevalence, the reported intensities of various orofacial pain conditions are similar to that observed with many spinal pain disorders (Fig. 1). Moreover, orofacial pain is derived from many unique target tissues, such as the meninges, cornea, tooth pulp, oral/ nasal mucosa, and temporomandibular joint (Fig. 2), and thus has several unique physiologic characteristics compared with the spinal nociceptive system [23]. Given these considerations, it is not surprising that accurate diagnosis and effective management of orofacial pain conditions represents a significant health care problem. Publications in the field of orofacial pain demonstrate a steady increase over the last several decades (Fig. 3). This is a complex literature; a recent bibliometric analysis of orofacial pain articles published in 2004–2005 indicated that 975 articles on orofacial pain were published in 275 journals from authors representing 54 countries [142]. Thus, orofacial pain disorders represent a complex constellation of conditions with an equally diverse literature base. Accordingly, this review will focus on a summary of major research foci on orofacial pain without attempting to provide a comprehensive review of the entire literature.", "title": "" }, { "docid": "3b6b746f4467fd53ade1d6d2798c45b7", "text": "We present a new deep learning architecture (called Kdnetwork) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and shares parameters of these transformations according to the subdivisions of the point clouds imposed onto them by kdtrees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform twodimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behavior. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.", "title": "" }, { "docid": "0aefbe4b8d84c1d5829571ae61ca091a", "text": "More than 30% of U.S. adults report having experienced low back pain within the preceding three months. Although most low back pain is nonspecific and self-limiting, a subset of patients develop chronic low back pain, defined as persistent symptoms for longer than three months. Low back pain is categorized as nonspecific low back pain without radiculopathy, low back pain with radicular symptoms, or secondary low back pain with a spinal cause. Imaging should be reserved for patients with red flags for cauda equina syndrome, recent trauma, risk of infection, or when warranted before treatment (e.g., surgical, interventional). Prompt recognition of cauda equina syndrome is critical. Patient education should be combined with evidence-guided pharmacologic therapy. Goals of therapy include reducing the severity of pain symptoms, pain interference, and disability, as well as maximizing activity. Validated tools such as the Oswestry Disability Index can help assess symptom severity and functional change in patients with chronic low back pain. Epidural steroid injections do not improve pain or disability in patients with spinal stenosis. Spinal manipulation therapy produces small benefits for up to six months. 
Because long-term data are lacking for spinal surgery, patient education about realistic outcome expectations is essential.", "title": "" }, { "docid": "b1a538752056e91fd5800911f36e6eb0", "text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.", "title": "" }, { "docid": "fff6c1ca2fde7f50c3654f1953eb97e6", "text": "This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.", "title": "" } ]
scidocsrr
f2c7a3e02db74be023c9bd11c4264bc3
Internet Over-Users' Psychological Profiles: A Behavior Sampling Analysis on Internet Addiction
[ { "docid": "ae72dc57784a9b3bb05dea9418e28914", "text": "This study explores Internet addiction among some of the Taiwan's college students. Also covered are a discussion of the Internet as a form of addiction, and related literature on this issue. This study used the Uses and Grati®cations theory and the Play theory in mass communication. Nine hundred and ten valid surveys were collected from 12 universities and colleges around Taiwan. The results indicated that Internet addiction does exist among some of Taiwan's college students. In particular, 54 students were identi®ed as Internet addicts. It was found that Internet addicts spent almost triple the number of hours connected to the Internet as compare to non-addicts, and spent signi®cantly more time on BBSs, the WWW, e-mail and games than non-addicts. The addict group found the Internet entertaining, interesting, interactive, and satisfactory. The addict group rated Internet impacts on their studies and daily life routines signi®cantly more negatively than the non-addict group. The study also found that the most powerful predictor of Internet addiction is the communication pleasure score, followed by BBS use hours, sex, satisfaction score, and e-mail-use hours. 7 2000 Elsevier Science Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "5e182532bfd10dee3f8d57f14d1f4455", "text": "Camera calibrating is a crucial problem for further metric scene measurement. Many techniques and some studies concerning calibration have been presented in the last few years. However, it is still di1cult to go into details of a determined calibrating technique and compare its accuracy with respect to other methods. Principally, this problem emerges from the lack of a standardized notation and the existence of various methods of accuracy evaluation to choose from. This article presents a detailed review of some of the most used calibrating techniques in which the principal idea has been to present them all with the same notation. Furthermore, the techniques surveyed have been tested and their accuracy evaluated. Comparative results are shown and discussed in the article. Moreover, code and results are available in internet. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "65c4d3f99a066c235bb5d946934bee05", "text": "This paper describes a new Augmented Reality (AR) system called HoloLens developed by Microsoft, and the interaction model for supporting collaboration in this space with other users. Whereas traditional AR collaboration is between two or more head-mounted displays (HMD) users, we describe collaboration between a single HMD user and others who join the space by hitching on the view of the HMD user. The remote companions participate remotely through Skype-enabled devices such as tablets or PC's. The interaction is novel in the use of a 3D space with digital objects where the interaction by remote parties can be achieved asynchronously and reflected back to the primary user. We describe additional collaboration scenarios possible with this arrangement.", "title": "" }, { "docid": "7dc6c67b76ed30574a27305fa1596966", "text": "The ultrawideband (UWB) planar antenna is designed as a circular metallic patch fed by a coplanar waveguide (CPW). This antenna provides the impedance bandwidth of the wideband response from 2.5 to 12 GHz. To achieve the notched characteristics at desirable frequencies, the electric ring resonator (ERR) incorporated into the CPW feedline is proposed for use in the planar configuration of the UWB antenna. The notched frequency band is controlled by dimensions of the ERR structure. The single-notched band can be obtained by placing a single ERR beneath the CPW structure. For implementation of the multinotch band, a modified multimode structure of the ERR is examined. Reconfigurability of the first notched band is provided by using a digital variable capacitor (DVC) instead of ERR's quasi-lumped capacitance. The results of simulations and measurements are in a good agreement.", "title": "" }, { "docid": "70a0d815aaee61633e42ec33ec55eb72", "text": "Massive amounts of data are available in today’s Digital Libraries (DLs). The challenge is to find relevant information quickly and easily, and to use it effectively. A standard way to access DLs is via a text -based query issued by a single user. Typically, the query results in a potentially very long ordered list of matching documents, that makes it hard for users to find what they are looking for. This paper presents iScape, a shared virtual desktop world dedicated to the collaborative exploration and management of information. Data mining and information visualization techniques are applied to extract and visualize semantic relationships in search results. 
A three-dimensional (3-D) online browser system is exploited to facilitate complex and sophisticated human-computer and human-human interaction. Informal user studies have been conducted to compare the iScape world with a text -based, a 2-D visual Web interface, and a 3-D non-collaborative CAVE interface. We conclude with a discussion.", "title": "" }, { "docid": "a33486dfec199cd51e885d6163082a96", "text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.", "title": "" }, { "docid": "d7c2d97fbd7591bdd53e711ed5582f6c", "text": "Progress in Information and Communication Technologies (ICTs) is shaping more and more the healthcare domain. ICTs adoption provides new opportunities, as well as discloses novel and unforeseen application scenarios. As a result, the overall health sector is potentially benefited, as the quality of medical services is expected to be enhanced and healthcare costs are reduced, in spite of the increasing demand due to the aging population. Notwithstanding the above, the scientific literature appears to be still quite scattered and fragmented, also due to the interaction of scientific communities with different background, skills, and approaches. 
A number of specific terms have become of widespread use (e.g., regarding ICTs-based healthcare paradigms as well as at health-related data formats), but without commonly-agreed definitions. While scientific surveys and reviews have also been proposed, none of them aims at providing a holistic view of how today ICTs are able to support healthcare. This is the more and more an issue, as the integrated application of most if not all the main ICTs pillars is the most agreed upon trend, according to the Industry 4.0 paradigm about ongoing and future industrial revolution. In this paper we aim at shedding light on how ICTs and healthcare are related, identifying the most popular ICTs-based healthcare paradigms, together with the main ICTs backing them. Studying more than 300 papers, we survey outcomes of literature analyses and results from research activities carried out in this field. We characterize the main ICTs-based healthcare paradigms stemmed out in recent years fostered by the evolution of ICTs. Dissecting the scientific literature, we also identify the technological pillars underpinning the novel applications fueled by these technological advancements. Guided by the scientific literature, we review a number of application scenarios gaining momentum thanks to the beneficial impact of ICTs. As the evolution of ICTs enables to gather huge and invaluable data from numerous and highly varied sources in easier ways, here we also focus on the shapes that this healthcare-related data may take. This survey provides an up-to-date picture of the novel healthcare applications enabled by the ICTs advancements, with a focus on their specific hottest research challenges. It helps the interested readership (from both technological and medical fields) not to lose orientation in the complex landscapes possibly generated when advanced ICTs are adopted in application scenarios dictated by the critical healthcare domain.", "title": "" }, { "docid": "ef8d88d57858706ba269a8f3aaa989f3", "text": "The mid 20 century witnessed some serious attempts in studies of play and games with an emphasis on their importance within culture. Most prominently, Johan Huizinga (1944) maintained in his book Homo Ludens that the earliest stage of culture is in the form of play and that culture proceeds in the shape and the mood of play. He also claimed that some elements of play crystallised as knowledge such as folklore, poetry and philosophy as culture advanced.", "title": "" }, { "docid": "e45f84027c87259c6826d9755be363e7", "text": "Preterm infants are susceptible to inflammation-induced white matter injury but the exposures that lead to this are uncertain. Histologic chorioamnionitis (HCA) reflects intrauterine inflammation, can trigger a fetal inflammatory response, and is closely associated with premature birth. In a cohort of 90 preterm infants with detailed placental histology and neonatal brain magnetic resonance imaging (MRI) data at term equivalent age, we used Tract-based Spatial Statistics (TBSS) to perform voxel-wise statistical comparison of fractional anisotropy (FA) data and computational morphometry analysis to compute the volumes of whole brain, tissue compartments and cerebrospinal fluid, to test the hypothesis that HCA is an independent antenatal risk factor for preterm brain injury. 
Twenty-six (29%) infants had HCA and this was associated with decreased FA in the genu, cingulum cingulate gyri, centrum semiovale, inferior longitudinal fasciculi, limbs of the internal capsule, external capsule and cerebellum (p < 0.05, corrected), independent of degree of prematurity, bronchopulmonary dysplasia and postnatal sepsis. This suggests that diffuse white matter injury begins in utero for a significant proportion of preterm infants, which focuses attention on the development of methods for detecting fetuses and placentas at risk as a means of reducing preterm brain injury.", "title": "" }, { "docid": "28a86caf1d86c58941f72c71699fabb1", "text": "Dicing of ultrathin (e.g. <; 75um thick) “via-middle” 3DI/TSV semiconductor wafers proves to be challenging because the process flow requires the dicing step to occur after wafer thinning and back side processing. This eliminates the possibility of using any type of “dice-before-grind” techniques. In addition, the presence of back side alignment marks, TSVs, or other features in the dicing street can add challenges for the dicing process. In this presentation, we will review different dicing processes used for 3DI/TSV via-middle products. Examples showing the optimization process for a 3DI/TSV memory device wafer product are provided.", "title": "" }, { "docid": "22d233c7f0916506d2fc23b3a8ef4633", "text": "CD69 is a type II C-type lectin involved in lymphocyte migration and cytokine secretion. CD69 expression represents one of the earliest available indicators of leukocyte activation and its rapid induction occurs through transcriptional activation. In this study we examined the molecular mechanism underlying mouse CD69 gene transcription in vivo in T and B cells. Analysis of the 45-kb region upstream of the CD69 gene revealed evolutionary conservation at the promoter and at four noncoding sequences (CNS) that were called CNS1, CNS2, CNS3, and CNS4. These regions were found to be hypersensitive sites in DNase I digestion experiments, and chromatin immunoprecipitation assays showed specific epigenetic modifications. CNS2 and CNS4 displayed constitutive and inducible enhancer activity in transient transfection assays in T cells. Using a transgenic approach to test CNS function, we found that the CD69 promoter conferred developmentally regulated expression during positive selection of thymocytes but could not support regulated expression in mature lymphocytes. Inclusion of CNS1 and CNS2 caused suppression of CD69 expression, whereas further addition of CNS3 and CNS4 supported developmental-stage and lineage-specific regulation in T cells but not in B cells. We concluded CNS1-4 are important cis-regulatory elements that interact both positively and negatively with the CD69 promoter and that differentially contribute to CD69 expression in T and B cells.", "title": "" }, { "docid": "b91291a9b64ef7668633c2a3df82285a", "text": "Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. 
Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github. com/artetxem/vecmap.", "title": "" }, { "docid": "7cc3da275067df8f6c017da37025856c", "text": "A simple, green method is described for the synthesis of Gold (Au) and Silver (Ag) nanoparticles (NPs) from the stem extract of Breynia rhamnoides. Unlike other biological methods for NP synthesis, the uniqueness of our method lies in its fast synthesis rates (~7 min for AuNPs) and the ability to tune the nanoparticle size (and subsequently their catalytic activity) via the extract concentration used in the experiment. The phenolic glycosides and reducing sugars present in the extract are largely responsible for the rapid reduction rates of Au(3+) ions to AuNPs. Efficient reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP) in the presence of AuNPs (or AgNPs) and NaBH(4) was observed and was found to depend upon the nanoparticle size or the stem extract concentration used for synthesis.", "title": "" }, { "docid": "2ab6fbff57bd8f084e81ccdee2e6fba7", "text": "The proposed model of an electric vehicle charging station is suitable for the fast DC charging of multiple electric vehicles. The station consists of a single grid-connected inverter with a DC bus where the electric vehicles are connected. The control of the individual electric vehicle charging processes is decentralized, while a separate central control deals with the power transfer from the AC grid to the DC bus. The electric power exchange does not rely on communication links between the station and vehicles, and a smooth transition to vehicle-to-grid mode is also possible. Design guidelines and modeling are explained in an educational way to support implementation in Matlab/Simulink. Simulations are performed in Matlab/Simulink to illustrate the behavior of the station. The results show the feasibility of the model proposed and the capability of the control system for fast DC charging and also vehicle-to-grid.", "title": "" }, { "docid": "c1f907a8dc5308e07df76c69fd0deb45", "text": "Emotion regulation has been conceptualized as a process by which individuals modify their emotional experiences, expressions, and physiology and the situations eliciting such emotions in order to produce appropriate responses to the ever-changing demands posed by the environment. Thus, context plays a central role in emotion regulation. This is particularly relevant to the work on emotion regulation in psychopathology, because psychological disorders are characterized by rigid responses to the environment. However, this recognition of the importance of context has appeared primarily in the theoretical realm, with the empirical work lagging behind. In this review, the author proposes an approach to systematically evaluate the contextual factors shaping emotion regulation. Such an approach consists of specifying the components that characterize emotion regulation and then systematically evaluating deviations within each of these components and their underlying dimensions. Initial guidelines for how to combine such dimensions and components in order to capture substantial and meaningful contextual influences are presented. 
This approach is offered to inspire theoretical and empirical work that it is hoped will result in the development of a more nuanced and sophisticated understanding of the relationship between context and emotion regulation.", "title": "" }, { "docid": "13dec2ecb04ce7fb535b666ff6bc5517", "text": "The Cosmetic Ingredient Review Expert Panel (Panel) assessed the safety of talc for use in cosmetics. The safety of talc has been the subject of much debate through the years, partly because the relationship between talc and asbestos is commonly misunderstood. Industry specifications state that cosmetic-grade talc must contain no detectable fibrous, asbestos minerals. Therefore, the large amount of available animal and clinical data the Panel relied on in assessing the safety of talc only included those studies on talc that did not contain asbestos. The Panel concluded that talc is safe for use in cosmetics in the present practices of use and concentration (some cosmetic products are entirely composed of talc). Talc should not be applied to the skin when the epidermal barrier is missing or significantly disrupted.", "title": "" }, { "docid": "7ea56b976524d77b7234340318f7e8dc", "text": "Market Integration and Market Structure in the European Soft Drinks Industry: Always Coca-Cola? by Catherine Matraves* This paper focuses on the question of European integration, considering whether the geographic level at which competition takes place differs across the two major segments of the soft drinks industry: carbonated soft drinks and mineral water. Our evidence shows firms are competing at the European level in both segments. Interestingly, the European market is being integrated through corporate strategy, defined as increased multinationality, rather than increased trade flows. To interpret these results, this paper uses the new theory of market structure where the essential notion is that in endogenous sunk cost industries such as soft drinks, the traditional inverse structure-size relation may break down, due to the escalation of overhead expenditures.", "title": "" }, { "docid": "0e3f43a28c477ae0e15a8608d3a1d4a5", "text": "This report provides an overview of the current state of the art deep learning architectures and optimisation techniques, and uses the ADNI hippocampus MRI dataset as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3dimensional hippocampal segmentation, which is important in the diagnosis of Alzheimer’s Disease. We found that a slightly unconventional ”stacked 2D” approach provides much better classification performance than simple 2D patches without requiring significantly more computational power. We also examined the popular ”tri-planar” approach used in some recently published studies, and found that it provides much better results than the 2D approaches, but also with a moderate increase in computational power requirement. Finally, we evaluated a full 3D convolutional architecture, and found that it provides marginally better results than the tri-planar approach, but at the cost of a very significant increase in computational power requirement. ar X iv :1 50 5. 02 00 0v 1 [ cs .L G ] 8 M ay 2 01 5", "title": "" }, { "docid": "4594f0649665596ea708dccaf0557de3", "text": "Detection of spam Twitter social networks is one of the significant research areas to discover unauthorized user accounts. 
A number of research works have been carried out to solve these issues but most of the existing techniques had not focused on various features and doesn't group similar user trending topics which become their major limitation. Trending topics collects the current Internet trends and topics of argument of each and every user. In order to overcome the problem of feature extraction,this work initially extracts many features such as user profile features, user activity features, location based features and text and content features. Then the extracted text features use Jenson-Shannon Divergence (JSD) measure to characterize each labeled tweet using natural language models. Different features are extracted from collected trending topics data in twitter. After features are extracted, clusters are formed to group similar trending topics of tweet user profile. Fuzzy K-means (FKM) algorithm primarily cluster the similar user profiles with same trending topics of tweet and centers are determined to similar user profiles with same trending topics of tweet from fuzzy membership function. Moreover, Extreme learning machine (ELM) algorithm is applied to analyze the growing characteristics of spam with similar topics in twitter from clustering result and acquire necessary knowledge in the detection of spam. The results are evaluated with F-measure, True Positive Rate (TPR), False Positive Rate (FPR) and Classification Accuracy with improved detection results.", "title": "" }, { "docid": "1e8cc72ad8ee3368b092aa5a96e782f9", "text": "This paper presents a newly developed implementation of remote message passing, remote actor creation and actor migration in SALSA Lite. The new runtime and protocols are implemented using SALSA Lite’s lightweight actors and asynchronous message passing, and provide significant performance improvements over SALSA version 1.1.5. Actors in SALSA Lite can now be local, the default lightweight actor implementation; remote, actors which can be referenced remotely and send remote messages, but cannot migrate; or mobile, actors that can be remotely referenced, send remote messages and migrate to different locations. Remote message passing in SALSA Lite is twice as fast, actor migration is over 17 times as fast, and remote actor creation is two orders of magnitude faster. Two new benchmarks for remote message passing and migration show this implementation has strong scalability in terms of concurrent actor message passing and migration. The costs of using remote and mobile actors are also investigated. For local message passing, remote actors resulted in no overhead, and mobile actors resulted in 30% overhead. Local creation of remote and mobile actors was more expensive with 54% overhead for remote actors and 438% for mobile actors. In distributed scenarios, creating mobile actors remotely was only 6% slower than creating remote actors remotely, and passing messages between mobile actors on different theaters was only 5.55% slower than passing messages between remote actors. These results highlight the benefits of our approach in implementing the distributed runtime over a core set of efficient lightweight actors, as well as provide insights into the costs of implementing remote message passing and actor mobility.", "title": "" }, { "docid": "ff50d07261681dcc210f01593ad2c109", "text": "A mathematical model of the system composed of two sensors, the semicircular canal and the sacculus, is suggested. 
The model is described by three lines of blocks, each line of which has the following structure: a biomechanical block, a mechanoelectrical transduction mechanism, and a block describing the hair cell ionic currents and membrane potential dynamics. The response of this system to various stimuli (head rotation under gravity and falling) is investigated. Identification of the model parameters was done with the experimental data obtained for the axolotl (Ambystoma tigrinum) at the Institute of Physiology, Autonomous University of Puebla, Mexico. Comparative analysis of the semicircular canal and sacculus membrane potentials is presented.", "title": "" } ]
scidocsrr
ce3297413cf4b6406a80e7a62690dd3b
Filtering for Texture Classification: A Comparative Study
[ { "docid": "b29caaa973e60109fbc2f68e0eb562a6", "text": "This correspondence introduces a new approach to characterize textures at multiple scales. The performance of wavelet packet spaces are measured in terms of sensitivity and selectivity for the classification of twenty-five natural textures. Both energy and entropy metrics were computed for each wavelet packet and incorporated into distinct scale space representations, where each wavelet packet (channel) reflected a specific scale and orientation sensitivity. Wavelet packet representations for twenty-five natural textures were classified without error by a simple two-layer network classifier. An analyzing function of large regularity ( 0 2 0 ) was shown to be slightly more efficient in representation and discrimination than a similar function with fewer vanishing moments (Ds) . In addition, energy representations computed from the standard wavelet decomposition alone (17 features) provided classification without error for the twenty-five textures included in our study. The reliability exhibited by texture signatures based on wavelet packets analysis suggest that the multiresolution properties of such transforms are beneficial for accomplishing segmentation, classification and subtle discrimination of texture.", "title": "" } ]
[ { "docid": "60ea2144687d867bb4f6b21e792a8441", "text": "Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.", "title": "" }, { "docid": "8133c24cd74ddee80fad02e159f1c80f", "text": "In this paper, we propose a method for detecting humans in imagery taken from a UAV. This is a challenging problem due to small number of pixels on target, which makes it more difficult to distinguish people from background clutter, and results in much larger searchspace. We propose a method for human detection based on a number of geometric constraints obtained from the metadata. Specifically, we obtain the orientation of groundplane normal, the orientation of shadows cast by humans in the scene, and the relationship between human heights and the size of their corresponding shadows. In cases when metadata is not available we propose a method for automatically estimating shadow orientation from image data. We utilize the above information in a geometry based shadow, and human blob detector, which provides an initial estimation for locations of humans in the scene. These candidate locations are then classified as either human or clutter using a combination of wavelet features, and a Support Vector Machine. Our method works on a single frame, and unlike motion detection based methods, it bypasses the global motion compensation process, and allows for detection of stationary and slow moving humans, while avoiding the search across the entire image, which makes it more accurate and very fast. We show impressive results on sequences from the VIVID dataset and our own data, and provide comparative analysis.", "title": "" }, { "docid": "18a985c7960ee6c94f3f8bde503c07ce", "text": "Computer-controlled, human-like virtual agents (VAs), are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet. Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked either by a male or female VA, representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion. Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside in order to get more space to pass while being willing to adapt their own walking paths.", "title": "" }, { "docid": "b93919bbb2dab3a687cccb71ee515793", "text": "The processing and analysis of colour images has become an important area of study and application. 
The representation of the RGB colour space in 3D-polar coordinates (hue, saturation and brightness) can sometimes simplify this task by revealing characteristics not visible in the rectangular coordinate representation. The literature describes many such spaces (HLS, HSV, etc.), but many of them, having been developed for computer graphics applications, are unsuited to image processing and analysis tasks. We describe the flaws present in these colour spaces, and present three prerequisites for 3D-polar coordinate colour spaces well-suited to image processing and analysis. We then derive 3D-polar coordinate representations which satisfy the prerequisites, namely a space based on the norm which has efficient linear transform functions to and from the RGB space; and an improved HLS (IHLS) space. The most important property of this latter space is a “well-behaved” saturation coordinate which, in contrast to commonly used ones, always has a small numerical value for near-achromatic colours, and is completely independent of the brightness function. Three applications taking advantage of the good properties of the IHLS space are described: the calculation of a saturation-weighted hue mean and of saturation-weighted hue histograms, and feature extraction using mathematical morphology. 1Updated July 16, 2003. 2Jean Serra is with the Centre de Morphologie Mathématique, Ecole des Mines de Paris, 35 rue Saint-Honoré, 77305 Fontainebleau cedex, France.", "title": "" }, { "docid": "8f8d97a8b6443f87bef63e8a15382185", "text": "Semantic publishing is the use of Web and Semantic Web technologies to enhance the meaning of a published journal article, to facilitate its automated discovery, to enable its linking to semantically related articles, to provide access to data within the article in actionable form, and to facilitate integration of data between articles. Recently, semantic publishing has opened the possibility of a major step forward in the digital publishing world. For this to succeed, new semantic models and visualization tools are required to fully meet the specific needs of authors and publishers. In this article, we introduce the principles and architectures of two new ontologies central to the task of semantic publishing: FaBiO, the FRBR-aligned Bibliographic Ontology, an ontology for recording and publishing bibliographic records of scholarly endeavours on the Semantic Web, and CiTO, the Citation Typing Ontology, an ontology for the characterization of bibliographic citations both factually and rhetorically. We present those two models step by step, in order to emphasise their features and to stress their advantages relative to other pre-existing information models. Finally, we review the uptake of FaBiO and CiTO within the academic and publishing communities.", "title": "" }, { "docid": "6ef52ad99498d944e9479252d22be9c8", "text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. 
Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.", "title": "" }, { "docid": "45c917e024842ff7e087e4c46a05be25", "text": "A centrifugal pump that employs a bearingless motor with 5-axis active control has been developed. In this paper, a novel bearingless canned motor pump is proposed, and differences from the conventional structure are explained. A key difference between the proposed and conventional bearingless canned motor pumps is the use of passive magnetic bearings; in the proposed pump, the amount of permanent magnets (PMs) is reduced by 30% and the length of the rotor is shortened. Despite the decrease in the total volume of PMs, the proposed structure can generate large suspension forces and high torque compared with the conventional design by the use of the passive magnetic bearings. In addition, levitation and rotation experiments demonstrated that the proposed motor is suitable for use as a bearingless canned motor pump.", "title": "" }, { "docid": "64c6012d2e97a1059161c295ae3b9cdb", "text": "One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region.\n In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 millions YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away.\n Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research.", "title": "" }, { "docid": "792907ad8871e63f6b39d344452ca66a", "text": "This paper presents the design of a hardware-efficient, low-power image processing system for next-generation wireless endoscopy. The presented system is composed of a custom CMOS image sensor, a dedicated image compressor, a forward error correction (FEC) encoder protecting radio transmitted data against random and burst errors, a radio data transmitter, and a controller supervising all operations of the system. The most significant part of the system is the image compressor. It is based on an integer version of a discrete cosine transform and a novel, low complexity yet efficient, entropy encoder making use of an adaptive Golomb-Rice algorithm instead of Huffman tables. 
The novel hardware-efficient architecture designed for the presented system enables on-the-fly compression of the acquired image. Instant compression, together with elimination of the necessity of retransmitting erroneously received data by their prior FEC encoding, significantly reduces the size of the required memory in comparison to previous systems. The presented system was prototyped in a single, low-power, 65-nm field programmable gate arrays (FPGA) chip. Its power consumption is low and comparable to other application-specific-integrated-circuits-based systems, despite FPGA-based implementation.", "title": "" }, { "docid": "f45231d78fb8a88cd70b4960a6d375f9", "text": "In this article the design and the construction of an ultrawideband (UWB) 3 dB hybrid coupler are presented. The coupler is realized in broadside stripline technology to cover the operating bandwidth 0.5 - 18 GHz (more than five octaves). Detailed electromagnetic design has been carried to optimize performances according to bandwidth. The comparison between simulations and measurements validated the design approach. The first prototype guaranteed an insertion loss lower than 5 dB and a phase shift equal to 90° +/- 5° in bandwidth", "title": "" }, { "docid": "69f0e023f4e4b7521b2f2fe5bbee3dfc", "text": "In this study, we propose a method for estimating fixation distance on the basis of measurements of vergence eye movements. The aim of this approach is to control the lens focus of automatic focusing glasses. To reduce user effort at the time of calibration, the calibration was performed at infinite distance gazing, and the parameters were determined from the premeasured pupillary distance at infinity and iris diameters. To clarify the effectiveness of the proposed method, we conducted evaluation experiments using prototype glasses. The results showed that even participants requiring myopic correction could perform accurate motion vergence movements. Fixation distance estimation showed that, with the eye calibrated at infinite distance gazing, shorter distances could be estimated with an average accuracy exceeding 90%.", "title": "" }, { "docid": "c6e001e6e4964553f9087094e221cb4c", "text": "Brain cells normally respond adaptively to bioenergetic challenges resulting from ongoing activity in neuronal circuits, and from environmental energetic stressors such as food deprivation and physical exertion. At the cellular level, such adaptive responses include the “strengthening” of existing synapses, the formation of new synapses, and the production of new neurons from stem cells. At the molecular level, bioenergetic challenges result in the activation of transcription factors that induce the expression of proteins that bolster the resistance of neurons to the kinds of metabolic, oxidative, excitotoxic, and proteotoxic stresses involved in the pathogenesis of brain disorders including stroke, and Alzheimer’s and Parkinson’s diseases. Emerging findings suggest that lifestyles that include intermittent bioenergetic challenges, most notably exercise and dietary energy restriction, can increase the likelihood that the brain will function optimally and in the absence of disease throughout life. 
Here, we provide an overview of cellular and molecular mechanisms that regulate brain energy metabolism, how such mechanisms are altered during aging and in neurodegenerative disorders, and the potential applications to brain health and disease of interventions that engage pathways involved in neuronal adaptations to metabolic stress.", "title": "" }, { "docid": "56dabbcf36d734211acc0b4a53f23255", "text": "Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology’s (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system. & 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "116c2138e2517b51cb1a01e4edbd515b", "text": "This technical demo presents a novel emotion-based music retrieval platform, called Mr. Emo, for organizing and browsing music collections. Unlike conventional approaches which quantize emotions into classes, Mr. Emo defines emotions by two continuous variables arousal and valence and employs regression algorithms to predict them. Associated with arousal and valence values (AV values), each music sample becomes a point in the arousal-valence emotion plane, so a user can easily retrieve music samples of certain emotion(s) by specifying a point or a trajectory in the emotion plane. Being content centric and functionally powerful, such emotion-based retrieval complements traditional keyword- or artist-based retrieval. The demo shows the effectiveness and novelty of music retrieval in the emotion plane.", "title": "" }, { "docid": "ca70bf377f8823c2ecb1cdd607c064ec", "text": "To date, few studies have compared the effectiveness of topical silicone gels versus that of silicone gel sheets in preventing scars. In this prospective study, we compared the efficacy and the convenience of use of the 2 products. We enrolled 30 patients who had undergone a surgical procedure 2 weeks to 3 months before joining the study. These participants were randomly assigned to 2 treatment arms: one for treatment with a silicone gel sheet, and the other for treatment with a topical silicone gel. 
Vancouver Scar Scale (VSS) scores were obtained for all patients; in addition, participants completed scoring patient questionnaires 1 and 3 months after treatment onset. Our results reveal not only that no significant difference in efficacy exists between the 2 products but also that topical silicone gels are more convenient to use. While previous studies have advocated for silicone gel sheets as first-line therapies in postoperative scar management, we maintain that similar effects can be expected with topical silicone gel. The authors recommend that, when clinicians have a choice of silicone-based products for scar prevention, they should focus on each patient's scar location, lifestyle, and willingness to undergo scar prevention treatment.", "title": "" }, { "docid": "15e07234e5f6f746138fdff4f24eea98", "text": "An unusual case of self-strangulation with an elastic band is described. The victim was a young Hispanic male with a complicated psychiatric history, including suicide attempts. Mechanisms of strangulation and mechanical asphyxial death are discussed briefly.", "title": "" }, { "docid": "99bac31f4d0df12cf25f081c96d9a81a", "text": "Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architecture to operate well, however, the residual architecture has been proved to be diverse and redundant, which may leads to low-efficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. Re-scaling the value for each channel in this structure will be determined by the residual and identity mappings jointly, and this design enables us to expand the meaning of channel relationship modeling in residual blocks. Modeling of the competition between residual and identity mappings cause the identity flow to control the complement of the residual feature maps for itself. Furthermore, we design a novel inner-imaging competitive SE block to shrink the consumption and re-image the global features of intermediate network structure, by using the inner-imaging mechanism, we can model the channel-wise relations with convolution in spatial. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method can challenge state-of-the-art results.", "title": "" }, { "docid": "0e5342c34826a8766f80edcb6258ca44", "text": "Purpose – The purpose of this paper is to offer an overall view of knowledge management challenges for global business via discussing the challenges and proposing theoretical and managerial implications. Design/methodology/approach – Based on a comprehensive literature review, the paper identifies six main knowledge management challenges faced by global business today. Then, the challenges are discussed in relation to managerial practice. Findings – The paper argues that developing a working definition of knowledge, dealing with tacit knowledge and utilization of information technology, adaptation to cultural complexity, attention to human resources, developing new organizational structures, and coping with increased competition are the main knowledge management challenges faced by global business today. Practical implications – Suggested implications include a more significant managerial emphasis on considering and dealing with the knowledge management challenges in a holistic manner, taking into account all internal and external factors influencing the knowledge-management process. 
Originality/value – The paper evaluates the critical findings of the literature within the historical progress of knowledge management, clarifies the main knowledge management challenges faced by global business organizations today, and offers a basic framework for further studies.", "title": "" }, { "docid": "edc578384d991eefa0929a1f41cfda4b", "text": "This paper investigates the use of additive layer manufacturing (ALM) for waveguide components based on two Ku-band sidearm orthomode transducers (OMT). The advantages and disadvantages of ALM manufacturing for RF waveguide components are discussed and measurement results are compared to those of an equal OMT manufactured by conventional techniques. The paper concludes with an outlook on the capability of advanced manufacturing techniques for RF space applications as well as ongoing development activities.", "title": "" }, { "docid": "6476066913e37c88e94cc83c15b05f43", "text": "Audio-visual Speech Recognition (AVSR), which employs both video and audio information to do Automatic Speech Recognition (ASR), is one application of multimodal learning that makes ASR systems more robust and accurate. Traditional models usually treated AVSR as inference or projection, but their strict priors limit their ability. With the revival of deep learning, Deep Neural Networks (DNNs) have become an important toolkit in many traditional classification tasks including ASR, image classification, and natural language processing. Some DNN models, such as Multimodal Deep Autoencoders (MDAEs), the Multimodal Deep Belief Network (MDBN) and the Multimodal Deep Boltzmann Machine (MDBM), have been used in AVSR and actually work better than traditional methods. However, such DNN models have several shortcomings: (1) they do not balance modal fusion and temporal fusion, or even lack temporal fusion; (2) their architecture is not end-to-end, which makes training and testing cumbersome. We propose a DNN model, Auxiliary Multimodal LSTM (am-LSTM), to overcome these weaknesses. The am-LSTM can be trained and tested in one pass, is easy to train, and prevents overfitting automatically. Extensibility and flexibility are also taken into consideration. The experiments show that am-LSTM is much better than traditional methods and other DNN models on three datasets: AVLetters, AVLetters2, AVDigits.", "title": "" } ]
scidocsrr
1f63bab0ec4d7fc81fb24773fa0a6873
Non-Invasive Brain-to-Brain Interface (BBI): Establishing Functional Links between Two Brains
[ { "docid": "0b7142ade987ca6f2683fc3fe6179fcb", "text": "The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution.", "title": "" } ]
[ { "docid": "4f5b25a07b34ac83eb32b65ec419aef4", "text": "A layout algorithm is presented that allows the automatic drawing of data flow diagrams, a diagrammatic representation widely used in the functional analysis of information systems. A grid standard is defined for such diagrams, and aesthetics for good readability are identified. The layout algorithm receives as input an abstract graph specifying connectivity relations between the elements of the diagram, and produces as output a corresponding diagram according to the aesthetics. The basic strategy is to build incrementally the layout; first, a good topology is constructed with few crossings between edges; subsequently, the shape of the diagram is determined in terms of angles appearing along edges. and finally dimensions are given to the graph, obtaining a grid skeleton for the diagram.", "title": "" }, { "docid": "1e77561120fd88f86cdd68d64a8ebd58", "text": "Climate warming has created favorable conditions for the range expansion of many southern Ponto-Caspian freshwater fish and mollusks through the Caspian-Volga-Baltic “invasion corridor.” Some parasites can be used as “biological tags” of migration activity and generic similarity of new host populations in the Middle and Upper Volga. The study demonstrates a low biodiversity of parasites even of the most common estuarial invaders sampled from the northern reservoir such as the Ponto-Caspian kilka Clupeonella cultriventris (16 species), tubenose goby Proterorhinus semilunaris (19 species), and round goby Neogobius (=Appollonia) malanostomus (14 species). In 2000–2010, only a few cases of a significant increase in occurrence (up to 80–100%) and abundance indexes were recorded for some nonspecific parasites such as peritricha ciliates Epistilys lwoffi, Trichodina acuta, and Ambiphrya ameiuri on the gills of the tubenose goby; the nematode Contracoecum microcephalum and the acanthocephalan Pomphorhynchus laevis from the round goby; and metacercariae of trematodes Bucaphalus polymorphus and Apophallus muehlingi from the muscles of kilka. In some water bodies, the occurrence of the trematode Bucephalus polymorphus tended to decrease after a partial replacement of its intermediate host zebra mussel Dreissena polymorpha by D. bugensi (quagga mussel). High occurrence of parthenites of Apophallus muehlingi in the mollusk Lithoglyphus naticoides was recorded in the Upper Volga (up to 70%) as compared to the Middle Volga (34%). Fry of fish with a considerable degree of muscle injury caused by the both trematode species have lower mobility and become more available food objects for birds and carnivorous fish.", "title": "" }, { "docid": "103ec725b4c07247f1a8884610ea0e42", "text": "In this paper we have introduced the notion of distance between two single valued neutrosophic sets and studied its properties. We have also defined several similarity measures between them and investigated their characteristics. A measure of entropy of a single valued neutrosophic set has also been introduced.", "title": "" }, { "docid": "7a6876aa158c9bc717bd77319f4d2494", "text": "Scripts encode knowledge of prototypical sequences of events. We describe a Recurrent Neural Network model for statistical script learning using Long Short-Term Memory, an architecture which has been demonstrated to work well on a range of Artificial Intelligence tasks. 
We evaluate our system on two tasks, inferring held-out events from text and inferring novel events from text, substantially outperforming prior approaches on both tasks.", "title": "" }, { "docid": "e1008ecca5798a7c5c6048a945b2d25d", "text": "In this paper, we show for the first time how gradient TD (GTD) reinforcement learning methods can be formally derived as true stochastic gradient algorithms, not with respect to their original objective functions as previously attempted, but rather using derived primal-dual saddle-point objective functions. We then conduct a saddle-point error analysis to obtain finite-sample bounds on their performance. Previous analyses of this class of algorithms use stochastic approximation techniques to prove asymptotic convergence, and no finite-sample analysis had been attempted. Two novel GTD algorithms are also proposed, namely projected GTD2 and GTD2-MP, which use proximal “mirror maps” to yield improved convergence guarantees and acceleration, respectively. The results of our theoretical analysis imply that the GTD family of algorithms are comparable and may indeed be preferred over existing least squares TD methods for off-policy learning, due to their linear complexity. We provide experimental results showing the improved performance of our accelerated gradient TD methods.", "title": "" }, { "docid": "b32b02b7230b6d5520e30de6b19b7496", "text": "We prove that an adiabatic theorem generally holds for slow tapers in photonic crystals and other strongly grated waveguides with arbitrary index modulation, exactly as in conventional waveguides. This provides a guaranteed pathway to efficient and broad-bandwidth couplers with, e.g., uniform waveguides. We show that adiabatic transmission can only occur, however, if the operating mode is propagating (nonevanescent) and guided at every point in the taper. Moreover, we demonstrate how straightforward taper designs in photonic crystals can violate these conditions, but that adiabaticity is restored by simple design principles involving only the independent band structures of the intermediate gratings. For these and other analyses, we develop a generalization of the standard coupled-mode theory to handle arbitrary nonuniform gratings via an instantaneous Bloch-mode basis, yielding a continuous set of differential equations for the basis coefficients. We show how one can thereby compute semianalytical reflection and transmission through crystal tapers of almost any length, using only a single pair of modes in the unit cells of uniform gratings. Unlike other numerical methods, our technique becomes more accurate as the taper becomes more gradual, with no significant increase in the computation time or memory. We also include numerical examples comparing to a well-established scattering-matrix method in two dimensions.", "title": "" }, { "docid": "a5bfeab5278eb5bbe45faac0535f0b81", "text": "In modern computer systems, system event logs have always been the primary source for checking system status. As computer systems become more and more complex, the interaction between software and hardware increases frequently. The components will generate enormous log information, including running reports and fault information. The sheer quantity of data is a great challenge for analysis relying on the manual method. 
In this paper, we implement a log information management and analysis system, which can assist system administrators in understanding the real-time status of the entire system, classifying logs into different fault types, and determining the root cause of the faults. In addition, we improve the existing fault correlation analysis method based on the results of system log classification. We apply the system in a cloud computing environment for evaluation. The results show that our system can classify fault logs automatically and effectively. With the proposed system, administrators can easily detect the root cause of faults.", "title": "" }, { "docid": "a66765e24b6cfdab2cc0b30de8afd12e", "text": "A broadband transition structure from rectangular waveguide (RWG) to microstrip line (MSL) is presented for the realization of a low-loss packaging module using low-temperature co-fired ceramic (LTCC) technology at W-band. In this transition, a cavity structure is buried in LTCC layers, which provides the wide bandwidth, and a laminated waveguide (LWG) transition is designed, which provides low-loss performance, as it reduces the radiation loss of a conventional direct transition between RWG and MSL. The design procedure is also given. The measured results show that an insertion loss of better than 0.7 dB from 86 to 97 GHz can be achieved.", "title": "" }, { "docid": "a288a610a6cd4ff32b3fff4e2124aee0", "text": "According to the survey done by IBM business consulting services in 2006, global CEOs stated that business model innovation will have a greater impact on operating margin growth than product or service innovation. We also noticed that some enterprises in China's real estate industry have improved their business models for sustainable competitive advantage and surplus profit in recent years. Based on the case studies of Shenzhen Vanke, as well as a literature review, a framework for business model innovation has been developed. The framework provides an integrated means of making sense of new business models. These include critical dimensions of new customer value propositions, technological innovation, collaboration of the business infrastructure and the economic feasibility of a new business model.", "title": "" }, { "docid": "355d4250c2091c4325903096dd5a2b61", "text": "It has been realized that resilience as a concept involves several contradictory definitions, for instance resilience as agile adjustment and resilience as robust resistance to situations. Our analysis of resilience concepts and models suggests that beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics, as well as for engineering and evaluating resilient systems.", "title": "" }, { "docid": "17e087f27a3178e46dbe14fb25027641", "text": "Social media has become an important business tool for marketers. Increased exposure and traffic are the two main benefits of social media marketing.
Most marketers are using social media to develop loyal fans and gain marketplace intelligence. Marketers reported increased benefits across all categories since 2013, and brands increased their numbers of loyal fans and sales [1]. Therefore, 2013 was a significant year for social media. The growing power of Instagram may be one of the most interesting cases. Social media is an effective channel for fashion brands, as it allows them to communicate directly with their consumers, promote various events and initiatives, and build brand awareness. With the increasing use of visual infographics and marketing practices in social media, brands have begun to show more interest in Instagram. There are also no language barriers on Instagram, and it provides visuals, which are crucial for the fashion industry. The purpose of this study is to determine and contrast the content sharing types of 10 well-known fashion brands (5 Turkish brands and 5 international brands), and to explain their approach on Instagram. Hence, the content of the Instagram accounts of those brands was examined according to post type (photo/video), content type (9 elements), number of likes and reviews, photo type (amateur/professional), shooting place (studio/outdoor/shops/etc.), and brand comments on their posts. This study provides a snapshot of how fashion brands utilize Instagram in their marketing efforts.", "title": "" }, { "docid": "066e0f4902bb4020c6d3fad7c06ee519", "text": "Automatic traffic light detection (TLD) plays an important role for driver-assistance systems and autonomous vehicles. State-of-the-art TLD systems showed remarkable results by exploring visual information from static frames. However, traffic lights from different countries, regions, and manufacturers are always visually distinct. The existing large intra-class variance makes pre-trained detectors perform well on one dataset but fail on others with different origins. On the other hand, LED traffic lights are widely used because of their better energy efficiency. Based on the observation that an LED traffic light flashes in proportion to the input AC power frequency, we propose a hybrid TLD approach which combines temporal frequency analysis and visual information using a high-speed camera. Exploiting temporal information is shown to be very effective in the experiments. It is considered to be more robust than visual information-only methods.", "title": "" }, { "docid": "5df4c47f9b1d1bffe19a622e9e3147ac", "text": "Regeneration of load-bearing segmental bone defects is a major challenge in trauma and orthopaedic surgery. The ideal bone graft substitute is a biomaterial that provides immediate mechanical stability, while stimulating bone regeneration to completely bridge defects over a short period. Therefore, selective laser melted porous titanium, designed and fine-tuned to tolerate full load-bearing, was filled with a physiologically concentrated fibrin gel loaded with bone morphogenetic protein-2 (BMP-2). This biomaterial was used to graft critical-sized segmental femoral bone defects in rats. As a control, porous titanium implants were either left empty or filled with fibrin gel without BMP-2. We evaluated bone regeneration, bone quality and mechanical strength of grafted femora using in vivo and ex vivo µCT scanning, histology, and torsion testing. This biomaterial completely regenerated and bridged the critical-sized bone defects within eight weeks. After twelve weeks, femora were anatomically re-shaped and revealed open medullary cavities.
More importantly, new bone was formed throughout the entire porous titanium implants, and grafted femora regained more than their innate mechanical stability: torsional strength exceeded twice their original strength. In conclusion, combining porous titanium implants with physiologically concentrated fibrin gel loaded with BMP-2 improved bone regeneration in load-bearing segmental defects. This material combination now awaits evaluation in larger animal models to show its suitability for grafting load-bearing defects in trauma and orthopaedic surgery.", "title": "" }, { "docid": "820727a0489e2d865288a7b5444eaa62", "text": "For networked control systems (NCSs) with short network-induced time delays, an online fault detection method is proposed. The Markov jumping model for NCSs is established under the condition that the network-induced time delay can be governed by the Markov chain. A feasible solution of the reduced-order fault detection filter is obtained based on the robust filtering method, and a non-convex optimization problem with a matrix rank constraint is formulated. A local optimal solution of the optimization problem is found based on the alternating projection method, and the parameters of the fault detection filter are presented. Finally, numerical examples show that the proposed approach can restrain the impact of the delays and detect faults quickly and effectively.", "title": "" }, { "docid": "40050e8f3ad386e4604514ec49bcb52e", "text": "Imperforate hymen is a malformation that is easy to diagnose, even in countries with limited health care coverage. Unrecognized at birth, it becomes evident at puberty because of the development of a hematocolpos, which requires surgical intervention. This situation can be avoided with a complete examination of the infant at birth. This case report describes four patients whom we saw from 1995 through 2001 at the Bangui (Central African Republic) Pediatric Center and Community Hospital.", "title": "" }, { "docid": "d27c289d63717a0b38ccde7539448fe4", "text": "We present a new, fully automatic algorithm for liver tumor segmentation in follow-up CT studies. The inputs are a baseline CT scan, a delineation of the tumors in it, and a follow-up scan; the outputs are the tumor delineations in the follow-up CT scan. The algorithm consists of four steps: 1) deformable registration of the baseline scan and tumor delineations to the follow-up CT scan; 2) automatic segmentation of the liver; 3) training a Convolutional Neural Network (CNN) as a voxel classifier on the baseline scan and tumor delineations; 4) segmentation of the tumor in the follow-up study with the learned classifier. The main novelty of our method is the combination of follow-up based detection with CNN-based segmentation. Our experimental results on 67 tumors from 21 patients with ground-truth segmentations approved by a radiologist yield an average overlap error of 16.26% (std=10.33).", "title": "" }, { "docid": "03cd6ef0cc0dab9f33b88dd7ae4227c2", "text": "The dopaminergic system plays a pivotal role in the central nervous system via its five diverse receptors (D1–D5). Dysfunction of the dopaminergic system is implicated in many neuropsychological diseases, including attention deficit hyperactivity disorder (ADHD), a common mental disorder that is prevalent in childhood. Understanding the relationship of the five different dopamine (DA) receptors with ADHD will help us to elucidate the different roles of these receptors and to develop therapeutic approaches for ADHD.
This review summarized the ongoing research of DA receptor genes in ADHD pathogenesis and gathered the past published data with meta-analysis and revealed the high risk of DRD5, DRD2, and DRD4 polymorphisms in ADHD.", "title": "" }, { "docid": "497d72ce075f9bbcb2464c9ab20e28de", "text": "Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon.", "title": "" }, { "docid": "2923e6f0760006b6a049a5afa297ca56", "text": "Six years ago in this journal we discussed the work of Arthur T. Murray, who endeavored to explore artificial intelligence using the Forth programming language [1]. His creation, which he called MIND.FORTH, was interesting in its ability to understand English sentences in the form: subject-verb-object. It also had the capacity to learn new things and to form mental associations between recent experiences and older memories. In the intervening years, Mr. Murray has continued to develop his MIND.FORTH: he has translated it into Visual BASIC, PERL and Javascript, he has written a book [2] on the subject, and he maintains a wiki web site where anyone may suggest changes or extensions to his design [3]. MIND.FORTH is necessarily complex and opaque by virtue of its functionality; therefore it may be challenging for a newcomer to grasp. However, the more dedicated student will find much of value in this code. Murray himself has become quite a controversial figure.", "title": "" } ]
scidocsrr
eb3189bb764aa601560feb37f0ab2f9b
Word-Level Confidence Estimation for Machine Translation
[ { "docid": "c76f00a8fa53c307da2d464d060a171f", "text": "The field of speech recognition has clearly benefited from precisely defined testing conditions and objective performance measures such as word error rate. In the development and evaluation of new methods, the question arises whether the empirically observed difference in performance is due to a genuine advantage of one system over the other, or just an effect of chance. However, many publications still do not concern themselves with the statistical significance of the results reported. We present a bootstrap method for significance analysis which is, at the same time, intuitive, precise and and easy to use. Unlike some methods, we make no (possibly ill-founded) approximations and the results are immediately interpretable in terms of word error rate.", "title": "" }, { "docid": "724388aac829af9671a90793b1b31197", "text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.", "title": "" } ]
[ { "docid": "6fe0408bc012bdcb0d927bba87666168", "text": "In this paper, we describe a new structure for designing low power potentiostats, which are suitable for electrochemical sensors used in biomedical implants. The low power consumption is a result of using just one operational amplifier in the structure. The structure is also inherently very low noise because it amplifies the output current of the sensor in current mode which can then be converted to the desirable variable; i.e., voltage, frequency, pulse width, etc. Finally we present a new topology for the design of a low power operational amplifier dedicated to driving super capacitive chemical sensors.", "title": "" }, { "docid": "2c9cfc7bf3b88f27046b9366b6053867", "text": "The purpose of this thesis project is to study and evaluate a UWB Synthetic Aperture Radar (SAR) data image formation algorithm, that was previously less familiar and, that has recently got much attention in this field. Certain properties of it made it acquire a status in radar signal processing branch. This is a fast time-domain algorithm named Local Backprojection (LBP). The LBP algorithm has been implemented for SAR image formation. The algorithm has been simulated in MATLAB using standard values of pertinent parameters. Later, an evaluation of the LBP algorithm has been performed and all the comments, estimation and judgment have been done on the basis of the resulting images. The LBP has also been compared with the basic time-domain algorithm Global Backprojection (GBP) with respect to the SAR images. The specialty of LBP algorithm is in its reduced computational load than in GBP. LBP is a two-stage algorithm — it forms the beam first for a particular subimage and, in a later stage, forms the image of that subimage area. The signal data collected from the target is processed and backprojected locally for every subimage individually. This is the reason of naming it Local backprojection. After the formation of all subimages, these are arranged and combined coherently to form the full SAR image.", "title": "" }, { "docid": "22e0999378d14c695b70f136207f66b9", "text": "Narcissism is associated with morally questionable behavior in the workplace, but little is known about the role of specific dimensions of narcissism or the mechanism behind these effects. The current study assessed academic dishonesty among college students. One hundred and ninety-nine participants either self-reported or reported others’ cheating behavior and completed the Narcissistic Personality Inventory (NPI; Raskin & Terry, 1988). The exhibitionism dimension of the NPI predicted greater cheating; this effect was explained by the lack of guilt. The effects of exhibitionism held for the self but not other-report conditions, highlighting the key role of the self in narcissism. Findings held when controlling for relevant demographic variables and other narcissism factors. Thus the narcissists’ ambitions for their own academic achievement lead to cheating in school, facilitated by a lack of guilt for their immoral behavior. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d7e794a106f29f5ebe917c2e7b6007eb", "text": "In this paper, several recent theoretical conceptions of technology-mediated education are examined and a study of 2159 online learners is presented. The study validates an instrument designed to measure teaching, social, and cognitive presence indicative of a community of learners within the community of inquiry (CoI) framework [Garrison, D. 
R., Anderson, T., & Archer, W. (2000). Critical inquiry in a textbased environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 1–19; Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7–23]. Results indicate that the survey items cohere into interpretable factors that represent the intended constructs. Further it was determined through structural equation modeling that 70% of the variance in the online students’ levels of cognitive presence, a multivariate measure of learning, can be modeled based on their reports of their instructors’ skills in fostering teaching presence and their own abilities to establish a sense of social presence. Additional analysis identifies more details of the relationship between learner understandings of teaching and social presence and its impact on their cognitive presence. Implications for online teaching, policy, and faculty development are discussed. ! 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "93064713fe271a9e173d790de09f2da6", "text": "Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.", "title": "" }, { "docid": "b01c62a4593254df75c1e390487982fa", "text": "This paper addresses the question \"why and how is it that we say the same thing differently to different people, or even to the same person in different circumstances?\" We vary the content and form of our text in order to convey more information than is contained in the literal meanings of our words. This information expresses the speaker's interpersonal goals toward the hearer and, in general, his or her perception of the pragmatic aspects of the conversation. This paper discusses two insights that arise when one studies this question: the existence of a level of organization that mediates between communicative goals and generator decisions, and the interleaved planningrealization regime and associated monitoring required for generation. To illustrate these ideas, a computer program is described which contains plans and strategies to produce stylistically appropriate texts from a single representation under various settings that model pragmatic circumstances.", "title": "" }, { "docid": "897fb39d295defc4b6e495236a2c74b1", "text": "Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. 
Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields state-of-the-art quantitative results.", "title": "" }, { "docid": "1c8f6d6c599d19f61c7e384d06ee6b09", "text": "In this paper an approach for obtaining unique solutions to forward and inverse kinematics of a spherical parallel manipulator (SPM) system with revolute joints is proposed. Kinematic analysis of a general SPM with revolute joints is revisited and the proposed approach is formulated in the form of easy-to-follow algorithms that are described in detail. A graphical verification method using SPM computer-aided-design (CAD) models is presented together with numerical and experimental examples that confirm the correctness of the proposed approach. It is expected that this approach can be applied to SPMs with different geometries and can be useful in designing real-time control systems of SPMs.", "title": "" }, { "docid": "c13254d6dfde0cbb195dc36587114e15", "text": "Recently, detecting the traces introduced by content-preserving image manipulations has received a great deal of attention from forensic analysts. It is well known that the median filter is a widely used nonlinear denoising operator. Therefore, the detection of median filtering is of considerable practical significance in image forensics. In this letter, a novel local texture operator, named the second-order local ternary pattern (LTP), is proposed for median filtering detection. The proposed local texture operator encodes the local derivative direction variations by using a 3-valued coding function and is capable of effectively capturing the changes of local texture caused by median filtering. In addition, kernel principal component analysis (KPCA) is exploited to reduce the dimensionality of the proposed feature set, making the computational cost manageable. The experimental results have shown that the proposed scheme performs better than several state-of-the-art approaches investigated.", "title": "" }, { "docid": "ca88e6aab6f65f04bfc7a7eb470a31e1", "text": "We construct protocols for secure multiparty computation with the help of a computationally powerful party, namely the “cloud”. Our protocols are simultaneously efficient in a number of metrics: • Rounds: our protocols run in 4 rounds in the semi-honest setting, and 5 rounds in the malicious setting. • Communication: the number of bits exchanged in an execution of the protocol is independent of the complexity of the function f being computed, and depends only on the length of the inputs and outputs. • Computation: the computational complexity of all parties is independent of the complexity of the function f , whereas that of the cloud is linear in the size of the circuit computing f .
In the semi-honest case, our protocol relies on the “ring learning with errors” (RLWE) assumption, whereas in the malicious case, security is shown under the Ring LWE assumption as well as the existence of simulation-extractable NIZK proof systems and succinct non-interactive arguments. In the malicious setting, we also relax the communication and computation requirements above, and only require that they be “small” – polylogarithmic in the computation size and linear in the joint size of the inputs. Our constructions leverage the key homomorphic property of the recent fully homomorphic encryption scheme of Brakerski and Vaikuntanathan (CRYPTO 2011, FOCS 2011). Namely, these schemes allow combining encryptions of messages under different keys to produce an encryption (of the sum of the messages) under the sum of the keys. We also design an efficient, non-interactive threshold decryption protocol for these fully homomorphic encryption schemes.", "title": "" }, { "docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c", "text": "Recent years have seen significant progress in information retrieval and natural language processing, with deep learning technologies being successfully applied to almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangements of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future. The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in detail. The tasks are search, question answering (from either documents, databases, or knowledge bases), and image retrieval.", "title": "" }, { "docid": "7267e5082c890dfa56a745d3b28425cc", "text": "Natural Orifice Translumenal Endoscopic Surgery (NOTES) has recently attracted considerable attention, promising surgical procedures with fewer complications, better cosmesis, less pain and faster recovery. Several robotic systems were developed aiming to enable abdominal surgeries in a NOTES manner.
Although these robotic systems demonstrated the surgical concept, the characteristics which could fully enable NOTES procedures remain unclear. This paper presents the development of an endoscopic continuum testbed for finalizing the system characteristics of a surgical robot for NOTES procedures, which include i) deployability (the testbed can be deployed in a folded endoscope configuration and then be unfolded into a working configuration), ii) adequate workspace, iii) sufficient distal dexterity (e.g. suturing capability), and iv) desired mechanical properties (e.g. sufficient load-carrying capability). Continuum mechanisms were implemented in the design, and a diameter of 12 mm was achieved for this testbed in its endoscope configuration. The results of this paper could be used to form design references for the future development of NOTES robots.", "title": "" }, { "docid": "853ac793e92b97d41e5ef6d1bc16d504", "text": "We present a systematic study of parameters used in the construction of semantic vector space models. Evaluation is carried out on a variety of similarity tasks, including a compositionality dataset, using several source corpora. In addition to recommendations for optimal parameters, we present some novel findings, including a similarity metric that outperforms the alternatives on all tasks considered.", "title": "" }, { "docid": "0814c6823460ee2adddb0cb590a57441", "text": "Intracellular signaling pathways are reliant on protein phosphorylation events that are controlled by a balance of kinase and phosphatase activity. Although kinases have been extensively studied, the role of phosphatases in controlling specific cell signaling pathways has been less so. Leukocyte common antigen-related protein (LAR) is a member of the LAR subfamily of receptor-like protein tyrosine phosphatases (RPTPs). LAR is known to regulate the activity of a number of receptor tyrosine kinases, including platelet-derived growth factor receptor (PDGFR). To gain insight into the signaling pathways regulated by LAR, including those that are PDGF-dependent, we have carried out the first systematic analysis of LAR-regulated signal transduction using SILAC-based quantitative proteomic and phosphoproteomic techniques. We have analyzed differential phosphorylation between wild-type mouse embryo fibroblasts (MEFs) and MEFs in which the LAR cytoplasmic phosphatase domains had been deleted (LARΔP), and found a significant change in abundance of phosphorylation on 270 phosphosites from 205 proteins because of the absence of the phosphatase domains of LAR. Further investigation of specific LAR-dependent phosphorylation sites and enriched biological processes reveals that LAR phosphatase activity impacts on a variety of cellular processes, most notably regulation of the actin cytoskeleton. Analysis of putative upstream kinases that may play an intermediary role between LAR and the identified LAR-dependent phosphorylation events has revealed a role for LAR in regulating mTOR and JNK signaling.", "title": "" }, { "docid": "58e0e5c5a8fdbb14403173600f551a9b", "text": "Charisma, the ability to command authority on the basis of personal qualities, is more difficult to define than to identify. How do charismatic leaders such as Fidel Castro or Pope John Paul II attract and retain their followers? We present results of an analysis of subjective ratings of charisma from a corpus of American political speech. We identify the associations between charisma ratings and ratings of other personal attributes.
We also examine acoustic/prosodic and lexical features of this speech and correlate these with charisma ratings.", "title": "" }, { "docid": "77a92d896da31390bb0bd0c593361c6b", "text": "Non-inflammatory cystic lesions of the pancreas are increasingly recognized. Two distinct entities have been defined, i.e., intraductal papillary mucinous neoplasm (IPMN) and mucinous cystic neoplasm (MCN). Ovarian-type stroma has been proposed as a requisite to distinguish MCN from IPMN. Some other distinct features to characterize IPMN and MCN have been identified, but there remain ambiguities between the two diseases. In view of the increasing frequency with which these neoplasms are being diagnosed worldwide, it would be helpful for physicians managing patients with cystic neoplasms of the pancreas to have guidelines for the diagnosis and treatment of IPMN and MCN. The proposed guidelines represent a consensus of the working group of the International Association of Pancreatology.", "title": "" }, { "docid": "44491cab59a3f26d559edce907c50fd3", "text": "Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Building on recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information, and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography.", "title": "" }, { "docid": "f4d85ad52e37bd81058bfff830a52f0a", "text": "A number of antioxidants and trace minerals have important roles in immune function and may affect health in transition dairy cows. Vitamin E and beta-carotene are important cellular antioxidants. Selenium (Se) is involved in the antioxidant system via its role in the enzyme glutathione peroxidase. Inadequate dietary vitamin E or Se decreases neutrophil function during the periparturient period. Supplementation of vitamin E and/or Se has reduced the incidence of mastitis and retained placenta, and reduced the duration of clinical symptoms of mastitis in some experiments. Research has indicated that beta-carotene supplementation may enhance immunity and reduce the incidence of retained placenta and metritis in dairy cows. Marginal copper deficiency resulted in reduced neutrophil killing and decreased interferon production by mononuclear cells. Copper supplementation of a diet marginal in copper reduced the peak clinical response during experimental Escherichia coli mastitis. Limited research indicated that chromium supplementation during the transition period may increase immunity and reduce the incidence of retained placenta.", "title": "" }, { "docid": "63063c0a2b08f068c11da6d80236fa87", "text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS).
Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in an LR video. To achieve high-quality reconstruction of HR details for an LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time-coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture-synthesis-based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.", "title": "" } ]
scidocsrr
26540b5ac1ce3dc2254bf1812a088a0e
An Overview of Grounded Theory Design in Educational Research
[ { "docid": "481018ae479f8a6b8669972156d234d6", "text": "AIM\nThis paper is a report of a discussion of the arguments surrounding the role of the initial literature review in grounded theory.\n\n\nBACKGROUND\nResearchers new to grounded theory may find themselves confused about the literature review, something we ourselves experienced, pointing to the need for clarity about use of the literature in grounded theory to help guide others about to embark on similar research journeys.\n\n\nDISCUSSION\nThe arguments for and against the use of a substantial topic-related initial literature review in a grounded theory study are discussed, giving examples from our own studies. The use of theoretically sampled literature and the necessity for reflexivity are also discussed. Reflexivity is viewed as the explicit quest to limit researcher effects on the data by awareness of self, something seen as integral both to the process of data collection and the constant comparison method essential to grounded theory.\n\n\nCONCLUSION\nA researcher who is close to the field may already be theoretically sensitized and familiar with the literature on the study topic. Use of literature or any other preknowledge should not prevent a grounded theory arising from the inductive-deductive interplay which is at the heart of this method. Reflexivity is needed to prevent prior knowledge distorting the researcher's perceptions of the data.", "title": "" }, { "docid": "e0edee10df7529ef31c1941075461963", "text": "Although grounded theory and qualitative content analysis are similar in some respects, they differ as well; yet the differences between the two have rarely been made clear in the literature. The purpose of this article was to clarify ambiguities and reduce confusion about grounded theory and qualitative content analysis by identifying similarities and differences in the two based on a literature review and critical reflection on the authors’ own research. Six areas of difference emerged: (a) background and philosophical base, (b) unique characteristics of each method, (c) goals and rationale of each method, (d) data analysis process, (e) outcomes of the research, and (f) evaluation of trustworthiness. This article provides knowledge that can assist researchers and students in the selection of appropriate research methods for their inquiries.", "title": "" } ]
[ { "docid": "9cb383f53922a89bc356ba1dbd9f1fe1", "text": "We develop the first approximate inference algorithm for 1-Best (and M-Best) decoding in bidirectional neural sequence models by extending Beam Search (BS) to reason about both forward and backward time dependencies. Beam Search (BS) is a widely used approximate inference algorithm for decoding sequences from unidirectional neural sequence models. Interestingly, approximate inference in bidirectional models remains an open problem, despite their significant advantage in modeling information from both the past and future. To enable the use of bidirectional models, we present Bidirectional Beam Search (BiBS), an efficient algorithm for approximate bidirectional inference. To evaluate our method and as an interesting problem in its own right, we introduce a novel Fill-in-the-Blank Image Captioning task which requires reasoning about both past and future sentence structure to reconstruct sensible image descriptions. We use this task as well as the Visual Madlibs dataset to demonstrate the effectiveness of our approach, consistently outperforming all baseline methods.", "title": "" }, { "docid": "dba103d14ee756dc1cc3002ae481d1d9", "text": "This book chapter describes an approach of evaluating user experience in video games and advanced interaction games (tabletop games) by using heuristics. We provide a short overview of computer games with a focus on advanced interaction games and explain the concept of user-centred design for games. Furthermore we describe the history of heuristics for video games and the role of user experience of games in general. We propose a framework consisting of three sets of heuristics (game play/game story, virtual interface and tabletop specific) to detect the most critical issues in video games as well as advanced interaction games. To assess its applicability we compare the results of expert evaluations of five current games with the user experience-based ratings of various game review sites. In the conclusion we provide an outlook on possible extensions of our approach.", "title": "" }, { "docid": "80fbb743aa5b9e49378dfa38961f9dec", "text": "We demonstrated a W-band high-power-density MMIC power amplifier with 80 nm InAlGaN/GaN HEMTs. The MMIC consists of two-stage cascade units, each of which has two transistors with the same gate periphery for a high gain and low-loss matching circuit. The MMIC achieved a maximum output power of 1.15 W and maximum PAE of 12.3 % at 86 GHz under CW operation. Its power density reached 3.6 W/mm, representing the highest performance of the W-band GaN HEMT MMIC power amplifier.", "title": "" }, { "docid": "7e7651261be84e2e05cde0ac9df69e6d", "text": "Searching a large database to find a sequence that is most similar to a query can be prohibitively expensive, particularly if individual sequence comparisons involve complex operations such as warping. To achieve scalability, \"pruning\" heuristics are typically employed to minimize the portion of the database that must be searched with more complex matching. We present an approximate pruning technique which involves embedding sequences in a Euclidean space. Sequences are embedded using a convolutional network with a form of attention that integrates over time, trained on matching and non-matching pairs of sequences. By using fixed-length embeddings, our pruning method effectively runs in constant time, making it many orders of magnitude faster than full dynamic time warping-based matching for large datasets. 
We demonstrate our approach on a large-scale musical score-to-audio recording retrieval task.", "title": "" }, { "docid": "5572ab4560ef280e72c50d8def00e4ab", "text": "Methylation of N6-adenosine (m6A) has been observed in many different classes of RNA, but its prevalence in microRNAs (miRNAs) has not yet been studied. Here we show that a knockdown of the m6A demethylase FTO affects the steady-state levels of several miRNAs. Moreover, RNA immunoprecipitation with an anti-m6A-antibody followed by RNA-seq revealed that a significant fraction of miRNAs contains m6A. By motif searches we have discovered consensus sequences discriminating between methylated and unmethylated miRNAs. The epigenetic modification of an epigenetic modifier as described here adds a new layer to the complexity of the posttranscriptional regulation of gene expression.", "title": "" }, { "docid": "375d5fcb41b7fb3a2f60822720608396", "text": "We present a full-stack design to accelerate deep learning inference with FPGAs. Our contribution is two-fold. At the software layer, we leverage and extend TVM, the end-to-end deep learning optimizing compiler, in order to harness FPGA-based acceleration. At the hardware layer, we present the Versatile Tensor Accelerator (VTA), which provides a generic, modular, and customizable architecture for TPU-like accelerators. Our results take a ResNet-18 description in MxNet and compile it down to perform 8-bit inference on a 256-PE accelerator implemented on a low-cost Xilinx Zynq FPGA, clocked at 100MHz. Our full hardware acceleration stack will be made available for the community to reproduce, and build upon at http://github.com/uwsaml/vta.", "title": "" }, { "docid": "7fed6f57ba2e17db5986d47742dc1a9c", "text": "Partial Least Squares Regression (PLSR) is a linear regression technique developed to deal with high-dimensional regressors and one or several response variables. In this paper we introduce robustified versions of the SIMPLS algorithm, which is the leading PLSR algorithm because of its speed and efficiency. Because SIMPLS is based on the empirical cross-covariance matrix between the response variables and the regressors and on linear least squares regression, the results are affected by abnormal observations in the data set. Two robust methods, RSIMCD and RSIMPLS, are constructed from a robust covariance matrix for high-dimensional data and robust linear regression. We introduce robust RMSECV and RMSEP values for model calibration and model validation. Diagnostic plots are constructed to visualize and classify the outliers. Several simulation results and the analysis of real data sets show the effectiveness and the robustness of the new approaches. Because RSIMPLS is roughly twice as fast as RSIMCD, it stands out as the overall best method.", "title": "" }, { "docid": "c157b149d334b2cc1f718d70ef85e75e", "text": "The large inter-individual variability within the normal population, the limited reproducibility due to habituation or fatigue, and the impact of instruction and the subject's motivation, all constitute a major problem in posturography. These aspects hinder reliable evaluation of the changes in balance control in the case of disease and complicate objectivation of the impact of therapy and sensory input on balance control.
In this study, we examine whether measurement of balance control near individualized limits of stability and under very challenging sensory conditions might reduce inter- and intra-individual variability compared to the well-known Sensory Organization Test (SOT). To do so, subjects balance on a platform on which instability increases automatically until body orientation or body sway velocity surpasses a safety limit. The maximum tolerated platform instability is then used as a measure for balance control under 10 different sensory conditions. Ninety-seven healthy subjects and 107 patients suffering from chronic dizziness (whiplash syndrome (n = 25), Meniere's disease (n = 28), acute (n = 28) or gradual (n = 26) peripheral function loss) were tested. In both healthy subjects and patients this approach resulted in a low intra-individual variability (< 14.5%). In healthy subjects and patients, balance control was maximally affected by closure of the eyes and by vibration of the Achilles' tendons. The other perturbation techniques applied (sway referenced vision or platform, cooling of the foot soles) were less effective. Combining perturbation techniques reduced balance control even more, but the effect was less than the linear summation of the effect induced by the techniques applied separately. The group averages of healthy subjects show that vision contributed maximum 37%, propriocepsis minimum 26%, and labyrinths maximum 44% to balance control in healthy subjects. However, a large inter-individual variability was observed. Balance control of each patient group was less than in healthy subjects in all sensory conditions. Similar to healthy subjects, patients also show a large inter-individual variability, which results in a low sensitivity of the test. With the exception of some minor differences between Whiplash and Meniere patients, balance control did not differ between the four patient groups. This points to a low specificity of the test. Balance control was not correlated with the outcome of the standard vestibular examination. This study strengthens our notion that the contribution of the sensory inputs to balance control differs considerably per individual and may simply be due to differences in the vestibular function related to the specific pathology, but also to differences in motor learning strategies in relation to daily life requirements. It is difficult to provide clinically relevant normative data. We conclude that, like the SOT, the current test is merely a functional test of balance with limited diagnostic value.", "title": "" }, { "docid": "13425c8273119fc36218d7b8d240b056", "text": "The factor of safety for slopes (FS) has been traditionally evaluated using two-dimensional limit equilibrium methods (LEM). However the FS of a slope can also be computed with FLAC by reducing the soil shear strength in stages until the slope fails. This method is called the shear strength reduction technique (SSR). Many authors have pointed out several advantages of SSR over the limit equilibrium methods. But usually they checked the effectiveness of SSR on rather small models of simple geometry. In this study, the accuracy of the SSR was investigated through comparisons with limit analysis solutions.
FS estimated by SSR was compared with FS obtained from Fellenius, Bishop, Morgenstern-Price and Janbu.", "title": "" }, { "docid": "c1d9f361e9818e9328148563a7307444", "text": "In the domain of online advertising, our aim is to serve the best ad to a user who visits a certain webpage, to maximize the chance of a desired action to be performed by this user after seeing the ad. While it is possible to generate a different prediction model for each user to tell if he/she will act on a given ad, the prediction result typically will be quite unreliable with huge variance, since the desired actions are extremely sparse, and the set of users is huge (hundreds of millions) and extremely volatile, i.e., a lot of new users are introduced everyday, or are no longer valid. In this paper we aim to improve the accuracy in finding users who will perform the desired action, by assigning each user to a cluster, where the number of clusters is much smaller than the number of users (in the order of hundreds). Each user will fall into the same cluster with another user if their event history are similar. For this purpose, we modify the probabilistic latent semantic analysis (pLSA) model by assuming the independence of the user and the cluster id, given the history of events. This assumption helps us to identify a cluster of a new user without re-clustering all the users. We present the details of the algorithm we employed as well as the distributed implementation on Hadoop, and some initial results on the clusters that were generated by the algorithm.", "title": "" }, { "docid": "a582cfeb4833708d3d8a166bf5adce90", "text": "OBJECTIVE\nThe purpose of this study is to describe the prevalence, morphology, size, and location of left atrial abnormalities including diverticula and accessory appendages in consecutive patients undergoing cardiac-gated CT for coronary artery evaluation.\n\n\nMATERIALS AND METHODS\nRoutine retrospectively gated contrast-enhanced 64-MDCT angiography (0.75-mm collimation, 330-milliseconds gantry rotation time) was performed in 529 consecutive patients. CT data sets were evaluated using axial, sagittal, coronal, and interactive multiplanar reconstructions; maximum intensity projections (MIPs); and interactive volume rendering. The presence, type, and location of left atrial appendages and diverticula were recorded.\n\n\nRESULTS\nOne hundred twenty-one patients had left atrial accessory appendages (n = 20) or left atrial diverticula (n = 81) or both (n = 20). One hundred four left atrial diverticula were found in 101 of the 529 patients (20%) and 44 accessory appendages in 41 patients (8%). Of the atrial diverticula, 88% were superior and anterior, 9% were right lateral superior, and 3% were inferior. Of accessory appendages, 34% were inferior posterior, 32% were left inferior, 18% were superior anterior, 14% were inferior posterior, and 2% were right inferior posterior. The average sizes of diverticula were 6.4 +/- 2.5 x 6.2 +/- 2.4 mm, and accessory appendages were 4.9 +/- 2.1 x 3.9 +/- 2.4 mm.\n\n\nCONCLUSION\nLeft atrial diverticula and accessory appendages are commonly found on cardiac-gated CT.", "title": "" }, { "docid": "00cdaa724f262211919d4c7fc5bb0442", "text": "With Tor being a popular anonymity network, many attacks have been proposed to break its anonymity or leak information of a private communication on Tor. However, guaranteeing complete privacy in the face of an adversary on Tor is especially difficult because Tor relays are under complete control of world-wide volunteers. 
Currently, one can gain private information, such as circuit identifiers and hidden service identifiers, by running Tor relays and can even modify their behaviors with malicious intent. This paper presents a practical approach to effectively enhancing the security and privacy of Tor by utilizing Intel SGX, a commodity trusted execution environment. We present a design and implementation of Tor, called SGX-Tor, that prevents code modification and limits the information exposed to untrusted parties. We demonstrate that our approach is practical and effectively reduces the power of an adversary to a traditional network-level adversary. Finally, SGX-Tor incurs moderate performance overhead; the end-to-end latency and throughput overheads for HTTP connections are 3.9% and 11.9%, respectively.", "title": "" }, { "docid": "c621f8fb5ea935707aae0b8b7fa21301", "text": "Several database systems have implemented temporal data support, partly according to the model specified in the last SQL standard and partly according to other, older temporal models. In this article we use the most important temporal concepts to investigate their implementations in enterprise database systems. Also, we discuss strengths and weaknesses of these implementations and give suggestions for future extensions.", "title": "" }, { "docid": "146387ae8853279d21f0b4c2f9b3e400", "text": "We address a class of manipulation problems where the robot perceives the scene with a depth sensor and can move its end effector in a space with six degrees of freedom – 3D position and orientation. Our approach is to formulate the problem as a Markov decision process (MDP) with abstract yet generally applicable state and action representations. Finding a good solution to the MDP requires adding constraints on the allowed actions. We develop a specific set of constraints called hierarchical SE(3) sampling (HSE3S) which causes the robot to learn a sequence of gazes to focus attention on the task-relevant parts of the scene. We demonstrate the effectiveness of our approach on three challenging pick-place tasks (with novel objects in clutter and nontrivial places) both in simulation and on a real robot, even though all training is done in simulation.", "title": "" }, { "docid": "084b2787a6b79de789334c4dc8c14702", "text": "Renewable energy is a key technology in reducing global carbon dioxide emissions. Currently, penetration of intermittent renewable energies in most power grids is low, such that the impact of renewable energy's intermittency on grid stability is controllable. Utility scale energy storage systems can enhance stability of power grids with increasing share of intermittent renewable energies. With the grid communication network in smart grids, mobile battery systems in battery electric vehicles and plug-in hybrid electric vehicles can also be used for energy storage and ancillary services in smart grids. This paper will review the stationary and mobile battery systems for grid voltage and frequency stability control in smart grids with increasing shares of intermittent renewable energies. An optimization algorithm on vehicle-to-grid operation will also be presented.", "title": "" }, { "docid": "010fd9fcd9afb973a1930fbb861654c9", "text": "We show that the Winternitz one-time signature scheme is existentially unforgeable under adaptive chosen message attacks when instantiated with a family of pseudorandom functions. 
Our result halves the signature size at the same security level, compared to previous results, which require a collision resistant hash function. We also consider security in the strong sense and show that the Winternitz one-time signature scheme is strongly unforgeable assuming additional properties of the pseudorandom function family. In this context we formally define several key-based security notions for function families and investigate their relation to pseudorandomness. All our reductions are exact and in the standard model and can directly be used to estimate the output length of the hash function required to meet a certain security level.", "title": "" }, { "docid": "39180c1e2636a12a9d46d94fe3ebfa65", "text": "We present a novel machine learning based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation, and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.", "title": "" }, { "docid": "4d01ae932a42807760cd81605edc0cf2", "text": "A dual mass vibratory gyroscope sensor demonstrates the quadrature frequency modulated (QFM) operating mode, where the frequency of the circular orbit of a proof mass is measured to detect angular rate. In comparison to the mode-matched open loop rate mode, the QFM mode receives the same benefit of improved SNR but without the penalties of unreliable scale factor and decreased bandwidth. A matched pair of gyroscopes, integrated onto the same die, is used for temperature compensation, resulting in 6 ppb relative frequency tracking error, or an Allan deviation of 370 deg/hr with a 70 kHz resonant frequency. The integrated CMOS electronics achieve a capacitance resolution of 0.1 zF/rt-Hz with nominal 6 fF sense electrodes.", "title": "" }, { "docid": "028a64817ebf6975a55de11213b16eb6", "text": "This paper is the result of research conducted in response to a client’s question of the importance of color in their new school facility. The resulting information is a compilation of studies conducted by color psychologists, medical and design professionals. Introduction From psychological reactions to learned cultural interpretations, human reaction and relationship to color is riddle with complexities. The variety of nuances, however, does not dilute the amazing power of color on humans and its ability to enhance our experience of the learning environment. 
To formulate a better understanding of color’s impact, one must first form a basic understanding of Carl Jung’s theory of the collective unconscious. According to Jung, all of us are born with a basic psyche that can later be differentiated based upon personal experience. This basic psyche reflects the evolutionary traits that have helped humans to survive throughout history. For example, an infant has a pre-disposed affinity for two dark spots next to each other, an image that equals their visual interpretation of a human face. This affinity for the shapes is not learned, but preprogrammed into the collective unconscious of all human children. Just as we are programmed to identify with the human face, our body has a basic interpretation and reaction to certain colors. As proven in recent medical studies, however, the psychological reaction to color does not preclude the basic biological reaction that stems from human evolution. The human ability to see a wide range of color and our reaction to color is clearly articulated in Frank Mahnke’s color pyramid. The pyramid lists six levels of our color experience in an increasingly personalized interpretation. The clear hierarchy of the graphic, however, belies the immediate impact that mood, age and life experiences play in the moment to moment personal interpretation of color. Balancing the research of color interpretation with these personal interpretations becomes the designer’s task as environmental color choices are made. Understanding our Biological Processing When discussing color experiences in terms of the physical reactions of blood pressure, eyestrain and brain development, the power and importance of a well-designed environment crosses cultural and personal barriers. It does not cancel the importance of these experiences, but it does provide an objective edge to the argument for careful color application in an often subjective decisionmaking realm. Color elicits a total response from human beings because the energy produced by the light that carries color effects our body functions and influences our mind and emotion. In 1976, Rikard Kuller demonstrated how color and visual patterning affects not only the cortex but also the entire central nervous system1. Color has been shown to alter the level of alpha brain wave activity, which is used in the medical field to measure human alertness. In addition, it has been found that when color is transmitted through the human eye, the brain releases the hormone, hypothalamus, which affects our moods, mental clarity and energy level. Experiencing color, however, is not limited to our visual comprehension of hues. In a study conducted by Harry Wohlfarth and Catharine Sam of the University of Alberta, they learned that the change in the color environment of 14 severally handicapped and behaviorlly disturbed 8-11 Wednesday June 18, 2003 NeoCON The Impact of Color on Learning year olds resulted in a drop in blood pressure and reduction in aggressive behavior in both blind and sighted children. This passage of the benefits of varying color’s energy is plausible when one considers that color is after all light waves that bounce around and are absorbed by all surfaces. Further study by Antonio F. 
Torrice, resulted in his thesis that specific colors impact certain physical systems in the human body.2 In Torrice’s study, he proposes that the following systems are influenced by these particular hues: Motor Skill Activity – Red, Circulatory System – Orange, Cardiopulmonary – Yellow, Speech Skill Activity – Green, Eyes, Ears and Nose – Blue, Nonverbal Activity – Violet. The analysis of our biological reaction and processing of color is quickly linked to the psychological reactions that often simultaneously manifest themselves. The psychological reactions to color are particularly apparent in the qualitative descriptions (anxiety, aggression, sadness, quiet) offered in color analysis. Introducing Color in Schools When discussing color with school districts it is important to approach color choices as functional color rather than from a standpoint of aesthetics. Functional color focuses on using color to achieve an end result such as increased attention span and lower levels of eye fatigue. These color schemes are not measured by criteria of beauty but rather by tangible evidence.3 The following are the results of a variety of tests conducted on the impact of color in the environment. Viewed together, the results of these studies demonstrate a basic guideline for designers when evaluating color applications for schools. The tests do not offer a definitive color scheme for each school environment, but provide the functional guidelines and reasons why color is an important element in school interiors. Relieves eye fatigue: Eye strain is a medical ailment diagnosed by increased blinking, dilation of the pupil when light intensity is static, reduction in the ability to focus on clear objects and an inability to distinguish small differences in brightness. End wall treatments in a classroom can help to reduce instances of eyestrain for students by helping the eye to relax as students look up from a task. Studies suggest that the end wall colors should be a medium hue with the remaining walls a neutral tint such as Oyster white, Sandstone or Beige. The end wall treatment also helps to relieve the visual monotony of a classroom and stimulate a student’s brain. Increases productivity and accuracy As demonstrated by an environmental color coordination study conducted by the US Navy, in the three years following the introduction of color into the environment a drop of accident frequency from 6.4 to 4.6 or 28% was noted.4 This corroborates an independent study demonstrating white and off-white business environments resulted in a 25% or more drop in human efficiency.5 Color’s demonstrated effectiveness on improving student’s attention span as well as both student and teacher’s sense of time, is a further reason as to how color can increase the productivity in a classroom. The mental stimulation passively received by the color in a room, helps the student and teacher stay focused on the task at hand. This idea is further supported by Harry Wohlfarth’s 1983 study of four elementary schools that notes that schools that received improved lighting and color showed the largest improvements in academic performance and IQ scores.6 Wednesday June 18, 2003 NeoCON The Impact of Color on Learning The demonstrated negatives of monotone environments also support the positives demonstrated by colorful environments. For example, apes left alone surrounded by blank walls were found to withdraw into themselves in a manner similar to schizophrenics. 
Humans were also found to turn inward in monotone environments, which may induce feelings of anxiety, fear and distress resulting from understimulation. This lack of stimulation further creates a sense of restlessness, excessive emotional response, difficulty in concentration and irritation. Aids in wayfinding With the growing focusing on smaller learning communities, many schools are organizing their facilites around a school within a school plan. Using color to further articulate these smaller learning communities aids in developing place identity. The color can create a system of order and help to distinguish important and unimportant elements in the environment. The use of color and graphics to aid wayfinding is particularly important for primary school children who starting at the age of three have begun to recognize and match colors and finds design’s that emphasize a child as a unique and separate person can be stimulating. Supports developmental processes Being sensitive to each age group’s different responses to color is key in creating an environment stimulating to their educational experience. Children’s rejection or acceptance of certain colors is a mirror of their development into adulthood.7 Younger children find high contrast and bright colors stimulating with a growing pechant for colors that create patterns. Once students transition into adolescence, however, the cooler colors and more subdued hues provide enough stimulation to them without proving distracting or stress-inducing. Guidelines for Academic Environments: Frank H. Mahnke in his book, Color, Environment and Human Response, offers designers guidelines specifically for integrating color in the educational environment. His guidelines stem from his own research in the fields of color and environmental psychology. • Preschool and Elementary school prefer a warm, bright color scheme that compliments their natural extroverted nature . • Cool colors are recommended for upper grade and secondary classrooms for their ability to focus concentration. • Hallways can have more colored range than in the classroom and be used to give the school a distintive personality. • Libraries utilize a pale or light green creating an effect that enhances quietness and concentration. Additional color application guidelines gleaned from the many sources reviewed are: • Maximum ratio of brightness difference of 3 to 1 between ceiling and furniture finish. (White celining at 90% reflectance, desk finish at 30% reflectance) • Brightness ratio in general field of view is within 5 to 1 promotion smooth unencumbered vision that enables average school tasks to be performed comfortably • End wall treatments in mediu", "title": "" }, { "docid": "3b4622a4ad745fc0ffb3b6268eb969fa", "text": "Eruptive syringomas: unresponsiveness to oral isotretinoin A 22-year-old man of Egyptian origin was referred to our department due to exacerbation of pruritic pre-existing papular dermatoses. The skin lesions had been present since childhood. The family history was negative for a similar condition. The patient complained of exacerbation of the pruritus during physical activity under a hot climate and had moderate to severe pruritus during his work. Physical examination revealed multiple reddish-brownish smooth-surfaced, symmetrically distributed papules 2–4 mm in diameter on the patient’s trunk, neck, axillae, and limbs (Fig. 1). The rest of the physical examination was unremarkable. The Darier sign was negative. 
A skin biopsy was obtained from a representative lesion on the trunk. Histopathologic examination revealed a small, wellcircumscribed neoplasm confined to the upper dermis, composed of small solid and ductal structures relatively evenly distributed in a sclerotic collagenous stroma. The solid elements were of various shapes (round, oval, curvilinear, “comma-like,” or “tadpole-like”) (Fig. 2). These microscopic features and the clinical presentation were consistent with the diagnosis of eruptive syringomas. Our patient was treated with a short course of oral antihistamines without any effect and subsequently with low-dose isotretinoin (10 mg/day) for 5 months. No improvement of the skin eruption was observed while cessation of the pruritus was accomplished. Syringoma is a common adnexal tumor with differentiation towards eccrine acrosyringium composed of small solid and ductal elements embedded in a sclerotic stroma and restricted as a rule to the upper to mid dermis, usually presenting clinically as multiple lesions on the lower eyelids and cheeks of adolescent females. A much less common variant is the eruptive or disseminated syringomas, which involve primarily young women. Eruptive syringomas are characterized by rapid development during a short period of time of hundreds of small (1–5 mm), ill-defined, smooth surfaced, skin-colored, pink, yellowish, or brownish papules typically involving the face, trunk, genitalia, pubic area, and extremities but can occur principally in any site where eccrine glands are found. The pathogenesis of eruptive syringoma remains unclear. Some authors have recently challenged the traditional notion that eruptive syringomas are neoplastic lesions. Chandler and Bosenberg presented evidence that eruptive syringomas result from autoimmune destruction of the acrosyringium and proposed the term autoimmune acrosyringitis with ductal cysts. Garrido-Ruiz et al. support the theory that eruptive syringomas may represent a hyperplastic response of the eccrine duct to an inflammatory reaction. In a recent systematic review by Williams and Shinkai the strongest association of syringomas was with Down’s syndrome (183 reported cases, 22.2%). Syringomas are also associated with diabetes mellitus (17 reported cases, 2.1%), Ehlers–Danlos", "title": "" } ]
scidocsrr
e7b348bdd5435c5867447254f105b01f
Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation
[ { "docid": "4f400f8e774ebd050ba914011da73514", "text": "This paper summarizes the method of polyp detection in colonoscopy images and provides preliminary results to participate in ISBI 2015 Grand Challenge on Automatic Polyp Detection in Colonoscopy videos. The key aspect of the proposed method is to learn hierarchical features using convolutional neural network. The features are learned in different scales to provide scale-invariant features through the convolutional neural network, and then each pixel in the colonoscopy image is classified as polyp pixel or non-polyp pixel through fully connected network. The result is refined via smooth filtering and thresholding step. Experimental result shows that the proposed neural network can classify patches of polyp and non-polyp region with an accuracy of about 90%.", "title": "" } ]
[ { "docid": "589078a80d4034d4929676d359c16398", "text": "This paper describes the University of Sheffield’s submission for the WMT16 Multimodal Machine Translation shared task, where we participated in Task 1 to develop German-to-English and Englishto-German statistical machine translation (SMT) systems in the domain of image descriptions. Our proposed systems are standard phrase-based SMT systems based on the Moses decoder, trained only on the provided data. We investigate how image features can be used to re-rank the n-best list produced by the SMT model, with the aim of improving performance by grounding the translations on images. Our submissions are able to outperform the strong, text-only baseline system for both directions.", "title": "" }, { "docid": "9c05452b964c67b8f79ce7dfda4a72e5", "text": "The Internet is evolving rapidly toward the future Internet of Things (IoT) which will potentially connect billions or even trillions of edge devices which could generate huge amount of data at a very high speed and some of the applications may require very low latency. The traditional cloud infrastructure will run into a series of difficulties due to centralized computation, storage, and networking in a small number of datacenters, and due to the relative long distance between the edge devices and the remote datacenters. To tackle this challenge, edge cloud and edge computing seem to be a promising possibility which provides resources closer to the resource-poor edge IoT devices and potentially can nurture a new IoT innovation ecosystem. Such prospect is enabled by a series of emerging technologies, including network function virtualization and software defined networking. In this survey paper, we investigate the key rationale, the state-of-the-art efforts, the key enabling technologies and research topics, and typical IoT applications benefiting from edge cloud. We aim to draw an overall picture of both ongoing research efforts and future possible research directions through comprehensive discussions.", "title": "" }, { "docid": "b82c2865524e34fd61f1555fc9ba5fbf", "text": "Optimization of decision problems in stochastic environments is usually concerned with maximizing the probability of achieving the goal and minimizing the expected episode length. For interacting agents in time-critical applications, learning of the possibility of scheduling of subtasks (events) or the full task is an additional relevant issue. Besides, there exist highly stochastic problems where the actual trajectories show great variety from episode to episode, but completing the task takes almost the same amount of time. The identification of sub-problems of this nature may promote e.g., planning, scheduling and segmenting Markov decision processes. In this work, formulae for the average duration as well as the standard deviation of the duration of events are derived. We show, that the emerging Bellman-type equation is a simple extension of Sobel’s work (1982) and that methods of dynamic programming as well as methods of reinforcement learning can be applied. 
A computer demonstration on a toy problem serves to highlight the principle.", "title": "" }, { "docid": "0e002aae88332f8143e6f3a19c4c578b", "text": "While attachment research has demonstrated that parents' internal working models of attachment relationships tend to be transmitted to their children, affecting children's developmental trajectories, this study specifically examines associations between adult attachment status and observable parent, child, and dyadic behaviors among children with autism and associated neurodevelopmental disorders of relating and communicating. The Adult Attachment Interview (AAI) was employed to derive parental working models of attachment relationships. The Functional Emotional Assessment Scale (FEAS) was used to determine the quality of relational and functional behaviors in parents and their children. The sample included parents and their 4- to 16-year-old children with autism and associated neurodevelopmental disorders. Hypothesized relationships between AAI classifications and FEAS scores were supported. Significant correlations were found between AAI classification and FEAS scores, indicating that children with autism spectrum disorders whose parents demonstrated secure attachment representations were better able to initiate and respond in two-way pre-symbolic gestural communication; organize two-way social problem-solving communication; and engage in imaginative thinking, symbolic play, and verbal communication. These findings lend support to the relevance of the parent's state of mind pertaining to attachment status to child and parent relational behavior in cases wherein the child has been diagnosed with autism or an associated neurodevelopmental disorder of relating and communicating. A model emerges from these findings for conceptualizing relationships between parental internal models of attachment relationships and parent-child relational and functional levels that may aid in differentiating interventions.", "title": "" }, { "docid": "cc9741eb6e5841ddf10185578f26a077", "text": "The context of prepaid mobile telephony is specific in the way that customers are not contractually linked to their operator and thus can cease their activity without notice. In order to estimate the retention efforts that can be directed towards each individual customer, the operator must distinguish the customers presenting a strong churn risk from the others. This work presents a data mining application leading to a churn detector. We compare artificial neural networks (ANN), which have historically been applied to this problem, with support vector machines (SVM), which are particularly effective in classification and well suited to noisy data. Thus, the objective of this article is to compare the application of SVM and ANN to churn detection in prepaid cellular telephony. We show that SVM gives better results than ANN on this specific problem.", "title": "" }, { "docid": "752eea750f91318c3c45d250059cb597", "text": "To estimate the value functions of policies from exploratory data, most model-free off-policy algorithms rely on importance sampling, where the use of importance sampling ratios often leads to estimates with severe variance. It is thus desirable to learn off-policy without using the ratios. However, such an algorithm does not exist for multi-step learning with function approximation. In this paper, we introduce the first such algorithm based on temporal-difference (TD) learning updates. 
We show that an explicit use of importance sampling ratios can be eliminated by varying the amount of bootstrapping in TD updates in an action-dependent manner. Our new algorithm achieves stability using a two-timescale gradient-based TD update. A prior algorithm based on lookup table representation called Tree Backup can also be retrieved using action-dependent bootstrapping, becoming a special case of our algorithm. In two challenging off-policy tasks, we demonstrate that our algorithm is stable, effectively avoids the large variance issue, and can perform substantially better than its state-of-the-art counterpart.", "title": "" }, { "docid": "ddc556ae150e165dca607e4a674583ae", "text": "Increasing patient numbers, changing demographics and altered patient expectations have all contributed to the current problem with 'overcrowding' in emergency departments (EDs). The problem has reached crisis level in a number of countries, with significant implications for patient safety, quality of care, staff 'burnout' and patient and staff satisfaction. There is no single, clear definition of the cause of overcrowding, nor a simple means of addressing the problem. For some hospitals, the option of ambulance diversion has become a necessity, as overcrowded waiting rooms and 'bed-block' force emergency staff to turn patients away. But what are the options when ambulance diversion is not possible? Christchurch Hospital, New Zealand is a tertiary level facility with an emergency department that sees on average 65,000 patients per year. There are no other EDs to whom patients can be diverted, and so despite admission rates from the ED of up to 48%, other options need to be examined. In order to develop a series of unified responses, which acknowledge the multifactorial nature of the problem, the Emergency Department Cardiac Analogy model of ED flow, was developed. This model highlights the need to intervene at each of three key points, in order to address the issue of overcrowding and its associated problems.", "title": "" }, { "docid": "d7e61562c913fa9fa265fd8ef5288cb5", "text": "For our project, we consider the task of classifying the gender of an author of a blog, novel, tweet, post or comment. Previous attempts have considered traditional NLP models such as bag of words and n-grams to capture gender differences in authorship, and apply it to a specific media (e.g. formal writing, books, tweets, or blogs). Our project takes a novel approach by applying deep learning models developed by Lai et al to directly learn the gender of blog authors. We further refine their models and present a new deep learning model, the Windowed Recurrent Convolutional Neural Network (WRCNN), for gender classification. Our approaches are tested and trained on several datasets: a blog dataset used by Mukherjee et al, and two datasets representing 19th and 20th century authors, respectively. We report an accuracy of 86% on the blog dataset with our WRCNN model, comparable with state-of-the-art implementations.", "title": "" }, { "docid": "7115c7f17faa8712dbdeac631f022ae4", "text": "Scientific workflows, like other applications, benefit from the cloud computing, which offers access to virtually unlimited resources provisioned elastically on demand. In order to efficiently execute a workflow in the cloud, scheduling is required to address many new aspects introduced by cloud resource provisioning. 
In the last few years, many techniques have been proposed to tackle different cloud environments enabled by the flexible nature of the cloud, leading to the techniques of different designs. In this paper, taxonomies of cloud workflow scheduling problem and techniques are proposed based on analytical review. We identify and explain the aspects and classifications unique to workflow scheduling in the cloud environment in three categories, namely, scheduling process, task and resource. Lastly, review of several scheduling techniques are included and classified onto the proposed taxonomies. We hope that our taxonomies serve as a stepping stone for those entering this research area and for further development of scheduling technique.", "title": "" }, { "docid": "0d83d1dc97d65d9aa4969e016a360451", "text": "This paper proposes and evaluates a novel analytical performance model to study the efficiency and scalability of software-defined infrastructure (SDI) to host adaptive applications. The SDI allows applications to communicate their adaptation requirements at run-time. Adaptation scenarios require computing and networking resources to be provided to applications in a timely manner to facilitate seamless service delivery. Our analytical model yields the response time of realizing adaptations on the SDI and reveals the scalability limitations. We conduct extensive testbed experiments on a cloud environment to verify the accuracy and fidelity of the model. Cloud service providers can leverage the proposed model to perform capacity planning and bottleneck analysis when they accommodate adaptive applications.", "title": "" }, { "docid": "bdde191440caa21c1f162ffa70f8075f", "text": "There is a strong trend in using permanent magnet synchronous machines for very high speed, high power applications due to their high efficiencies, versatility and compact nature. To increase power output for a given speed, rotor design becomes critical in order to maximize rotor volume and hence torque output for a given electrical loading and cooling capability. The two main constraints on rotor volume are mechanical, characterized by stresses in the rotor and resonant speeds of the rotor assembly. The level of mechanical stresses sustained in rotors increases with their radius and speed and, as this is pushed higher, previously minor effects become important in rotor design. This paper describes an observed shear stress concentration in sleeved permanent magnet rotors, caused by the Poisson effect, which can lead to magnet cracking and rotor failure. A simple analytical prediction of the peak shear stress is presented and methods for mitigating it are recommended.", "title": "" }, { "docid": "41b8fb6fd9237c584ce0211f94a828be", "text": "Over the last few years, two of the main research directions in machine learning of natural language processing have been the study of semi-supervised learning algorithms as a way to train classifiers when the labeled data is scarce, and the study of ways to exploit knowledge and global information in structured learning tasks. In this paper, we suggest a method for incorporating domain knowledge in semi-supervised learning algorithms. Our novel framework unifies and can exploit several kinds of task specific constraints. 
The experimental results presented in the information extraction domain demonstrate that applying constraints helps the model to generate better feedback during learning, and hence the framework allows for high performance learning with significantly less training data than was possible before on these tasks.", "title": "" }, { "docid": "159222cde67c2d08e0bde7996b422cd6", "text": "Superficial thrombophlebitis of the dorsal vein of the penis, known as penile Mondor’s disease, is an uncommon genital disease. We report on a healthy 44-year-old man who presented with painful penile swelling, ecchymosis, and penile deviation after masturbation, which initially imitated a penile fracture. Thrombosis of the superficial dorsal vein of the penis without rupture of corpus cavernosum was found during surgical exploration. The patient recovered without erectile dysfunction.", "title": "" }, { "docid": "71ff52158a45b1869500630cd5cb041b", "text": "Heat shock proteins (HSPs) are a set of highly conserved proteins that can serve as intestinal gate keepers in gut homeostasis. Here, effects of a probiotic, Lactobacillus rhamnosus GG (LGG), and two novel porcine isolates, Lactobacillus johnsonii strain P47-HY and Lactobacillus reuteri strain P43-HUV, on cytoprotective HSP expression and gut barrier function, were investigated in a porcine IPEC-J2 intestinal epithelial cell line model. The IPEC-J2 cells polarized on a permeable filter exhibited villus-like cell phenotype with development of apical microvilli. Western blot analysis detected HSP expression in IPEC-J2 and revealed that L. johnsonii and L. reuteri strains were able to significantly induce HSP27, despite high basal expression in IPEC-J2, whereas LGG did not. For HSP72, only the supernatant of L. reuteri induced the expression, which was comparable to the heat shock treatment, which indicated that HSP72 expression was more stimulus specific. The protective effect of lactobacilli was further studied in IPEC-J2 under an enterotoxigenic Escherichia coli (ETEC) challenge. ETEC caused intestinal barrier destruction, as reflected by loss of cell-cell contact, reduced IPEC-J2 cell viability and transepithelial electrical resistance, and disruption of tight junction protein zonula occludens-1. In contrast, the L. reuteri treatment substantially counteracted these detrimental effects and preserved the barrier function. L. johnsonii and LGG also achieved barrier protection, partly by directly inhibiting ETEC attachment. Together, the results indicate that specific strains of Lactobacillus can enhance gut barrier function through cytoprotective HSP induction and fortify the cell protection against ETEC challenge through tight junction protein modulation and direct interaction with pathogens.", "title": "" }, { "docid": "53c0564d82737d51ca9b7ea96a624be4", "text": "In part 1 of this article, an occupational therapy model of practice for children with attention deficit hyperactivity disorder (ADHD) was described (Chu and Reynolds 2007). It addressed some specific areas of human functioning related to children with ADHD in order to guide the practice of occupational therapy. The model provides an approach to identifying and communicating occupational performance difficulties in relation to the interaction between the child, the environment and the demands of the task. A family-centred occupational therapy assessment and treatment package based on the model was outlined. 
The delivery of the package was underpinned by the principles of the family-centred care approach. Part 2 of this two-part article reports on a multicentre study, which was designed to evaluate the effectiveness and acceptability of the proposed assessment and treatment package and thereby to offer some validation of the delineation model. It is important to note that no treatment has yet been proved to ‘cure’ the condition of ADHD or to produce any enduring effects in affected children once the treatment is withdrawn. So far, the only empirically validated treatments for children with ADHD with substantial research evidence are psychostimulant medication, behavioural and educational management, and combined medication and behavioural management (DuPaul and Barkley 1993, A family-centred occupational therapy assessment and treatment package for children with attention deficit hyperactivity disorder (ADHD) was evaluated. The package involves a multidimensional evaluation and a multifaceted intervention, which are aimed at achieving a goodness-of-fit between the child, the task demands and the environment in which the child carries out the task. The package lasts for 3 months, with 12 weekly contacts with the child, parents and teacher. A multicentre study was carried out, with 20 occupational therapists participating. Following a 3-day training course, they implemented the package and supplied the data that they had collected from 20 children. The outcomes were assessed using the ADHD Rating Scales, pre-intervention and post-intervention. The results showed behavioural improvement in the majority of the children. The Measure of Processes of Care – 20-item version (MPOC-20) provided data on the parents’ perceptions of the family-centredness of the package and also showed positive ratings. The results offer some support for the package and the guiding model of practice, but caution should be exercised in generalising the results because of the small sample size, lack of randomisation, absence of a control group and potential experimenter effects from the research therapists. A larger-scale randomised controlled trial should be carried out to evaluate the efficacy of an improved package.", "title": "" }, { "docid": "8ff481b3b35b74356d876c28513dc703", "text": "This paper describes the ScratchJr research project, a collaboration between Tufts University's Developmental Technologies Research Group, MIT's Lifelong Kindergarten Group, and the Playful Invention Company. Over the past five years, dozens of ScratchJr prototypes have been designed and studied with over 300 K-2nd grade students, teachers and parents. ScratchJr allows children ages 5 to 7 years to explore concepts of computer programming and digital content creation in a safe and fun environment. This paper describes the progression of major prototypes leading to the current public version, as well as the educational resources developed for use with ScratchJr. Future directions and educational implications are also discussed.", "title": "" }, { "docid": "7832707feef1e81c3a01e974c37a960b", "text": "Most current commercial automated fingerprint-authentication systems on the market are based on the extraction of the fingerprint minutiae, and on medium resolution (500 dpi) scanners. Sensor manufacturers tend to reduce the sensing area in order to adapt it to low-power mobile hand-held communication systems and to lower the cost of their devices. 
An interesting alternative is designing a novel fingerprint-authentication system capable of dealing with an image from a small, high-resolution (1000 dpi) sensor area, based on combined level 2 (minutiae) and level 3 (sweat pores) feature extraction. In this paper, we propose a new strategy and implementation of a series of techniques for automatic level 2 and level 3 feature extraction in fragmentary fingerprint comparison. The main challenge in achieving high reliability while using a small portion of a fingerprint for matching is that there may not be a sufficient number of minutiae, but the uniqueness of the pore configurations provides a powerful means to compensate for this insufficiency. A pilot study performed to test the presented approach confirms the efficacy of using pores in addition to the traditionally used minutiae in fragmentary fingerprint comparison.", "title": "" }, { "docid": "d18d4780cc259da28da90485bd3f0974", "text": "Osteogenesis imperfecta (OI) is a heterogeneous group of diseases affecting type I collagen and characterized by bone fragility. Lethal forms are rare and are characterized by micromelia with limb deformities. A prenatal diagnosis of lethal OI was made in two cases by ultrasound at 17 and 25 weeks of amenorrhea, complemented in one case by a CT scan of the fetal skeleton. Therapeutic termination of pregnancy was indicated in both cases.", "title": "" }, { "docid": "bee25514d15321f4f0bdcf867bb07235", "text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full Frank-Wolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate Frank-Wolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.", "title": "" }, { "docid": "b418734faef12396bbcef4df356c6fb6", "text": "Active learning techniques were employed for classification of dialogue acts over two dialogue corpora, the English human-human Switchboard corpus and the Spanish human-machine Dihana corpus. It is shown clearly that active learning improves on a baseline obtained through a passive learning approach to tagging the same data sets. 
An error reduction of 7% was obtained on Switchboard, while a factor-of-5 reduction in the amount of labeled data needed for classification was achieved on Dihana. The passive Support Vector Machine learner used as the baseline itself significantly improves the state of the art in dialogue act classification on both corpora. On Switchboard it gives a 31% error reduction compared to the previously best reported result.", "title": "" } ]
scidocsrr
50477262d8c941c3133dda64487774d5
Why are average faces attractive? The effect of view and averageness on the attractiveness of female faces.
[ { "docid": "7440cb90073c8d8d58e28447a1774b2c", "text": "Common maxims about beauty suggest that attractiveness is not important in life. In contrast, both fitness-related evolutionary theory and socialization theory suggest that attractiveness influences development and interaction. In 11 meta-analyses, the authors evaluate these contradictory claims, demonstrating that (a) raters agree about who is and is not attractive, both within and across cultures; (b) attractive children and adults are judged more positively than unattractive children and adults, even by those who know them; (c) attractive children and adults are treated more positively than unattractive children and adults, even by those who know them; and (d) attractive children and adults exhibit more positive behaviors and traits than unattractive children and adults. Results are used to evaluate social and fitness-related evolutionary theories and the veracity of maxims about beauty.", "title": "" }, { "docid": "b66609e66cc9c3844974b3246b8f737e", "text": "— Inspired by the evolutionary conjecture that sexually selected traits function as indicators of pathogen resistance in animals and humans, we examined the notion that human facial attractiveness provides evidence of health. Using photos of 164 males and 169 females in late adolescence and health data on these individuals in adolescence, middle adulthood, and later adulthood, we found that adolescent facial attractiveness was unrelated to adolescent health for either males or females, and was not predictive of health at the later times. We also asked raters to guess the health of each stimulus person from his or her photo. Relatively attractive stimulus persons were mistakenly rated as healthier than their peers. The correlation between perceived health and medically assessed health increased when attractiveness was statistically controlled, which implies that attractiveness suppressed the accurate recognition of health. These findings may have important implications for evolutionary models. 0 When social psychologists began in earnest to study physical attractiveness , they were startled by the powerful effect of facial attractiveness on choice of romantic partner (Walster, Aronson, Abrahams, & Rott-mann, 1966) and other aspects of human interaction (Berscheid & Wal-ster, 1974; Hatfield & Sprecher, 1986). More recent findings have been startling again in revealing that infants' preferences for viewing images of faces can be predicted from adults' attractiveness ratings of the faces The assumption that perceptions of attractiveness are culturally determined has thus given ground to the suggestion that they are in substantial part biologically based (Langlois et al., 1987). A biological basis for perception of facial attractiveness is aptly viewed as an evolutionary basis. It happens that evolutionists, under the rubric of sexual selection theory, have recently devoted increasing attention to the origin and function of sexually attractive traits in animal species (Andersson, 1994; Hamilton & Zuk, 1982). Sexual selection as a province of evolutionary theory actually goes back to Darwin (1859, 1871), who noted with chagrin that a number of animals sport an appearance that seems to hinder their survival chances. Although the females of numerous birds of prey, for example, are well camouflaged in drab plum-age, their mates wear bright plumage that must be conspicuous to predators. 
Darwin divined that the evolutionary force that \" bred \" the males' bright plumage was the females' preference for such showiness in a mate. Whereas Darwin saw aesthetic preferences as fundamental and did not seek to give them adaptive functions, other scholars, beginning …", "title": "" }, { "docid": "1fc10d626c7a06112a613f223391de26", "text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens , as some have suggested Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality High levels of facial asymmetry in individuals with chro-mosomal abnormalities (e.g., Down's syndrome and Tri-somy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. 3 Similar results have been reported by Langlois et al. and an anonymous reviewer for helpful comments on an earlier version of the manuscript. We also thank Graham Byatt for assistance with stimulus construction, Linda Jeffery for assistance with the figures, and Alison Clark and Catherine Hickford for assistance with data collection and statistical analysis in Experiment 1A. Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …", "title": "" } ]
[ { "docid": "b1383088b26636e6ac13331a2419f794", "text": "This paper investigates the problem of blurring caused by motion during image capture of text documents. Motion blurring prevents proper optical character recognition of the document text contents. One area of such applications is to deblur name card images obtained from handheld cameras. In this paper, a complete motion deblurring procedure for document images has been proposed. The method handles both uniform linear motion blur and uniform acceleration motion blur. Experiments on synthetic and real-life blurred images prove the feasibility and reliability of this algorithm provided that the motion is not too irregular. The restoration procedure consumes only small amount of computation time.", "title": "" }, { "docid": "85f2e049dc90bf08ecb0d34899d8b3c5", "text": "Here is little doubt that the Internet represents the spearhead of the industrial revolution. I love new technologies and gadgets that promise new and better ways of doing things. I have many such gadgets myself and I even manage to use a few of them (though not without some pain).A new piece of technology is like a new relationship, fun and exciting at first, but eventually it requires some hard work to maintain, usually in the form of time and energy. I doubt technology’s promise to improve the quality of life and I am still surprised how time-distorting and dissociating the computer and the Internet can be for me, along with the thousands of people I’ve interviewed, studied and treated in my clinical practice. It seems clear that the Internet can be used and abused in a compulsive fashion, and that there are numerous psychological factors that contribute to the Internet’s power and appeal. It appears that the very same features that drive the potency of the Net are potentially habit-forming. This study examined the self-reported Internet behavior of nearly 18,000 people who answered a survey on the ABCNEWS.com web site. Results clearly support the psychoactive nature of the Internet, and the potential for compulsive use and abuse of the Internet for certain individuals. Introduction Technology, and most especially, computers and the Internet, seem to be at best easily overused/abused, and at worst, addictive. The combination of available stimulating content, ease of access, convenience, low cost, visual stimulation, autonomy, and anonymity—all contribute to a highly psychoactive experience. By psychoactive, that is Running Head: Virtual Addiction to say mood altering, and potentially behaviorally impacting. In other words these technologies affect the manner in which we live and love. It is my contention that some of these effects are indeed less than positive, and may contribute to various negative psychological effects. The Internet and other digital technologies are only the latest in a series of “improvements” to our world which may have unintended negative effects. The experience of problems with new and unknown technologies is far from new; we have seen countless examples of newer and better things that have had unintended and unexpected deleterious effects. Remember Thalidomide, PVC/PCB’s, Atomic power, fossil fuels, even television, along with other seemingly innocuous conveniences which have been shown to be conveniently helpful, but on other levels harmful. Some of these harmful effects are obvious and tragic, while others are more subtle and insidious. 
Even seemingly innocuous advances such as the elevator, remote controls, credit card gas pumps, dishwashers, and drive-through everything, have all had unintended negative effects. They all save time and energy, but the energy they save may dissuade us from using our physical bodies as they were designed to be used. In short we have convenience ourselves to a sedentary lifestyle. Technology is amoral; it is not inherently good or evil, but it is impact on the manner in which we live our lives. American’s love technology and for some of us this trust and blind faith almost parallels a religious fanaticism. Perhaps most of all, we love it Running Head: Virtual Addiction because of the hope for the future it promises; it is this promise of a better today and a longer tomorrow which captivates us to attend to the call for new better things to come. We live in the age were computer and digital technology are always on the cusp of great things-Newer, better ways of doing things (which in some ways is true). The old becomes obsolete within a year or two. Newer is always better. Computers and the Internet purport to make our lives easier, simpler, and therefore more fulfilling, but it may not be that simple. People have become physically and psychologically dependent on many behaviors and substances for centuries. This compulsive pattern does not reflect a casual interest, but rather consists of a driven pattern of use that can frequently escalate to negatively impact our lives. The key life-areas that seem to be impacted are marriages and relationships, employment, health, and legal/financial status. The fact that substances, such as alcohol and other mood-altering drugs can create a physical and/or psychological dependence is well known and accepted. And certain behaviors such as gambling, eating, work, exercise, shopping, and sex have gained more recent acceptance with regard to their addictive potential. More recently however, there has been an acknowledgement that the compulsive performance of these behaviors may mimic the compulsive process found with drugs, alcohol and other substances. This same process appears to also be found with certain aspects of the Internet. Running Head: Virtual Addiction The Internet can and does produce clear alterations in mood; nearly 30 percent of Internet users admit to using the Net to alter their mood so as to relieve a negative mood state. In other words, they use the Internet like a drug (Greenfield, 1999). In addressing the phenomenon of Internet behavior, initial behavioral research (Young, 1996, 1998) focused on conceptual definitions of Internet use and abuse, and demonstrated similar patterns of abuse as found in compulsive gambling. There have been further recent studies on the nature and effects of the Internet. Cooper, Scherer, Boies, and Gordon (1998) examined sexuality on the Internet utilizing an extensive online survey of 9,177 Web users, and Greenfield (1999) surveyed nearly 18,000 Web users on ABCNEWS.com to examine Internet use and abuse behavior. The later study did yield some interesting trends and patterns, but also raised further areas that require clarification. There has been very little research that actually examined and measured specific behavior related to Internet use. The Carnegie Mellon University study (Kraut, Patterson, Lundmark, Kiesler, Mukopadhyay, and Scherlis, 1998) did attempt to examine and verify actual Internet use among 173 people in 73 households. 
This initial study did seem to demonstrate that there may be some deleterious effects from heavy Internet use, which appeared to increase some measures of social isolation and depression. What seems to be abundantly clear from the limited research to date is that we know very little about the human/Internet interface. Theoretical suppositions abound, but we are only just beginning to understand the nature and implications of Internet use and Running Head: Virtual Addiction abuse. There is an abundance of clinical, legal, and anecdotal evidence to suggest that there is something unique about being online that seems to produce a powerful impact on people. It is my belief that as we expand our analysis of this new and exciting area we will likely discover that there are many subcategories of Internet abuse, some of which will undoubtedly exist as concomitant disorders alongside of other addictions including sex, gambling, and compulsive shopping/spending. There are probably two types of Internet based problems: the first is defined as a primary problem where the Internet itself becomes the focus on the compulsive pattern, and secondary, where a preexisting problem (or compulsive behavior) is exacerbated via the use of the Internet. In a secondary problem, necessity is no longer the mother of invention, but rather convenience is. The Internet simply makes everything easier to acquire, and therefore that much more easily abused. The ease of access, availability, low cost, anonymity, timelessness, disinhibition, and loss of boundaries all appear to contribute to the total Internet experience. This has particular relevance when it comes to well-established forms of compulsive consumer behavior such as gambling, shopping, stock trading, and compulsive sexual behavior where traditional modalities of engaging in these behaviors pale in comparison to the speed and efficiency of the Internet. There has been considerable debate regarding the terms and definitions in describing pathological Internet behavior. Many terms have been used, including Internet abuse, Internet addiction, and compulsive Internet use. The concern over terminology Running Head: Virtual Addiction seems spurious to me, as it seems irrelevant as to what the addictive process is labeled. The underlying neurochemical changes (probably Dopamine) that occur during any pleasurable act have proven themselves to be potentially habit-forming on a brainbehavior level. The net effect is ultimately the same with regard to potential life impact, which in the case of compulsive behavior can be quite large. Any time there is a highly pleasurable human behavior that can be acquired without human interface (as can be accomplished on the Net) there seems to be greater potential for abuse. The ease of purchasing a stock, gambling, or shopping online allows for a boundless and disinhibited experience. Without the normal human interaction there is a far greater likelihood of abusive and/or compulsive behavior in these areas. Research in the field of Internet behavior is in its relative infancy. This is in part due to the fact that the depth and breadth of the Internet and World Wide Web are changing at exponential rates. With thousands of new subscribers a day and approaching (perhaps exceeding) 200 million worldwide users, the Internet represents a communications, social, and economic revolution. 
The Net now serves at the pinnacle of the digital industrial revolution, and with any revolution come new problems and difficulties.", "title": "" }, { "docid": "6888b5311d7246c5eb18142d2746ec68", "text": "Forms of well-being vary in their activation as well as valence, differing in respect of energy-related arousal in addition to whether they are negative or positive. Those differences suggest the need to refine traditional assumptions that poor person-job fit causes lower well-being. More activated forms of well-being were proposed to be associated with poorer, rather than better, want-actual fit, since greater motivation raises wanted levels of job features and may thus reduce fit with actual levels. As predicted, activated well-being (illustrated by job engagement) and more quiescent well-being (here, job satisfaction) were found to be associated with poor fit in opposite directions--positively and negatively, respectively. Theories and organizational practices need to accommodate the partly contrasting implications of different forms of well-being.", "title": "" }, { "docid": "87a6fd003dd6e23f27e791c9de8b8ba6", "text": "The well-known travelling salesman problem is the following: \" A salesman is required ~,o visit once and only once each of n different cities starting from a base city, and returning to this city. What path minimizes the to ta l distance travelled by the salesman?\" The problem has been treated by a number of different people using a var ie ty of techniques; el. Dantzig, Fulkerson, Johnson [1], where a combination of ingemtity and linear programming is used, and Miller, Tucker and Zemlin [2], whose experiments using an all-integer program of Gomory did not produce results i~ cases with ten cities although some success was achieved in eases of simply four cities. The purpose of this note is to show tha t this problem can easily be formulated in dynamic programming terms [3], and resolved computationally for up to 17 cities. For larger numbers, the method presented below, combined with various simple manipulations, may be used to obtain quick approximate solutions. Results of this nature were independently obtained by M. Held and R. M. Karp, who are in the process of publishing some extensions and computat ional results.", "title": "" }, { "docid": "1c2f873f3fb57de69f5783cc1f9699ed", "text": "Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. 
These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of techniques that have been developed in the neuroevolution community to improve performance on RL problems. To demonstrate the latter, we show that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g. DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA parallelizes better than ES, A3C, and DQN, and enables a state-of-the-art compact encoding technique that can represent million-parameter DNNs in thousands of bytes.", "title": "" }, { "docid": "fef105b33a85f76f24c468c58a7534a0", "text": "An aging population in the United States presents important challenges for patients and physicians. The presence of inflammation can contribute to an accelerated aging process, the increasing presence of comorbidities, oxidative stress, and an increased prevalence of chronic pain. As patient-centered care is embracing a multimodal, integrative approach to the management of disease, patients and physicians are increasingly looking to the potential contribution of natural products. Camu camu, a well-researched and innovative natural product, has the potential to contribute, possibly substantially, to this management paradigm. The key issue is to raise camu camu's visibility through increased emphasis on its robust evidentiary base and its various formulations, as well as making consumers, patients, and physicians more aware of its potential. A program to increase the visibility of camu camu can contribute substantially not only to the management of inflammatory conditions and its positive contribution to overall good health but also to its potential role in many disease states.", "title": "" }, { "docid": "d72bb787f20a08e70d5f0294551907d7", "text": "In this paper we present a novel strategy, DragPushing, for improving the performance of text classifiers. The strategy is generic and takes advantage of training errors to successively refine the classification model of a base classifier. We describe how it is applied to generate two new classification algorithms; a Refined Centroid Classifier and a Refined Naïve Bayes Classifier. We present an extensive experimental evaluation of both algorithms on three English collections and one Chinese corpus. The results indicate that in each case, the refined classifiers achieve significant performance improvement over the base classifiers used. Furthermore, the performance of the Refined Centroid Classifier implemented is comparable, if not better, to that of state-of-the-art support vector machine (SVM)-based classifier, but offers a much lower computational cost.", "title": "" }, { "docid": "60291da2284d7cde487094fff6f8c9c6", "text": "0959-1524/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.jprocont.2009.02.003 * Tel.: +39 02 2399 3539. E-mail address: [email protected] The aim of this paper is to review and to propose a classification of a number of decentralized, distributed and hierarchical control architectures for large scale systems. Attention is focused on the design approaches based on Model Predictive Control. 
For the considered architectures, the underlying rationale, the fields of application, the merits and limitations are discussed, the main references to the literature are reported and some future developments are suggested. Finally, a number of open problems is listed. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "45fe8a9188804b222df5f12bc9a486bc", "text": "There is renewed interest in the application of gypsum to agricultural lands, particularly of gypsum produced during flue gas desulfurization (FGD) at coal-burning power plants. We studied the effects of land application of FGD gypsum to corn ( L.) in watersheds draining to the Great Lakes. The FGD gypsum was surface applied at 11 sites at rates of 0, 1120, 2240, and 4480 kg ha after planting to 3-m by 7.6-m field plots. Approximately 12 wk after application, penetration resistance and hydraulic conductivity were measured in situ, and samples were collected for determination of bulk density and aggregate stability. No treatment effect was detected for penetration resistance or hydraulic conductivity. A positive treatment effect was seen for bulk density at only 2 of 10 sites tested. Aggregate stability reacted similarly across all sites and was decreased with the highest application of FGD gypsum, whereas the lower rates were not different from the control. Overall, there were few beneficial effects of the FGD gypsum to soil physical properties in the year of application.", "title": "" }, { "docid": "3d4fa878fe3e4d3cbeb1ccedd75ee913", "text": "Digital images are widely communicated over the internet. The security of digital images is an essential and challenging task on shared communication channel. Various techniques are used to secure the digital image, such as encryption, steganography and watermarking. These are the methods for the security of digital images to achieve security goals, i.e. confidentiality, integrity and availability (CIA). Individually, these procedures are not quite sufficient for the security of digital images. This paper presents a blended security technique using encryption, steganography and watermarking. It comprises of three key components: (1) the original image has been encrypted using large secret key by rotating pixel bits to right through XOR operation, (2) for steganography, encrypted image has been altered by least significant bits (LSBs) of the cover image and obtained stego image, then (3) stego image has been watermarked in the time domain and frequency domain to ensure the ownership. The proposed approach is efficient, simpler and secured; it provides significant security against threats and attacks. Keywords—Image security; Encryption; Steganography; Watermarking", "title": "" }, { "docid": "0344917c6b44b85946313957a329bc9c", "text": "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. 
In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm.", "title": "" }, { "docid": "1be8fa2ade3d8547044d06bd07b6fc1e", "text": "Gastric rupture with necrosis following acute gastric dilatation (AGD) is a rare and potentially fatal event; usually seen in patients with eating disorders such as anorexia nervosa or bulimia. A 12-year-old lean boy with no remarkable medical history was brought to our Emergency Department suffering acute abdominal symptoms. Emergency laparotomy revealed massive gastric dilatation and partial necrosis, with rupture of the anterior wall of the fundus of the stomach. We performed partial gastrectomy and the patient recovered uneventfully. We report this case to demonstrate that AGD and subsequent gastric rupture can occur in patients without any underlying disorders and that just a low body mass index is a risk factor for this potentially fatal condition.", "title": "" }, { "docid": "595cb7698c38b9f5b189ded9d270fe69", "text": "Sentiment Analysis can help to extract knowledge related to opinions and emotions from user generated text information. It can be applied in medical field for patients monitoring purposes. With the availability of large datasets, deep learning algorithms have become a state of the art also for sentiment analysis. However, deep models have the drawback of not being non human-interpretable, raising various problems related to model’s interpretability. Very few work have been proposed to build models that explain their decision making process and actions. In this work, we review the current sentiment analysis approaches and existing explainable systems. Moreover, we present a critical review of explainable sentiment analysis models and discussed the insight of applying explainable sentiment analysis in the medical field.", "title": "" }, { "docid": "d84ef527d58d70b3c559d21608901d2f", "text": "Whistleblowing on organizational wrongdoing is becoming increasingly prevalent. What aspects of the person, the context, and the transgression relate to whistleblowing intentions and to actual whistleblowing on corporate wrongdoing? Which aspects relate to retaliation against whistleblowers? Can we draw conclusions about the whistleblowing process by assessing whistleblowing intentions? Meta-analytic examination of 193 correlations obtained from 26 samples (N = 18,781) reveals differences in the correlates of whistleblowing intentions and actions. Stronger relationships were found between personal, contextual, and wrongdoing characteristics and whistleblowing intent than with actual whistleblowing. Retaliation might best be predicted using contextual variables. Implications for research and practice are discussed.", "title": "" }, { "docid": "a0e7712da82a338fda01e1fd0bb4a44e", "text": "Compliance specifications concisely describe selected aspects of what a business operation should adhere to. To enable automated techniques for compliance checking, it is important that these requirements are specified correctly and precisely, describing exactly the behavior intended. 
Although there are rigorous mathematical formalisms for representing compliance rules, these are often perceived to be difficult to use for business users. Regardless of notation, however, there are often subtle but important details in compliance requirements that need to be considered. The main challenge in compliance checking is to bridge the gap between informal description and a precise specification of all requirements. In this paper, we present an approach which aims to facilitate creating and understanding formal compliance requirements by providing configurable templates that capture these details as options for commonly-required compliance requirements. These options are configured interactively with end-users, using question trees and natural language. The approach is implemented in the Process Mining Toolkit ProM.", "title": "" }, { "docid": "5034984717b3528f7f47a1f88a3b1310", "text": "ALL RIGHTS RESERVED. This document contains material protected under International and Federal Copyright Laws and Treaties. Any unauthorized reprint or use of this material is prohibited. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without express written permission from the author / publisher.", "title": "" }, { "docid": "0ac38422284d164095882a3f3dd74e4f", "text": "This paper introduces the status of social recommender system research in general and collaborative filtering in particular. For the collaborative filtering, the paper shows the basic principles and formulas of two basic approaches, the user-based collaborative filtering and the item-based collaborative filtering. For the user or item similarity calculation, the paper compares the differences between the cosine-based similarity, the revised cosine-based similarity and the Pearson-based similarity. The paper also analyzes the three main challenges of the collaborative filtering algorithm and shows the related works facing the challenges. To solve the Cold Start problem and reduce the cost of best neighborhood calculation, the paper provides several solutions. At last it discusses the future of the collaborative filtering algorithm in social recommender system.", "title": "" }, { "docid": "ef976fc364d9fdb85c0d34e5b831644c", "text": "This paper presents a Mars Sample Return (MSR) Sample Acquisition and Caching (SAC) study developed for the three rover platforms: MER, MER+, and MSL. The study took into account 26 SAC requirements provided by the NASA Mars Exploration Program Office. For this SAC architecture, the reduction of mission risk was chosen by us as having greater priority than mass or volume. For this reason, we selected a “One Bit per Core” approach. The enabling technology for this architecture is Honeybee Robotics' “eccentric tubes” core breakoff approach. The breakoff approach allows the drill bits to be relatively small in diameter and in turn lightweight. Hence, the bits could be returned to Earth with the cores inside them with only a modest increase to the total returned mass, but a significant decrease in complexity. Having dedicated bits allows a reduction in the number of core transfer steps and actuators. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. An added advantage is faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). 
Drill bits are based on the BigTooth bit concept, which allows re-use of the same bit multiple times, if necessary. The proposed SAC consists of a 1) Rotary-Percussive Core Drill, 2) Bit Storage Carousel, 3) Cache, 4) Robotic Arm, and 5) Rock Abrasion and Brushing Bit (RABBit), which is deployed using the Drill. The system also includes PreView bits (for viewing of cores prior to caching) and Powder bits for acquisition of regolith or cuttings. The SAC total system mass is less than 22 kg for MER and MER+ size rovers and less than 32 kg for the MSL-size rover.", "title": "" }, { "docid": "f3e9858900dd75c86d106856e63f1ab2", "text": "In the near future, new storage-class memory (SCM) technologies -- such as phase-change memory and memristors -- will radically change the nature of long-term storage. These devices will be cheap, non-volatile, byte addressable, and near DRAM density and speed. While SCM offers enormous opportunities, profiting from them will require new storage systems specifically designed for SCM's properties.\n This paper presents Echo, a persistent key-value storage system designed to leverage the advantages and address the challenges of SCM. The goals of Echo include high performance for both small and large data objects, recoverability after failure, and scalability on multicore systems. Echo achieves its goals through the use of a two-level memory design targeted for memory systems containing both DRAM and SCM, exploitation of SCM's byte addressability for fine-grained transactions in non-volatile memory, and the use of snapshot isolation for concurrency, consistency, and versioning. Our evaluation demonstrates that Echo's SCM-centric design achieves the durability guarantees of the best disk-based stores with the performance characteristics approaching the best in-memory key-value stores.", "title": "" }, { "docid": "97a458ead2bd94775c7d27a6a47ce8e6", "text": "This article presents an approach to using cognitive models of narrative discourse comprehension to define an explicit computational model of a reader’s comprehension process during reading, predicting aspects of narrative focus and inferencing with precision. This computational model is employed in a narrative discourse generation system to select and sequence content from a partial plan representing story world facts, objects, and events, creating discourses that satisfy comprehension criteria. Cognitive theories of narrative discourse comprehension define explicit models of a reader’s mental state during reading. These cognitive models are created to test hypotheses and explain empirical results about reader comprehension, but do not often contain sufficient precision for implementation on a computer. Therefore, they have not previously been suitable for computational narrative generation. The results of three experiments are presented and discussed, exhibiting empirical support for the approach presented. This work makes a number of contributions that advance the state-of-the-art in narrative discourse generation: a formal model of narrative focus, a formal model of online inferencing in narrative, a method of selecting narrative discourse content to satisfy comprehension criteria, and both implementation and evaluation of these models. .................................................................................................................................................................................", "title": "" } ]
scidocsrr
5b603ea8fa99282dbea13015eebe9613
A Novel Video-Based Smoke Detection Method Using Image Separation
[ { "docid": "70e88fe5fc43e0815a1efa05e17f7277", "text": "Smoke detection is a crucial task in many video surveillance applications and could have a great impact on raising the level of safety of urban areas. Many commercial smoke detection sensors exist, but most of them cannot be applied in open-space or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflexes. A novel Bayesian approach is adopted to detect smoke regions in the scene by analyzing image energy by means of the Wavelet Transform coefficients and Color Information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene; the detection is then strengthened by evaluating the color blending between a reference smoke color and the input frame. The proposed system is capable of rapidly detecting smoke events in both night and day conditions with a reduced number of false alarms, and hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign on both recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real-world scenarios, such as outdoor storages and forests.", "title": "" } ]
[ { "docid": "645f4db902246c01476ae941004bcd94", "text": "The Internet of Things is part of our everyday life, which applies to all aspects of human life; from smart phones and environmental sensors to smart devices used in the industry. Although the Internet of Things has many advantages, there are risks and dangers as well that need to be addressed. The information used and transmitted on Internet of Things contain important info about the daily lives of people, banking information, location and geographical information, environmental and medical information, together with many other sensitive data. Therefore, it is critical to identify and address the security issues and challenges of Internet of Things. In this article, considering the broad scope of this field and its literature, we are going to express some comprehensive information on security challenges of the Internet of Things.", "title": "" }, { "docid": "b798103f64ec684a4d0f530c7add8eeb", "text": "Self-adaptation is a key feature of evolutionary algorithms (EAs). Although EAs have been used successfully to solve a wide variety of problems, the performance of this technique depends heavily on the selection of the EA parameters. Moreover, the process of setting such parameters is considered a time-consuming task. Several research works have tried to deal with this problem; however, the construction of algorithms letting the parameters adapt themselves to the problem is a critical and open problem of EAs. This work proposes a novel ensemble machine learning method that is able to learn rules, solve problems in a parallel way and adapt parameters used by its components. A self-adaptive ensemble machine consists of simultaneously working extended classifier systems (XCSs). The proposed ensemble machine may be treated as a meta classifier system. A new self-adaptive XCS-based ensemble machine was compared with two other XCSbased ensembles in relation to one-step binary problems: Multiplexer, One Counts, Hidden Parity, and randomly generated Boolean functions, in a noisy version as well. Results of the experiments have shown the ability of the model to adapt the mutation rate and the tournament size. The results are analyzed in detail.", "title": "" }, { "docid": "c0ac3eff02d60a293bb88807d289223d", "text": "Computational models in cognitive neuroscience should ideally use biological properties and powerful computational principles to produce behavior consistent with psychological findings. Error-driven backpropagation is computationally powerful and has proven useful for modeling a range of psychological data but is not biologically plausible. Several approaches to implementing backpropagation in a biologically plausible fashion converge on the idea of using bid irectional activation propagation in interactive networks to convey error signals. This article demonstrates two main points about these error-driven interactive networks: (1) they generalize poorly due to attractor dynamics that interfere with the network's ability to produce novel combinatorial representations systematically in response to novel inputs, and (2) this generalization problem can be remedied by adding two widely used mechanistic principles, inhibitory competition and Hebbian learning, that can be independently motivated for a variety of biological, psychological, and computational reasons. 
Simulations using the Leabra algorithm, which combines the generalized recirculation (GeneRec), biologically plausible, error-driven learning algorithm with inhibitory competition and Hebbian learning, show that these mechanisms can result in good generalization in interactive networks. These results support the general conclusion that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.", "title": "" }, { "docid": "7974885ccc886fb307dfdb98606951ed", "text": "We examined whether the male spatial advantage varies across children from different socioeconomic (SES) groups. In a longitudinal study, children were administered two spatial tasks requiring mental transformations and a syntax comprehension task in the fall and spring of second and third grades. Boys from middle- and high-SES backgrounds outperformed their female counterparts on both spatial tasks, whereas boys and girls from a low-SES group did not differ in their performance level on these tasks. As expected, no sex differences were found on the verbal comprehension task. Prior studies have generally been based on the assumption that the male spatial advantage reflects ability differences in the population as a whole. Our finding that the advantage is sensitive to variations in SES provides a challenge to this assumption, and has implications for a successful explanation of the sex-related difference in spatial skill.", "title": "" }, { "docid": "16bafec4544454a948d72f26861d0313", "text": "Measuring the similarity between documents is an important operation in the text processing field. In this paper, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: a) The feature appears in both documents, b) the feature appears in only one document, and c) the feature appears in none of the documents. For the first case, the similarity increases as the difference between the two involved feature values decreases. Furthermore, the contribution of the difference is normally scaled. For the second case, a fixed value is contributed to the similarity. For the last case, the feature has no contribution to the similarity. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems. The results show that the performance obtained by the proposed measure is better than that achieved by other measures.", "title": "" }, { "docid": "fb43cec4064dfad44d54d1f2a4981262", "text": "Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of know ledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relati on vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimpli fied loss metric, and are not competitive enough to model various and complex entities/relations in knowledge bases. 
To address this issue, we propose TransA, an adaptive metric approach for embedding, utilizing the metric learning idea s to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.", "title": "" }, { "docid": "0257589dc59f1ddd4ec19a2450e3156f", "text": "Drawing upon the literatures on beliefs about magical contagion and property transmission, we examined people's belief in a novel mechanism of human-to-human contagion, emotional residue. This is the lay belief that people's emotions leave traces in the physical environment, which can later influence others or be sensed by others. Studies 1-4 demonstrated that Indians are more likely than Americans to endorse a lay theory of emotions as substances that move in and out of the body, and to claim that they can sense emotional residue. However, when the belief in emotional residue is measured implicitly, both Indians and American believe to a similar extent that emotional residue influences the moods and behaviors of those who come into contact with it (Studies 5-7). Both Indians and Americans also believe that closer relationships and a larger number of people yield more detectable residue (Study 8). Finally, Study 9 demonstrated that beliefs about emotional residue can influence people's behaviors. Together, these finding suggest that emotional residue is likely to be an intuitive concept, one that people in different cultures acquire even without explicit instruction.", "title": "" }, { "docid": "2f02235636c5c0aecd8918cba512888d", "text": "To determine whether an AIDS prevention mass media campaign influenced risk perception, self-efficacy and other behavioural predictors. We used household survey data collected from 2,213 sexually experienced male and female Kenyans aged 15-39. Respondents were administered a questionnaire asking them about their exposure to branded and generic mass media messages concerning HIV/AIDS and condom use. They were asked questions concerning their personal risk perception, self-efficacy, condom effectiveness, condom availability, and their embarrassment in obtaining condoms. Logistic regression analysis was used to determine the impact of exposure to mass media messages on these predictors of behaviour change. Those exposed to branded advertising messages were significantly more likely to consider themselves at higher risk of acquiring HIV and to believe in the severity of AIDS. Exposure to branded messages was also associated with a higher level of personal self-efficacy, a greater belief in the efficacy of condoms, a lower level of perceived difficulty in obtaining condoms and reduced embarrassment in purchasing condoms. Moreover, there was a dose-response relationship: a higher intensity of exposure to advertising was associated with more positive outcomes. Exposure to generic advertising messages was less frequently associated with positive health beliefs and these relationships were also weaker. Branded mass media campaigns that promote condom use as an attractive lifestyle choice are likely to contribute to the development of perceptions that are conducive to the adoption of condom use.", "title": "" }, { "docid": "abef126b2e8cb932378013e1cf125b15", "text": "We describe our submission to the CoNLL 2017 shared task, which exploits the shared common knowledge of a language across different domains via a domain adaptation technique. 
Our approach is an extension to the recently proposed adversarial training technique for domain adaptation, which we apply on top of a graph-based neural dependency parsing model on bidirectional LSTMs. In our experiments, we find our baseline graphbased parser already outperforms the official baseline model (UDPipe) by a large margin. Further, by applying our technique to the treebanks of the same language with different domains, we observe an additional gain in the performance, in particular for the domains with less training data.", "title": "" }, { "docid": "a2223d57a866b0a0ef138e52fb515b84", "text": "This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges with detecting paraphrases in user generated short texts, such as Twitter, which often contain language irregularity and noise, and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which enables to create an informative semantic representation of each sentence by (1) using CNN to extract the local region information in form of important n-grams from the sentence, and (2) applying RNN to capture the long-term dependency information. In addition, we perform a comparative study on stateof-the-art approaches within paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results in both types of texts, thus making it more robust and generic than the existing approaches.", "title": "" }, { "docid": "423d8264602c19c313c044fcf08c0717", "text": "Since the last two decades, XML has gained momentum as the standard for web information management and complex data representation. Also, collaboratively built semi-structured information resources, such as Wikipedia, have become prevalent on the Web and can be inherently encoded in XML. Yet most methods for processing XML and semi-structured information handle mainly the syntactic properties of the data, while ignoring the semantics involved. To devise more intelligent applications, one needs to augment syntactic features with machine-readable semantic meaning. This can be achieved through the computational identification of the meaning of data in context, also known as (a.k.a.) automated semantic analysis and disambiguation, which is nowadays one of the main challenges at the core of the Semantic Web. This survey paper provides a concise and comprehensive review of the methods related to XML-based semi-structured semantic analysis and disambiguation. It is made of four logical parts. First, we briefly cover traditional word sense disambiguation methods for processing flat textual data. 
Second, we describe and categorize disambiguation techniques developed and extended to handle semi-structured and XML data. Third, we describe current and potential application scenarios that can benefit from XML semantic analysis, including: data clustering and semantic-aware indexing, data integration and selective dissemination, semantic-aware and temporal querying, web and mobile services matching and composition, blog and social semantic network analysis, and ontology learning. Fourth, we describe and discuss ongoing challenges and future directions, including: the quantification of semantic ambiguity, expanding XML disambiguation context, combining structure and content, using collaborative/social information sources, integrating explicit and implicit semantic analysis, emphasizing user involvement, and reducing computational complexity.", "title": "" }, { "docid": "25bb62673d1bfadfc751bd10413c94dd", "text": "Phase-change materials are some of the most promising materials for data-storage applications. They are already used in rewriteable optical data storage and offer great potential as an emerging non-volatile electronic memory. This review looks at the unique property combination that characterizes phase-change materials. The crystalline state often shows an octahedral-like atomic arrangement, frequently accompanied by pronounced lattice distortions and huge vacancy concentrations. This can be attributed to the chemical bonding in phase-change alloys, which is promoted by p-orbitals. From this insight, phase-change alloys with desired properties can be designed. This is demonstrated for the optical properties of phase-change alloys, in particular the contrast between the amorphous and crystalline states. The origin of the fast crystallization kinetics is also discussed.", "title": "" }, { "docid": "b4ed57258b85ab4d81d5071fc7ad2cc9", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "ee97c467539a3e08cd3cfe7a8f7ee3e2", "text": "The problem of geometric alignment of two roughly pre-registered, partially overlapping, rigid, noisy 3D point sets is considered. A new natural and simple, robustified extension of the popular Iterative Closest Point (ICP) algorithm [1] is presented, called Trimmed ICP. The new algorithm is based on the consistent use of the Least Trimmed Squares approach in all phases of the operation. Convergence is proved and an efficient implementation is discussed. 
TrICP is fast, applicable to overlaps under 50%, robust to erroneous and incomplete measurements, and has easy-to-set parameters. ICP is a special case of TrICP when the overlap parameter is 100%. Results of a performance evaluation study on the SQUID database of 1100 shapes are presented. The tests compare TrICP and the Iterative Closest Reciprocal Point algorithm [2].", "title": "" }, { "docid": "f3dbd127e5d76706b592c6154528a909", "text": "Due to the undeniable advantage of prediction and proactivity, many research areas and industrial applications are accelerating the pace to keep up with data science and predictive analytics. However and due to three well-known facts, the reactive Complex Event Processing (CEP) technology might lag behind when prediction becomes a requirement. 1st fact: The one and only inference mechanism in this domain is totally guided by CEP rules. 2nd fact: The only way to define a CEP rule is by writing it manually with the help of a human expert. 3rd fact: Experts tend to write reactive CEP rules, because and regardless of the level of expertise, it is nearly impossible to manually write predictive CEP rules. Combining these facts together, the CEP is---and will stay--- a reactive computing technique. Therefore in this article, we present a novel data mining-based approach that automatically learns predictive CEP rules. The approach proposes a new learning algorithm where complex patterns from multivariate time series are learned. Then at run-time, a seamless transformation into the CEP world takes place. The result is a ready-to-use CEP engine with enrolled predictive CEP rules. Many experiments on publicly-available data sets demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "bbb8b5304ac9b7b1221b0b34387cd7f7", "text": "Paramaligne Erscheinungen stellen eigenartige Symptome der Krebskrankheit dar. Sie sind meist bei den Lungeneareinomen und hier haupts/tchlieh bei den kleinzelligen Carcinomen zu verzeichnen. Es kommen vor (BA~I~TY, CouRY u. RULLI~RE, 1964) neurologische, osteoartikul~re, h/~matologische, vascul/ire, metabolische, Muskelund Hauterscheinungen. Bei den endokrinologisehen Symptomen kSnnen die bioehemisehen Untersuehungen reeht interessante Ergebnisse zeigen (CrrA~OT, 1964; AZZOPARDI u. BELLAU, 1965). Von diesen Symptomen sind die osteoartikul/~ren die h~ufigsten. In unserer Zusammensteliung von 225 kleinzelligen Lungencarcinomen haben wir 44 Kranke mit verschieden stark ent~,iekelten Ver~nderungen im Sinne der Trommelschlegel finger gefunden (Tab.).", "title": "" }, { "docid": "ecc4f1d5fb66b816daa9ae514bd58b45", "text": "In this paper, we introduce SLQS, a new entropy-based measure for the unsupervised identification of hypernymy and its directionality in Distributional Semantic Models (DSMs). SLQS is assessed through two tasks: (i.) identifying the hypernym in hyponym-hypernym pairs, and (ii.) discriminating hypernymy among various semantic relations. In both tasks, SLQS outperforms other state-of-the-art measures.", "title": "" }, { "docid": "4f43a692ff8f6aed3a3fc4521c86d35e", "text": "LEARNING OBJECTIVES\nAfter reading this article, the participant should be able to: 1. Understand the challenges in restoring volume and structural integrity in rhinoplasty. 2. Identify the appropriate uses of various autografts in aesthetic and reconstructive rhinoplasty (septal cartilage, auricular cartilage, costal cartilage, calvarial and nasal bone, and olecranon process of the ulna). 3. 
Identify the advantages and disadvantages of each of these autografts.\n\n\nSUMMARY\nThis review specifically addresses the use of autologous grafts in rhinoplasty. Autologous materials remain the preferred graft material for use in rhinoplasty because of their high biocompatibility and low risk of infection and extrusion. However, these advantages should be counterbalanced with the concerns of donor-site morbidity, graft availability, and graft resorption.", "title": "" }, { "docid": "2c734e48d2698ea11c84efa4704d5da8", "text": "Nowadays there is an increasing interest in mobile application development. However, developers often disregard, or at least significantly adapt, existing software development processes to suit their purpose, given the existing specific constraints. Such adjustments can introduce variations and new trends in existing processes that in many occasions are not shared with the scientific community since there is no official documentation, thus justifying further research. In this paper, we present a study and characterization of current mobile application development processes based on a practical experience. We consider a set of real case studies to investigate the current development processes for mobile applications used by software development companies, as well as by independent developers. The result of the present study is the identification of mobile software development processes, namely agile approaches, and also of shortcomings in current methodologies applied in industry and academy, namely the lack of informed and experienced resources to develop mobile apps.", "title": "" }, { "docid": "59c16bb2ec81dfb0e27ff47ccae0a169", "text": "A geometric dissection is a set of pieces which can be assembled in different ways to form distinct shapes. Dissections are used as recreational puzzles because it is striking when a single set of pieces can construct highly different forms. Existing techniques for creating dissections find pieces that reconstruct two input shapes exactly. Unfortunately, these methods only support simple, abstract shapes because an excessive number of pieces may be needed to reconstruct more complex, naturalistic shapes. We introduce a dissection design technique that supports such shapes by requiring that the pieces reconstruct the shapes only approximately. We find that, in most cases, a small number of pieces suffices to tightly approximate the input shapes. We frame the search for a viable dissection as a combinatorial optimization problem, where the goal is to search for the best approximation to the input shapes using a given number of pieces. We find a lower bound on the tightness of the approximation for a partial dissection solution, which allows us to prune the search space and makes the problem tractable. We demonstrate our approach on several challenging examples, showing that it can create dissections between shapes of significantly greater complexity than those supported by previous techniques.", "title": "" } ]
scidocsrr
66418795e9037d036af8379bdeb2b8c5
Towards Generic Text-Line Extraction
[ { "docid": "258601c560572a9c43823fe65481a3bf", "text": "Dewarping of documents captured with hand-held cameras in an uncontrolled environment has triggered a lot of interest in the scientific community over the last few years and many approaches have been proposed. However, there has been no comparative evaluation of different dewarping techniques so far. In an attempt to fill this gap, we have organized a page dewarping contest along with CBDAR 2007. We have created a dataset of 102 documents captured with a hand-held camera and have made it freely available online. We have prepared text-line, text-zone, and ASCII text ground-truth for the documents in this dataset. Three groups participated in the contest with their methods. In this paper we present an overview of the approaches that the participants used, the evaluation measure, and the dataset used in the contest. We report the performance of all participating methods. The evaluation shows that none of the participating methods was statistically significantly better than any other participating method.", "title": "" } ]
[ { "docid": "b5372d4cad87aab69356ebd72aed0e0b", "text": "Web content nowadays can also be accessed through new generation of Internet connected TVs. However, these products failed to change users’ behavior when consuming online content. Users still prefer personal computers to access Web content. Certainly, most of the online content is still designed to be accessed by personal computers or mobile devices. In order to overcome the usability problem of Web content consumption on TVs, this paper presents a knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies. As a use case, Wikipedia articles are automatically converted into videos. The effectiveness of the proposed system is validated empirically via opinion surveys. Fifty percent of survey users indicated that they found generated videos enjoyable and 42 % of them indicated that they would like to use our system to consume Web content on their TVs.", "title": "" }, { "docid": "6f87969a98451881a9c9da9c8a05f219", "text": "The possibility of filtering light cloud cover in satellite imagery to expose objects beneath the clouds is discussed. A model of the cloud distortion process is developed and a transformation is introduced which makes the signal and noise additive so that optimum linear filtering techniques can be applied. This homomorphic filtering can be done in the two-dimensional image plane, or it can be extended to include the spectral dimension on multispectral data. The three-dimensional filter is especially promising because clouds tend to follow a common spectral response. The noise statistics can be estimated directly from the noisy data. Results from a computer simulation and from Landsat data are shown.", "title": "" }, { "docid": "0123fd04bc65b8dfca7ff5c058d087da", "text": "The authors forward the hypothesis that social exclusion is experienced as painful because reactions to rejection are mediated by aspects of the physical pain system. The authors begin by presenting the theory that overlap between social and physical pain was an evolutionary development to aid social animals in responding to threats to inclusion. The authors then review evidence showing that humans demonstrate convergence between the 2 types of pain in thought, emotion, and behavior, and demonstrate, primarily through nonhuman animal research, that social and physical pain share common physiological mechanisms. Finally, the authors explore the implications of social pain theory for rejection-elicited aggression and physical pain disorders.", "title": "" }, { "docid": "1de1324d0f10a0e58c2adccdd8cb2c21", "text": "In keyword search advertising, many advertisers operate on a limited budget. Yet how limited budgets affect keyword search advertising has not been extensively studied. This paper offers an analysis of the generalized second-price auction with budget constraints. We find that the budget constraint may induce advertisers to raise their bids to the highest possible amount for two different motivations: to accelerate the elimination of the budget-constrained competitor as well as to reduce their own advertising cost. Thus, in contrast to the current literature, our analysis shows that both budget-constrained and unconstrained advertisers could bid more than their own valuation. 
We further extend the model to consider dynamic bidding and budget-setting decisions.", "title": "" }, { "docid": "d3d478d3e8ef3498b63e7e8803c8cfec", "text": "INTRODUCTION\nThe International Physical Activity Questionnaire (IPAQ) was developed to measure health-related physical activity (PA) in populations. The short version of the IPAQ has been tested extensively and is now used in many international studies. The present study aimed to explore the validity characteristics of the long-version IPAQ.\n\n\nSUBJECTS AND METHODS\nForty-six voluntary healthy male and female subjects (age, mean +/- standard deviation: 40.7 +/- 10.3 years) participated in the study. PA indicators derived from the long, self-administered IPAQ were compared with data from an activity monitor and a PA log book for concurrent validity, and with aerobic fitness, body mass index (BMI) and percentage body fat for construct validity.\n\n\nRESULTS\nStrong positive relationships were observed between the activity monitor data and the IPAQ data for total PA (rho = 0.55, P < 0.001) and vigorous PA (rho = 0.71, P < 0.001), but a weaker relationship for moderate PA (rho = 0.21, P = 0.051). Calculated MET-h day(-1) from the PA log book was significantly correlated with MET-h day(-1) from the IPAQ (rho = 0.67, P < 0.001). A weak correlation was observed between IPAQ data for total PA and both aerobic fitness (rho = 0.21, P = 0.051) and BMI (rho = 0.25, P = 0.009). No significant correlation was observed between percentage body fat and IPAQ variables. Bland-Altman analysis suggested that the inability of activity monitors to detect certain types of activities might introduce a source of error in criterion validation studies.\n\n\nCONCLUSIONS\nThe long, self-administered IPAQ questionnaire has acceptable validity when assessing levels and patterns of PA in healthy adults.", "title": "" }, { "docid": "0075c4714b8e7bf704381d3a3722ab59", "text": "This paper surveys the current state of the art in Natural Language Generation (nlg), defined as the task of generating text or speech from non-linguistic input. A survey of nlg is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of nlg technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in nlg and the architectures adopted in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between nlg and other areas of artificial intelligence; (c) draw attention to the challenges in nlg evaluation, relating them to similar challenges faced in other areas of nlp, with an emphasis on different evaluation methods and the relationships between them.", "title": "" }, { "docid": "094fb0a17d6358cc166e43872bc59b09", "text": "This paper is a review of the evolutionary history of deep learning models. It covers from the genesis of neural networks when associationism modeling of the brain is studied, to the models that dominate the last decade of research in deep learning like convolutional neural networks, deep belief networks, and recurrent neural networks, and extends to popular recent models like variational autoencoder and generative adversarial nets. 
In addition to a review of these models, this paper primarily focuses on the precedents of the models above, examining how the initial ideas are assembled to construct the early models and how these preliminary models are developed into their current forms. Many of these evolutionary paths last more than half a century and have a diversity of directions. For example, CNN is built on prior knowledge of biological vision system; DBN is evolved from a trade-off of modeling power and computation complexity of graphical models and many nowadays models are neural counterparts of ancient linear models. This paper reviews these evolutionary paths and offers a concise thought flow of how these models are developed, and aims to provide a thorough background for deep learning. More importantly, along with the path, this paper summarizes the gist behind these milestones and proposes many directions to guide the future research of deep learning. 1 ar X iv :1 70 2. 07 80 0v 2 [ cs .L G ] 1 M ar 2 01 7", "title": "" }, { "docid": "001104ca832b10553b28bbd713e6cbd5", "text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.", "title": "" }, { "docid": "c95980f3f1921426c20757e6020f62c2", "text": "Recent successes of deep learning have been largely driven by the ability to train large models on vast amounts of data. We believe that High Performance Computing (HPC) will play an increasingly important role in helping deep learning achieve the next level of innovation fueled by neural network models that are orders of magnitude larger and trained on commensurately more training data. We are targeting the unique capabilities of both current and upcoming HPC systems to train massive neural networks and are developing the Livermore Big Artificial Neural Network (LBANN) toolkit to exploit both model and data parallelism optimized for large scale HPC resources. This paper presents our preliminary results in scaling the size of model that can be trained with the LBANN toolkit.", "title": "" }, { "docid": "a986826041730d953dfbf9fbc1b115a6", "text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. 
These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "title": "" }, { "docid": "c25bdb567ee525e2ae3416dcf9c42717", "text": "Despite the efforts that bioengineers have exerted in designing and constructing biological processes that function according to a predetermined set of rules, their operation remains fundamentally circumstantial. The contextual situation in which molecules and single-celled or multi-cellular organisms find themselves shapes the way they interact, respond to the environment and process external information. Since the birth of the field, synthetic biologists have had to grapple with contextual issues, particularly when the molecular and genetic devices inexplicably fail to function as designed when tested in vivo. In this review, we set out to identify and classify the sources of the unexpected divergences between design and actual function of synthetic systems and analyze possible methodologies aimed at controlling, if not preventing, unwanted contextual issues.", "title": "" }, { "docid": "7c974eacb24368a0c5acfeda45d60f64", "text": "We propose a novel approach for verifying model hypotheses in cluttered and heavily occluded 3D scenes. Instead of verifying one hypothesis at a time, as done by most state-of-the-art 3D object recognition methods, we determine object and pose instances according to a global optimization stage based on a cost function which encompasses geometrical cues. Peculiar to our approach is the inherent ability to detect significantly occluded objects without increasing the amount of false positives, so that the operating point of the object recognition algorithm can nicely move toward a higher recall without sacrificing precision. Our approach outperforms state-of-the-art on a challenging dataset including 35 household models obtained with the Kinect sensor, as well as on the standard 3D object recognition benchmark dataset.", "title": "" }, { "docid": "1b646a8a45b65799bbf2e71108f420e0", "text": "Dynamic Time Warping (DTW) is a distance measure that compares two time series after optimally aligning them. DTW is being used for decades in thousands of academic and industrial projects despite the very expensive computational complexity, O(n2). These applications include data mining, image processing, signal processing, robotics and computer graphics among many others. 
In spite of all this research effort, there are many myths and misunderstanding about DTW in the literature, for example \"it is too slow to be useful\" or \"the warping window size does not matter much.\" In this tutorial, we correct these misunderstandings and we summarize the research efforts in optimizing both the efficiency and effectiveness of both the basic DTW algorithm, and of the higher-level algorithms that exploit DTW such as similarity search, clustering and classification. We will discuss variants of DTW such as constrained DTW, multidimensional DTW and asynchronous DTW, and optimization techniques such as lower bounding, early abandoning, run-length encoding, bounded approximation and hardware optimization. We will discuss a multitude of application areas including physiological monitoring, social media mining, activity recognition and animal sound processing. The optimization techniques are generalizable to other domains on various data types and problems.", "title": "" }, { "docid": "7dd62985fc9349b87b2d239e01ccd5b5", "text": "The goal of pattern-based classification of functional neuroimaging data is to link individual brain activation patterns to the experimental conditions experienced during the scans. These brain-reading analyses advance functional neuroimaging on three fronts. From a technical standpoint, pattern-based classifiers overcome fatal f laws in the status quo inferential and exploratory multivariate approaches by combining pattern-based analyses with a direct link to experimental variables. In theoretical terms, the results that emerge from pattern-based classifiers can offer insight into the nature of neural representations. This shifts the emphasis in functional neuroimaging studies away from localizing brain activity toward understanding how patterns of brain activity encode information. From a practical point of view, pattern-based classifiers are already well established and understood in many areas of cognitive science. These tools are familiar to many researchers and provide a quantitatively sound and qualitatively satisfying answer to most questions addressed in functional neuroimaging studies. Here, we examine the theoretical, statistical, and practical underpinnings of pattern-based classification approaches to functional neuroimaging analyses. Pattern-based classification analyses are well positioned to become the standard approach to analyzing functional neuroimaging data.", "title": "" }, { "docid": "af03474957035ad189d47f3bee959cda", "text": "Fully convolutional neural network (FCN) has been dominating the game of face detection task for a few years with its congenital capability of sliding-window-searching with shared kernels, which boiled down all the redundant calculation, and most recent state-of-the-art methods such as Faster-RCNN, SSD, YOLO and FPN use FCN as their backbone. So here comes one question: Can we find a universal strategy to further accelerate FCN with higher accuracy, so could accelerate all the recent FCN-based methods? To analyze this, we decompose the face searching space into two orthogonal directions, 'scale' and 'spatial'. Only a few coordinates in the space expanded by the two base vectors indicate foreground. So if FCN could ignore most of the other points, the searching space and false alarm should be significantly boiled down. 
Based on this philosophy, a novel method named scale estimation and spatial attention proposal (S2AP) is proposed to pay attention to some specific scales in image pyramid and valid locations in each scales layer. Furthermore, we adopt a masked-convolution operation based on the attention result to accelerate FCN calculation. Experiments show that FCN-based method RPN can be accelerated by about 4× with the help of S2AP and masked-FCN and at the same time it can also achieve the state-of-the-art on FDDB, AFW and MALF face detection benchmarks as well.", "title": "" }, { "docid": "a76a1aea4861dfd1e1f426ce55747b2a", "text": "Which topics spark the most heated debates in social media? Identifying these topics is a first step towards creating systems which pierce echo chambers. In this paper, we perform a systematic methodological study of controversy detection using social media network structure and content.\n Unlike previous work, rather than identifying controversy in a single hand-picked topic and use domain-specific knowledge, we focus on comparing topics in any domain. Our approach to quantifying controversy is a graph-based three-stage pipeline, which involves (i) building a conversation graph about a topic, which represents alignment of opinion among users; (ii) partitioning the conversation graph to identify potential sides of the controversy; and (iii)measuring the amount of controversy from characteristics of the~graph.\n We perform an extensive comparison of controversy measures, as well as graph building approaches and data sources. We use both controversial and non-controversial topics on Twitter, as well as other external datasets. We find that our new random-walk-based measure outperforms existing ones in capturing the intuitive notion of controversy, and show that content features are vastly less helpful in this task.", "title": "" }, { "docid": "b4c25df52a0a5f6ab23743d3ca9a3af2", "text": "Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.", "title": "" }, { "docid": "48a0e75b97fdaa734f033c6b7791e81f", "text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. 
Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.", "title": "" }, { "docid": "17deb6c21da616a73a6daedf971765c3", "text": "Recent approaches to causal discovery based on Boolean satisfiability solvers have opened new opportunities to consider search spaces for causal models with both feedback cycles and unmeasured confounders. However, the available methods have so far not been able to provide a principled account of how to handle conflicting constraints that arise from statistical variability. Here we present a new approach that preserves the versatility of Boolean constraint solving and attains a high accuracy despite the presence of statistical errors. We develop a new logical encoding of (in)dependence constraints that is both well suited for the domain and allows for faster solving. We represent this encoding in Answer Set Programming (ASP), and apply a state-of-theart ASP solver for the optimization task. Based on different theoretical motivations, we explore a variety of methods to handle statistical errors. Our approach currently scales to cyclic latent variable models with up to seven observed variables and outperforms the available constraintbased methods in accuracy.", "title": "" } ]
scidocsrr
17a1d277de79c3df576bbf050f43ed94
Optimizations and Analysis of BSP Graph Processing Models on Public Clouds
[ { "docid": "bf56462f283d072c4157d5c5665eead3", "text": "Various scientific computations have become so complex, and thus computation tools play an important role. In this paper, we explore the state-of-the-art framework providing high-level matrix computation primitives with MapReduce through the case study approach, and demonstrate these primitives with different computation engines to show the performance and scalability. We believe the opportunity for using MapReduce in scientific computation is even more promising than the success to date in the parallel systems literature.", "title": "" } ]
[ { "docid": "e03eee5fc75ce7561e22d214e0dacb8b", "text": "As deep neural networks become more complex and input data-sets grow larger, it can take days or even weeks to train a deep neural network to the desired accuracy. Therefore, distributed Deep Learning at a massive scale is a critical capability, since it offers the potential to reduce the training time from weeks to hours. In this paper, we present a software-hardware co-optimized distributed Deep Learning system that can achieve near-linear scaling up to hundreds of GPUs. The core algorithm is a multi-ring communication pattern that provides a good tradeoff between latency and bandwidth and adapts to a variety of system configurations. The communication algorithm is implemented as a library for easy use. This library has been integrated into Tensorflow, Caffe, and Torch. We train Resnet-101 on Imagenet 22K with 64 IBM Power8 S822LC servers (256 GPUs) in about 7 hours to an accuracy of 33.8% validation accuracy. Microsoft’s ADAM [10] and Google’s DistBelief [11] results did not reach 30% validation accuracy for Imagenet 22K. Compared to Facebook’s recent paper [1] on 256 GPU training, we use a different communication algorithm, and our combined software and hardware system offers better communication overhead for Resnet50. A PowerAI DDL enabled version of Torch completed 90 epochs of training on Resnet 50 for 1K classes in 50 minutes using 64 IBM Power8 S822LC servers (256 GPUs).", "title": "" }, { "docid": "710e81da55d50271b55ac9a4f2d7f986", "text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9d3c4cef17b6736fa9c940051c642e29", "text": "A zero-knowledge interactive proof is a protocol by which Alice can convince a polynomially-bounded Bob of the truth of some theorem without giving him any hint as to how the proof might proceed. Under cryptographic assumptions, we give a general technique for achieving this goal for every problem in NP. This extends to a presumably larger class, which combines the powers of non-determinism and randomness. 
Our protocol is powerful enough to allow Alice to convince Bob of theorems for which she does not even have a proof: it is enough for Alice to convince herself probabilistically of a theorem, perhaps thanks to her knowledge of some trap-door information, in order for her to be able to convince Bob as well, without compromising the trap-door in any way.", "title": "" }, { "docid": "2c2574e1eb29ad45bedf346417c85e2d", "text": "Technology has shown great promise in providing access to textual information for visually impaired people. Optical Braille Recognition (OBR) allows people with visual impairments to read volumes of typewritten documents with the help of flatbed scanners and OBR software. This project looks at developing a system to recognize an image of embossed Arabic Braille and then convert it to text. It particularly aims to build fully functional Optical Arabic Braille Recognition system. It has two main tasks, first is to recognize printed Braille cells, and second is to convert them to regular text. Converting Braille to text is not simply a one to one mapping, because one cell may represent one symbol (alphabet letter, digit, or special character), two or more symbols, or part of a symbol. Moreover, multiple cells may represent a single symbol.", "title": "" }, { "docid": "8e03643ffcab0fbbcaabb32d5503e653", "text": "This paper is an in-depth review on silicon implementations of threshold logic gates that covers several decades. In this paper, we will mention early MOS threshold logic solutions and detail numerous very-large-scale integration (VLSI) implementations including capacitive (switched capacitor and floating gate with their variations), conductance/current (pseudo-nMOS and output-wired-inverters, including a plethora of solutions evolved from them), as well as many differential solutions. At the end, we will briefly mention other implementations, e.g., based on negative resistance devices and on single electron technologies.", "title": "" }, { "docid": "1aa73c76f121f01a5ee1a7ced788841f", "text": "In recent years Educational Data Mining (EDM) has emerged as a new field of research due to the development of several statistical approaches to explore data in educational context. One such application of EDM is early prediction of student results. This is necessary in higher education for identifying the “weak” students so that some form of remediation may be organized for them. In this paper a set of attributes are first defined for a group of students majoring in Computer Science in some undergraduate colleges in Kolkata. Since the numbers of attributes are reasonably high, feature selection algorithms are applied on the data set to reduce the number of features. Five classes of Machine Learning Algorithm (MLA) are then applied on this data set and it was found that the best results were obtained with the decision tree class of algorithms. It was also found that the prediction results obtained with this model are comparable with other previously developed models.", "title": "" }, { "docid": "64f15815e4c1c94c3dfd448dec865b85", "text": "Modern software systems are typically large and complex, making comprehension of these systems extremely difficult. Experienced programmers comprehend code by seamlessly processing synonyms and other word relations. Thus, we believe that automated comprehension and software tools can be significantly improved by leveraging word relations in software. 
In this paper, we perform a comparative study of six state of the art, English-based semantic similarity techniques and evaluate their effectiveness on words from the comments and identifiers in software. Our results suggest that applying English-based semantic similarity techniques to software without any customization could be detrimental to the performance of the client software tools. We propose strategies to customize the existing semantic similarity techniques to software, and describe how various program comprehension tools can benefit from word relation information.", "title": "" }, { "docid": "e82013b8c8d2e9e48bfdd106df18c042", "text": "The fourth Emotion Recognition in the Wild (EmotiW) challenge is a grand challenge in the ACM International Conference on Multimodal Interaction 2016, Tokyo. EmotiW is a series of benchmarking and competition effort for researchers working in the area of automatic emotion recognition in the wild. The fourth EmotiW has two sub-challenges: Video based emotion recognition (VReco) and Group-level emotion recognition (GReco). The VReco sub-challenge is being run for the fourth time and GReco is a new sub-challenge this year.", "title": "" }, { "docid": "bc93abd474fe56d744d51317deda03d1", "text": "Land surface temperature (LST) is one of the most important variables measured by satellite remote sensing. Public domain data are available from the newly operational Landsat-8 Thermal Infrared Sensor (TIRS). This paper presents an adjustment of the split window algorithm (SWA) for TIRS that uses atmospheric transmittance and land surface emissivity (LSE) as inputs. Various alternatives for estimating these SWA inputs are reviewed, and a sensitivity analysis of the SWA to misestimating the input parameters is performed. The accuracy of the current development was assessed using simulated Modtran data. The root mean square error (RMSE) of the simulated LST was calculated as 0.93 °C. This SWA development is leading to progress in the determination of LST by Landsat-8 TIRS.", "title": "" }, { "docid": "f418441593da8db1dcbaa922cccc21fa", "text": "Sentiment analysis, as a heatedly-discussed research topic in the area of information extraction, has attracted more attention from the beginning of this century. With the rapid development of the Internet, especially the rising popularity of Web2.0 technology, network user has become not only the content maker, but also the receiver of information. Meanwhile, benefiting from the development and maturity of the technology in natural language processing and machine learning, we can widely employ sentiment analysis on subjective texts. In this paper, we propose a supervised learning method on fine-grained sentiment analysis to meet the new challenges by exploring new research ideas and methods to further improve the accuracy and practicability of sentiment analysis. First, this paper presents an improved strength computation method of sentiment word. Second, this paper introduces a sentiment information joint recognition model based on Conditional Random Fields and analyzes the related knowledge of the basic and semantic features. Finally, the experimental results show that our approach and a demo system are feasible and effective.", "title": "" }, { "docid": "8aadc690d86ad4c015a4a82a32336336", "text": "The complexities of various search algorithms are considered in terms of time, space, and cost of the solution paths. Brute-force search methods covered include breadth-first search (BFS), depth-first search (DFS), 
depth-first iterative-deepening (DFID), and bi-directional search; heuristic search methods include best-first search, A∗, and IDA∗. The issue of storing information on disk instead of main memory and the solving of the 15-puzzle are also discussed.", "title": "" }, { "docid": "e61c73a9afdef24842cc0234db573376", "text": "Searching persons in large-scale image databases with the query of natural language description has important applications in video surveillance. Existing methods mainly focused on searching persons with image-based or attribute-based queries, which have major limitations for a practical usage. In this paper, we study the problem of person search with natural language description. Given the textual description of a person, the algorithm of the person search is required to rank all the samples in the person database then retrieve the most relevant sample corresponding to the queried description. Since there is no person dataset or benchmark with textual description available, we collect a large-scale person description dataset with detailed natural language annotations and person samples from various sources, termed as CUHK Person Description Dataset (CUHK-PEDES). A wide range of possible models and baselines have been evaluated and compared on the person search benchmark. An Recurrent Neural Network with Gated Neural Attention mechanism (GNA-RNN) is proposed to establish the state-of-the art performance on person search.", "title": "" }, { "docid": "b21e817d95b11119b9dbafca89a69262", "text": "This paper identifies and analyzes BitCoin features which may facilitate Bitcoin to become a global currency, as well as characteristics which may impede the use of BitCoin as a medium of exchange, a unit of account and a store of value, and compares BitCoin with standard currencies with respect to the main functions of money. Among all analyzed BitCoin features, the extreme price volatility stands out most clearly compared to standard currencies. In order to understand the reasons for such extreme price volatility, we attempt to identify drivers of BitCoin price formation and estimate their importance econometrically. We apply time-series analytical mechanisms to daily data for the 2009-2014 period. Our estimation results suggest that BitCoin attractiveness indicators are the strongest drivers of BitCoin price followed by market forces. In contrast, macro-financial developments do not determine BitCoin price in the long-run. Our findings suggest that as long as BitCoin price will be mainly driven by speculative investments, BitCoin will not be able to compete with standard currencies.", "title": "" }, { "docid": "64be267c0cf28fbdf2ab7a3670d461d2", "text": "Jump flooding algorithm (JFA) is an interesting way to utilize the graphics processing unit to efficiently compute Voronoi diagrams and distance transforms in 2D discrete space. This paper presents three novel variants of JFA. They focus on different aspects of JFA: the first variant can further reduce the errors of JFA; the second variant can greatly increase the speed of JFA; and the third variant enables JFA to compute Voronoi diagrams in 3D space in a slice-by-slice manner, without a high end graphics processing unit. These variants are orthogonal to each other. 
In other words, it is possible to combine any two or all of them together.", "title": "" }, { "docid": "93ee971eaa99d055426e842e38454c3b", "text": "Advanced persistent threats (APTs) pose a grave threat to cyberspace, because they deactivate all the conventional cyber defense mechanisms. This paper addresses the issue of evaluating the security of the cyber networks under APTs. For this purpose, a dynamic model capturing the APT-based cyber-attack-defense processes is proposed. Theoretical analysis shows that this model admits a globally stable equilibrium. On this basis, a new security metric known as the equilibrium security is suggested. The impact of several factors on the equilibrium security is revealed through theoretical analysis or computer simulation. These findings contribute to the development of feasible security solutions against APTs.", "title": "" }, { "docid": "7cfc2866218223ba6bd56eb1f10ce29f", "text": "This paper deals with prediction of anopheles number, the main vector of malaria risk, using environmental and climate variables. The variables selection is based on an automatic machine learning method using regression trees, and random forests combined with stratified two levels cross validation. The minimum threshold of variables importance is accessed using the quadratic distance of variables importance while the optimal subset of selected variables is used to perform predictions. Finally the results revealed to be qualitatively better, at the selection, the prediction, and the CPU time point of view than those obtained by GLM-Lasso method.", "title": "" }, { "docid": "14fff9cf166ef6c4b1012b5dc35ad5c1", "text": "Average word embeddings are a common baseline for more sophisticated sentence embedding techniques. However, they typically fall short of the performances of more complex models such as InferSent. Here, we generalize the concept of average word embeddings to power mean word embeddings. We show that the concatenation of different types of power mean word embeddings considerably closes the gap to state-of-the-art methods monolingually and substantially outperforms these more complex techniques crosslingually. In addition, our proposed method outperforms different recently proposed baselines such as SIF and Sent2Vec by a solid margin, thus constituting a much harder-to-beat monolingual baseline. Our data and code are publicly available.1", "title": "" }, { "docid": "372b6f7e224032c8f49d954dfc41b558", "text": "Carcinoma of the breast is very rare in childhood, accounting for less than 1% of all childhood malignancies and is especially rare in boys. Delay in diagnosis and treatment in children with breast cancer may occur because surgeons are very reluctant to perform biopsies on the developing breast, since these can cause future deformity. We report a case of male secretory breast carcinoma in a 13-year-old boy. Radical mastectomy was performed followed by chemotherapy. The patient is free of disease after 10 years. Secretory breast carcinoma (SBC) is the commonest type of breast carcinoma in children. In this article, we discuss the diagnosis and treatment options for breast cancer among children as well as features of SBC, based on a literature review.", "title": "" }, { "docid": "a0f304d04c9cf68d74164577d6c46228", "text": "Recent work on Winograd-based convolution allows for a great reduction of computational complexity, but existing implementations are limited to 2D data and a single kernel size of 3 by 3. 
They can achieve only slightly better, and often worse performance than better optimized, direct convolution implementations. We propose and implement an algorithm for N-dimensional Winograd-based convolution that allows arbitrary kernel sizes and is optimized for manycore CPUs. Our algorithm achieves high hardware utilization through a series of optimizations. Our experiments show that on modern ConvNets, our optimized implementation, is on average more than 3 x, and sometimes 8 x faster than other state-of-the-art CPU implementations on an Intel Xeon Phi manycore processors. Moreover, our implementation on the Xeon Phi achieves competitive performance for 2D ConvNets and superior performance for 3D ConvNets, compared with the best GPU implementations.", "title": "" }, { "docid": "dee922c700479ea808e59fd323193e48", "text": "In this article we present a novel mapping system that robustly generates highly accurate 3D maps using an RGB-D camera. Our approach does not require any further sensors or odometry. With the availability of low-cost and light-weight RGB-D sensors such as the Microsoft Kinect, our approach applies to small domestic robots such as vacuum cleaners as well as flying robots such as quadrocopters. Furthermore, our system can also be used for free-hand reconstruction of detailed 3D models. In addition to the system itself, we present a thorough experimental evaluation on a publicly available benchmark dataset. We analyze and discuss the influence of several parameters such as the choice of the feature descriptor, the number of visual features, and validation methods. The results of the experiments demonstrate that our system can robustly deal with challenging scenarios such as fast cameras motions and feature-poor environments while being fast enough for online operation. Our system is fully available as open-source and has already been widely adopted by the robotics community.", "title": "" } ]
scidocsrr
af6d04859e8295b9cd615b0dcbcb6b30
Job recommender systems: A survey
[ { "docid": "9a79af1c226073cc129087695295a4e5", "text": "This paper presents an effective approach for resume information extraction to support automatic resume management and routing. A cascaded information extraction (IE) framework is designed. In the first pass, a resume is segmented into a consecutive blocks attached with labels indicating the information types. Then in the second pass, the detailed information, such as Name and Address, are identified in certain blocks (e.g. blocks labelled with Personal Information), instead of searching globally in the entire resume. The most appropriate model is selected through experiments for each IE task in different passes. The experimental results show that this cascaded hybrid model achieves better F-score than flat models that do not apply the hierarchical structure of resumes. It also shows that applying different IE models in different passes according to the contextual structure is effective.", "title": "" } ]
[ { "docid": "6d405b0f6b1381cec5e1d001e1102404", "text": "Consensus is an important building block for building replicated systems, and many consensus protocols have been proposed. In this paper, we investigate the building blocks of consensus protocols and use these building blocks to assemble a skeleton that can be configured to produce, among others, three well-known consensus protocols: Paxos, Chandra-Toueg, and Ben-Or. Although each of these protocols specifies only one quorum system explicitly, all also employ a second quorum system. We use the skeleton to implement a replicated service, allowing us to compare the performance of these consensus protocols under various workloads and failure scenarios.", "title": "" }, { "docid": "44ba90b77cb6bc324fbeebe096b93cd0", "text": "With the growth of fandom population, a considerable amount of broadcast sports videos have been recorded, and a lot of research has focused on automatically detecting semantic events in the recorded video to develop an efficient video browsing tool for a general viewer. However, a professional sportsman or coach wonders about high level semantics in a different perspective, such as the offensive or defensive strategy performed by the players. Analyzing tactics is much more challenging in a broadcast basketball video than in other kinds of sports videos due to its complicated scenes and varied camera movements. In this paper, by developing a quadrangle candidate generation algorithm and refining the model fitting score, we ameliorate the court-based camera calibration technique to be applicable to broadcast basketball videos. Player trajectories are extracted from the video by a CamShift-based tracking method and mapped to the real-world court coordinates according to the calibrated results. The player position/trajectory information in the court coordinates can be further analyzed for professional-oriented applications such as detecting wide open event, retrieving target video clips based on trajectories, and inferring implicit/explicit tactics. Experimental results show the robustness of the proposed calibration and tracking algorithms, and three practicable applications are introduced to address the applicability of our system.", "title": "" }, { "docid": "4381dfbb321feaca3299605b76836e93", "text": "This paper deals with the design of a Model Predictive Control (MPC) approach for the altitude and attitude stabilization and tracking of a Quad Tilt Wing (QTW) type of Unmanned Aerial Vehicles (UAVs). This Vertical Take-Off and Landing (VTOL) aircraft can take-off and landing vertically such as helicopters and is convertible to the fixed-wing configuration for horizontal flight using a tilting mechanism for its rotors/wings. A nonlinear dynamical model, relating to the vertical flight mode of this QTW, is firstly developed using the Newton-Euler formalism, in describing the aerodynamic forces and moments acting on the aircraft. This established model, linearized around an equilibrium operating point, is then used to design a MPC approach for the stabilization and tracking of the QTW attitude and altitude. In order to show the performance superiority of the proposed MPC technique, a comparison with the known Linear Quadratic (LQ) strategy is carried out. 
All simulation results, obtained for both MPC and LQ approaches, are presented and discussed.", "title": "" }, { "docid": "a679d37b88485cf71569f9aeefefbac5", "text": "Incrementality is ubiquitous in human-human interaction and beneficial for human-computer interaction. It has been a topic of research in different parts of the NLP community, mostly with focus on the specific topic at hand even though incremental systems have to deal with similar challenges regardless of domain. In this survey, I consolidate and categorize the approaches, identifying similarities and differences in the computation and data, and show trade-offs that have to be considered. A focus lies on evaluating incremental systems because the standard metrics often fail to capture the incremental properties of a system and coming up with a suitable evaluation scheme is non-trivial. Title and Abstract in German Inkrementelle Sprachverarbeitung: Herausforderungen, Strategien und Evaluation Inkrementalität ist allgegenwärtig in Mensch-Mensch-Interaktiton und hilfreich für MenschComputer-Interaktion. In verschiedenen Teilen der NLP-Community wird an Inkrementalität geforscht, zumeist fokussiert auf eine konkrete Aufgabe, obwohl sich inkrementellen Systemen domänenübergreifend ähnliche Herausforderungen stellen. In diesem Überblick trage ich Ansätze zusammen, kategorisiere sie und stelle Ähnlichkeiten und Unterschiede in Berechnung und Daten sowie nötige Abwägungen vor. Ein Fokus liegt auf der Evaluierung inkrementeller Systeme, da Standardmetriken of nicht in der Lage sind, die inkrementellen Eigenschaften eines Systems einzufangen und passende Evaluationsschemata zu entwickeln nicht einfach ist.", "title": "" }, { "docid": "efd6856e774b258858c43d7746639317", "text": "In this paper, we propose a vision-based robust vehicle distance estimation algorithm that supports motorists to rapidly perceive relative distance of oncoming and passing vehicles thereby minimizing the risk of hazardous circumstances. And, as it is expected, the silhouettes of background stationary objects may appear in the motion scene, which pop-up due to motion of the camera, which is mounted on dashboard of the host vehicle. To avoid the effect of false positive detection of stationary objects and to determine the ego motion a new Morphological Strip Matching Algorithm and Recursive Stencil Mapping Algorithm(MSM-RSMA)is proposed. A new series of stencils are created where non-stationary objects are taken off after detecting stationary objects by applying a shape matching technique to each image strip pair. Then the vertical shift is estimated recursively with new stencils with identified stationary background objects. Finally, relative comparison of known templates are used to estimate the distance, which is further certified by value obtained for vertical shift. We apply analysis of relative dimensions of bounding box of the detected vehicle with relevant templates to calculate the relative distance. We prove that our method is capable of providing a comparatively fast distance estimation while keeping its robustness in different environments changes.", "title": "" }, { "docid": "2d4cb6980cf8716699bdffca6cfed274", "text": "Advances in laser technology have progressed so rapidly during the past decade that successful treatment of many cutaneous concerns and congenital defects, including vascular and pigmented lesions, tattoos, scars and unwanted haircan be achieved. 
The demand for laser surgery has increased as a result of the relative ease with low incidence of adverse postoperative sequelae. In this review, the currently available laser systems with cutaneous applications are outlined to identify the various types of dermatologic lasers available, to list their clinical indications and to understand the possible side effects.", "title": "" }, { "docid": "9229a48b8df014b896abb60548759e36", "text": "Given that a user interface interacts with users, a critical factor to be considered in improving the usability of an e-learning user interface is user-friendliness. Affordances enable users to more easily approach and engage in learning tasks because they strengthen positive, activating emotions. However, most studies on affordances limit themselves to an examination of the affordance attributes of e-learning tools rather than determining how to increase such attributes. A design approach is needed to improve affordances for e-learning user interfaces. Using Maier and Fadel’s Affordance-Based Design methodology as a framework, the researchers in this study identified affordance factors, suggested affordance design strategies for the user interface, and redesigned an affordable user interface prototype. The identified affordance factors and strategies were reviewed and validated in Delphi meetings whose members were teachers, e-learning specialists, and educational researchers. The effects of the redesigned user interface on usability were evaluated by fifth-grade participating in the experimental study. The results show that affordances led users to experience positive emotions, and as a result, use the interface effectively, efficiently, and satisfactorily. Implications were discussed for designing strategies to enhance the affordances of the user interfaces of e-learning and other learning technology tools.", "title": "" }, { "docid": "f1220465c3ac6da5a2edc96b5979d4be", "text": "We consider Complexity Leadership Theory [Uhl-Bien, M., Marion, R., & McKelvey, B. (2007). Complexity Leadership Theory: Shifting leadership from the industrial age to the knowledge era. The Leadership Quarterly.] in contexts of bureaucratic forms of organizing to describe how adaptive dynamics can work in combination with administrative functions to generate emergence and change in organizations. Complexity leadership approaches are consistent with the central assertion of the meso argument that leadership is multi-level, processual, contextual, and interactive. In this paper we focus on the adaptive function, an interactive process between adaptive leadership (an agentic behavior) and complexity dynamics (nonagentic social dynamics) that generates emergent outcomes (e.g., innovation, learning, adaptability) for the firm. Propositions regarding the actions of complexity leadership in bureaucratic forms of organizing are offered. © 2009 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "807a94db483f0ca72d3096e4897d2c76", "text": "A typical scene contains many different objects that, because of the limited processing capacity of the visual system, compete for neural representation. The competition among multiple objects in visual cortex can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that, both in the absence and in the presence of visual stimulation, biasing signals due to selective attention can modulate neural activity in visual cortex in several ways. 
Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals derives from a network of areas in frontal and parietal cortex.", "title": "" }, { "docid": "43cd3b5ac6e2e2f240f4feb44be65b99", "text": "Executive Overview Toyota’s Production System (TPS) is based on “lean” principles including a focus on the customer, continual improvement and quality through waste reduction, and tightly integrated upstream and downstream processes as part of a lean value chain. Most manufacturing companies have adopted some type of “lean initiative,” and the lean movement recently has gone beyond the shop floor to white-collar offices and is even spreading to service industries. Unfortunately, most of these efforts represent limited, piecemeal approaches—quick fixes to reduce lead time and costs and to increase quality—that almost never create a true learning culture. We outline and illustrate the management principles of TPS that can be applied beyond manufacturing to any technical or service process. It is a true systems approach that effectively integrates people, processes, and technology—one that must be adopted as a continual, comprehensive, and coordinated effort for change and learning across the organization.", "title": "" }, { "docid": "7e3cdead80a1d17b064b67ddacd5d8c1", "text": "BACKGROUND\nThe aim of the study was to evaluate the relationship between depression and Internet addiction among adolescents.\n\n\nSAMPLING AND METHOD\nA total of 452 Korean adolescents were studied. First, they were evaluated for their severity of Internet addiction with consideration of their behavioral characteristics and their primary purpose for computer use. Second, we investigated correlations between Internet addiction and depression, alcohol dependence and obsessive-compulsive symptoms. Third, the relationship between Internet addiction and biogenetic temperament as assessed by the Temperament and Character Inventory was evaluated.\n\n\nRESULTS\nInternet addiction was significantly associated with depressive symptoms and obsessive-compulsive symptoms. Regarding biogenetic temperament and character patterns, high harm avoidance, low self-directedness, low cooperativeness and high self-transcendence were correlated with Internet addiction. In multivariate analysis, among clinical symptoms depression was most closely related to Internet addiction, even after controlling for differences in biogenetic temperament.\n\n\nCONCLUSIONS\nThis study reveals a significant association between Internet addiction and depressive symptoms in adolescents. This association is supported by temperament profiles of the Internet addiction group. The data suggest the necessity of the evaluation of the potential underlying depression in the treatment of Internet-addicted adolescents.", "title": "" }, { "docid": "a361214a42392cbd0ba3e0775d32c839", "text": "We propose a design methodology to exploit adaptive nanodevices (memristors), virtually immune to their variability. Memristors are used as synapses in a spiking neural network performing unsupervised learning. The memristors learn through an adaptation of spike timing dependent plasticity. Neurons' threshold is adjusted following a homeostasis-type rule. System level simulations on a textbook case show that performance can compare with traditional supervised networks of similar complexity. 
They also show the system can retain functionality with extreme variations of various memristors' parameters, thanks to the robustness of the scheme, its unsupervised nature, and the power of homeostasis. Additionally the network can adjust to stimuli presented with different coding schemes.", "title": "" }, { "docid": "22f633957b40d9027aceff93a68964b5", "text": "Most of previous image denoising methods focus on additive white Gaussian noise (AWGN). However, the real-world noisy image denoising problem has drawn increasing attention with the advancing of computer vision techniques. In order to promote the study on this problem while implementing the concurrent real-world image denoising datasets, we construct a new benchmark dataset which contains comprehensive real-world noisy images of different natural scenes. These images are captured by different cameras under different camera settings. We evaluate the different denoising methods on our new dataset as well as previous datasets. Extensive experimental results demonstrate that the recently proposed methods designed specifically for realistic noise removal based on sparse or low rank theories achieve better denoising performance and are more robust than other competing methods, and the newly proposed dataset is more challenging. The constructed dataset of real photographs is publicly available at https://github.com/csjunxu/PolyUDataset for researchers to investigate new real-world image denoising methods. We will add more analysis on the noise statistics in the real photographs of our new dataset in the next version of this article.", "title": "" }, { "docid": "da0b5fc36cd36b1a3aa7ebb9441e3e15", "text": "In Steganography, the total message will be invisible into a cover media such as text, audio, video, and image in which attackers don't have any idea about the original message that the media contain and which algorithm use to embed or extract it. In this paper, the proposed technique has focused on Bitmap image as it is uncompressed and convenient than any other image format to implement LSB Steganography method. For better security AES cryptography technique has also been used in the proposed method. Before applying the Steganography technique, AES cryptography will change the secret message into cipher text to ensure two layer security of the message. In the proposed technique, a new Steganography technique is being developed to hide large data in Bitmap image using filtering based algorithm, which uses MSB bits for filtering purpose. This method uses the concept of status checking for insertion and retrieval of message. This method is an improvement of Least Significant Bit (LSB) method for hiding information in images. It is being predicted that the proposed method will able to hide large data in a single image retaining the advantages and discarding the disadvantages of the traditional LSB method. Various sizes of data are stored inside the images and the PSNR are also calculated for each of the images tested. Based on the PSNR value, the Stego image has higher PSNR value as compared to other method. Hence the proposed Steganography technique is very efficient to hide the secret information inside an image.", "title": "" }, { "docid": "4dffb7bcd82bcc2fbb7291233e4f8f88", "text": "In the following paper, we present a framework for quickly training 2D object detectors for robotic perception. Our method can be used by robotics practitioners to quickly (under 30 seconds per object) build a large-scale real-time perception system. 
In particular, we show how to create new detectors on the fly using large-scale internet image databases, thus allowing a user to choose among thousands of available categories to build a detection system suitable for the particular robotic application. Furthermore, we show how to adapt these models to the current environment with just a few in-situ images. Experiments on existing 2D benchmarks evaluate the speed, accuracy, and flexibility of our system.", "title": "" }, { "docid": "1ab0308539bc6508b924316b39a963ca", "text": "Daily wafer fabrication in semiconductor foundry depends on considerable metrology operations for tool-quality and process-quality assurance. The metrology operations required a lot of metrology tools, which increase FAB's investment. Also, these metrology operations will increase cycle time of wafer process. Metrology operations do not bring any value added to wafer but only quality assurance. This article provides a new method denoted virtual metrology (VM) to utilize sensor data collected from 300 mm FAB's tools to forecast quality data of wafers and tools. This proposed method designs key steps to establish a VM control model based on neural networks and to develop and deploy applications following SEMI EDA (equipment data acquisition) standards.", "title": "" }, { "docid": "36f37bdf7da56a57f29d026dca77e494", "text": "Fifth generation (5G) systems are expected to introduce a revolution in the ICT domain with innovative networking features, such as device-to-device (D2D) communications. Accordingly, in-proximity devices directly communicate with each other, thus avoiding routing the data across the network infrastructure. This innovative technology is deemed to be also of high relevance to support effective heterogeneous objects interconnection within future IoT ecosystems. However, several open challenges shall be solved to achieve a seamless and reliable deployment of proximity-based communications. In this paper, we give a contribution to trust and security enhancements for opportunistic hop-by-hop forwarding schemes that rely on cellular D2D communications. To tackle the presence of malicious nodes in the network, reliability and reputation notions are introduced to model the level of trust among involved devices. To this aim, social-awareness of devices is accounted for, to better support D2D-based multihop content uploading. Our simulative results in small-scale IoT environments, demonstrate that data loss due to malicious nodes can be drastically reduced and gains in uploading time be reached with the proposed solution.", "title": "" }, { "docid": "ea525c15c1cbb4a4a716e897287fd770", "text": "This study explored student teachers’ cognitive presence and learning achievements by integrating the SOP Model in which self-study (S), online group discussion (O) and double-stage presentations (P) were implemented in the flipped classroom. The research was conducted at a university in Taiwan with 31 student teachers. Preand post-worksheets measuring knowledge of educational issues were administered before and after group discussion. Quantitative content analysis and behavior sequential analysis were used to evaluate cognitive presence, while a paired-samples t-test analyzed learning achievement. 
The results showed that the participants had the highest proportion of “Exploration,” the second largest rate of “Integration,” but rarely reached “Resolution.” The participants’ achievements were greatly enhanced using the SOP Model in terms of the scores of the pre- and post-worksheets. Moreover, the groups with a higher proportion of “Integration” (I) and “Resolution” (R) performed best in the post-worksheets and were also the most progressive groups. Both high- and low-rated groups had significant correlations between the “I” and “R” phases, with “I”  “R” in the low-rated groups but “R”  “I” in the high-rated groups. The instructional design of the SOP Model can be a reference for future pedagogical implementations in the higher educational context.", "title": "" }, { "docid": "69f6b21da3fa48f485fc612d385e7869", "text": "Recurrent neural networks (RNN) have been successfully applied for recognition of cursive handwritten documents, both in English and Arabic scripts. Ability of RNNs to model context in sequence data like speech and text makes them a suitable candidate to develop OCR systems for printed Nabataean scripts (including Nastaleeq for which no OCR system is available to date). In this work, we have presented the results of applying RNN to printed Urdu text in Nastaleeq script. Bidirectional Long Short Term Memory (BLSTM) architecture with Connectionist Temporal Classification (CTC) output layer was employed to recognize printed Urdu text. We evaluated BLSTM networks for two cases: one ignoring the character's shape variations and the second is considering them. The recognition error rate at character level for first case is 5.15% and for the second is 13.6%. These results were obtained on synthetically generated UPTI dataset containing artificially degraded images to reflect some real-world scanning artifacts along with clean images. Comparison with shape-matching based method is also presented.", "title": "" }, { "docid": "11a140232485cb8bcc4914b8538ab5ea", "text": "We explain why we feel that the comparison between Common Lisp and Fortran in a recent article by Fateman et al. in this journal is not entirely fair.", "title": "" } ]
scidocsrr
937765e465ed05decbcf71da3c584d90
Generalized parallel CRC computation on FPGA
[ { "docid": "b60555d52e5a8772ba128b184ec6de73", "text": "Standardized 32-bit Cyclic Redundancy Codes provide fewer bits of guaranteed error detection than they could, achieving a Hamming Distance (HD) of only 4 for maximum-length Ethernet messages, whereas HD=6 is possible. Although research has revealed improved codes, exploring the entire design space has previously been computationally intractable, even for special-purpose hardware. Moreover, no CRC polynomial has yet been found that satisfies an emerging need to attain both HD=6 for 12K bit messages and HD=4 for message lengths beyond 64K bits. This paper presents results from the first exhaustive search of the 32-bit CRC design space. Results from previous research are validated and extended to include identifying all polynomials achieving a better HD than the IEEE 802.3 CRC-32 polynomial. A new class of polynomials is identified that provides HD=6 up to nearly 16K bit and HD=4 up to 114K bit message lengths, providing the best achievable design point that maximizes error detection for both legacy and new applications, including potentially iSCSI and application-implemented error checks.", "title": "" }, { "docid": "0cb490aacaf237bdade71479151ab8d2", "text": "This brief presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. A comparison on commonly used generator polynomials between the proposed design and previously proposed parallel CRC algorithms shows that the proposed design can increase the speed by up to 25% and control or even reduce hardware cost", "title": "" } ]
[ { "docid": "1d32c84e539e10f99b92b54f2f71970b", "text": "Stories are the most natural ways for people to deal with information about the changing world. They provide an efficient schematic structure to order and relate events according to some explanation. We describe (1) a formal model for representing storylines to handle streams of news and (2) a first implementation of a system that automatically extracts the ingredients of a storyline from news articles according to the model. Our model mimics the basic notions from narratology by adding bridging relations to timelines of events in relation to a climax point. We provide a method for defining the climax score of each event and the bridging relations between them. We generate a JSON structure for any set of news articles to represent the different stories they contain and visualize these stories on a timeline with climax and bridging relations. This visualization helps inspecting the validity of the generated structures.", "title": "" }, { "docid": "d157d7b6e1c5796b6d7e8fedf66e81d8", "text": "Intrusion detection for computer network systems becomes one of the most critical tasks for network administrators today. It has an important role for organizations, governments and our society due to its valuable resources on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusion. Besides , anomaly detection in network security is aim to distinguish between illegal or malicious events and normal behavior of network systems. Anomaly detection can be considered as a classification problem where it builds models of normal network behavior, which it uses to detect new patterns that significantly deviate from the model. Most of the current research on anomaly detection is based on the learning of normally and anomaly behaviors. They do not take into account the previous, recent events to detect the new incoming one. In this paper, we propose a real time collective anomaly detection model based on neural network learning and feature operating. Normally a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and it is capable of predicting several time steps ahead of an input. In our approach, a LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, the observation of prediction errors from a certain number of time steps is now proposed as a new idea for detecting collective anomalies. The prediction errors from a number of the latest time steps above a threshold will indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it is possible to offer reliable and efficient for collective anomaly detection.", "title": "" }, { "docid": "97c8806b425bc7448baf904ae01b16e1", "text": "Consumers or the Customers are valuable assets for any organisation as they are the ultimate destination of any products or services. Since, they are the ultimate end users of any product or services, thus, the success of any organisation depends upon the satisfaction of the consumers, if not they will switch to other brands. Due to this reason, the satisfaction of the consumers becomes priority for any organisations. For satisfying the consumers, one has to know about what consumer buy, why they buy it, when they buy it, how and how often they buy it and what made them to switch to other brands. 
The present paper is an attempt to study the shampoo buying patterns among the individuals. The study also examines the various factors which influence the consumers to buy a shampoo of particular brand and reasons for their switching to other brands.", "title": "" }, { "docid": "a5a7e3fe9d6eaf8fc25e7fd91b74219e", "text": "We present in this paper a new approach that uses supervised machine learning techniques to improve the performances of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improving it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.", "title": "" }, { "docid": "aba674bc0b1d66f901ece0617dee115c", "text": "An appropriate special case of a transform developed by J. Radon in 1917 is shown to have the major properties of the Hough transform which is useful for finding line segments in digital pictures. Such an observation may be useful in further efforts to generalize the Hough transform. Techniques for applying the Radon transform to lines and pixels are developed through examples, and the appropriate generalization to arbitrary curves is discussed.", "title": "" }, { "docid": "b813635e27731d5ca25597d7a5984fc0", "text": "Glioblastoma multiforme (GBM) represents an aggressive tumor type with poor prognosis. The majority of GBM patients cannot be cured. There is high willingness among patients for the compassionate use of non-approved medications, which might occasionally lead to profound toxicity. A 65-year-old patient with glioblastoma multiforme (GBM) has been treated with radiochemotherapy including temozolomide (TMZ) after surgery. The treatment outcome was evaluated as stable disease with a tendency to slow tumor progression. In addition to standard medication (ondansetron, valproic acid, levetiracetam, lorazepam, clobazam), the patient took the antimalarial drug artesunate (ART) and a decoction of Chinese herbs (Coptis chinensis, Siegesbeckia orientalis, Artemisia scoparia, Dictamnus dasycarpus). In consequence, the clinical status deteriorated. Elevated liver enzymes were noted with peak values of 238 U/L (GPT/ALAT), 226 U/L (GOT/ASAT), and 347 U/L (γ-GT), respectively. After cessation of ART and Chinese herbs, the values returned back to normal and the patient felt well again. In the literature, hepatotoxicity is well documented for TMZ, but is very rare for ART. Among the Chinese herbs used, Dictamnus dasycarpus has been reported to induce liver injury. Additional medication included valproic acid and levetiracetam, which are also reported to exert hepatotoxicity. 
While all drugs alone may bear a minor risk for hepatotoxicity, the combination treatment might have caused increased liver enzyme activities. It can be speculated that the combination of these drugs caused liver injury. We conclude that the compassionate use of ART and Chinese herbs is not recommended during standard radiochemotherapy with TMZ for GBM.", "title": "" }, { "docid": "945553f360d7f569f15d249dbc5fa8cd", "text": "One of the main issues in service collaborations among business partners is the possible lack of trust among them. A promising approach to cope with this issue is leveraging on blockchain technology by encoding with smart contracts the business process workflow. This brings the benefits of trust decentralization, transparency, and accountability of the service composition process. However, data in the blockchain are public, implying thus serious consequences on confidentiality and privacy. Moreover, smart contracts can access data outside the blockchain only through Oracles, which might pose new confidentiality risks if no assumptions are made on their trustworthiness. For these reasons, in this paper, we are interested in investigating how to ensure data confidentiality during business process execution on blockchain even in the presence of an untrusted Oracle.", "title": "" }, { "docid": "1d3192e66e042e67dabeae96ca345def", "text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.", "title": "" }, { "docid": "a8164a657a247761147c6012fd5442c9", "text": "Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that typically we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. 
We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.", "title": "" }, { "docid": "1a0a299c53924e08eb767512de230f44", "text": "Binary code reutilization is the process of automatically identifying the interface and extracting the instructions and data dependencies of a code fragment from an executable program, so that it is selfcontained and can be reused by external code. Binary code reutilization is useful for a number of security applications, including reusing the proprietary cryptographic or unpacking functions from a malware sample and for rewriting a network dialog. In this paper we conduct the first systematic study of automated binary code reutilization and its security applications. The main challenge in binary code reutilization is understanding the code fragment’s interface. We propose a novel technique to identify the prototype of an undocumented code fragment directly from the program’s binary, without access to source code or symbol information. Further, we must also extract the code itself from the binary so that it is self-contained and can be easily reused in another program. We design and implement a tool that uses a combination of dynamic and static analysis to automatically identify the prototype and extract the instructions of an assembly function into a form that can be reused by other C code. The extracted function can be run independently of the rest of the program’s functionality and shared with other users. We apply our approach to scenarios that include extracting the encryption and decryption routines from malware samples, and show that these routines can be reused by a network proxy to decrypt encrypted traffic on the network. This allows the network proxy to rewrite the malware’s encrypted traffic by combining the extracted encryption and decryption functions with the session keys and the protocol grammar. We also show that we can reuse a code fragment from an unpacking function for the unpacking routine for a different sample of the same family, even if the code fragment is not a complete function.", "title": "" }, { "docid": "6e690c5aa54b28ba23d9ac63db4cc73a", "text": "The Topic Detection and Tracking (TDT) evaluation program has included a \"cluster detection\" task since its inception in 1996. Systems were required to process a stream of broadcast news stories and partition them into non-overlapping clusters. A system's effectiveness was measured by comparing the generated clusters to \"truth\" clusters created by human annotators. Starting in 2003, TDT is moving to a more realistic model that permits overlapping clusters (stories may be on more than one topic) and encourages the creation of a hierarchy to structure the relationships between clusters (topics). We explore a range of possible evaluation models for this modified TDT clustering task to understand the best approach for mapping between the human-generated \"truth\" clusters and a much richer hierarchical structure. We demonstrate that some obvious evaluation techniques fail for degenerate cases. For a few others we attempt to develop an intuitive sense of what the evaluation numbers mean. 
We settle on some approaches that incorporate a strong balance between cluster errors (misses and false alarms) and the distance it takes to travel between stories within the hierarchy.", "title": "" }, { "docid": "5b41a7c287b54b16e9d791cb62d7aa5a", "text": "Recent evidence demonstrates that children are selective in their social learning, preferring to learn from a previously accurate speaker than from a previously inaccurate one. We examined whether children assessing speakers' reliability take into account how speakers achieved their prior accuracy. In Study 1, when faced with two accurate informants, 4- and 5-year-olds (but not 3-year-olds) were more likely to seek novel information from an informant who had previously given the answers unaided than from an informant who had always relied on help from a third party. Similarly, in Study 2, 4-year-olds were more likely to trust the testimony of an unaided informant over the testimony provided by an assisted informant. Our results indicate that when children reach around 4 years of age, their selective trust extends beyond simple generalizations based on informants' past accuracy to a more sophisticated selectivity that distinguishes between truly knowledgeable informants and merely accurate informants who may not be reliable in the long term.", "title": "" }, { "docid": "f44ad33cfe612c99d5b9ac52e3bb4c70", "text": "Kongetira, Poonacha. MSEE., Purdue University, August 1994. Modelling of Selective Epitaxial Growth (SEG) and Epitaxial Lateral Overgrowth (ELO) of Silicon in SiH2Cl2-HCl-H2 system. Major Professor: Gerold W. Neudeck. A semi-empirical model for the growth rate of selective epitaxial silicon (SEG) in the Dichlorosilane-HCl-H2 system that represents the experimental data has been presented. All epitaxy runs were done using a Gemini-I LPCVD pancake reactor. Dichlorosilane was used as the source gas and hydrogen as the carrier gas. Hydrogen Chloride (HCl) was used to ensure that no nucleation took place on the oxide. The growth rate expression was considered to be the sum of a growth term dependent on the partial pressures of Dichlorosilane and hydrogen, and an etch term that varies as the partial pressure of HCl. The growth and etch terms were found to have an Arrhenius relation with temperature, with activation energies of 52kcal/mol and 36kcal/mol respectively. Good agreement was obtained with experimental data. The variation of the selectivity threshold was correctly predicted, which had been a problem with earlier models for SEG growth rates. SEG/ELO Silicon was grown from 920-970°C at 40 and 150 torr pressures for a variety of HCl concentrations. In addition previous data collected by our research group at 820-1020°C and 40-150 torr were used in the model.", "title": "" }, { "docid": "560a19017dcc240d48bb879c3165b3e1", "text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. 
These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fea34b4a4b0b2dcdacdc57dce66f31de", "text": "Deep neural networks have become the state-of-art methods in many fields of machine learning recently. Still, there is no easy way how to choose a network architecture which can significantly influence the network performance. This work is a step towards an automatic architecture design. We propose an algorithm for an optimization of a network architecture based on evolution strategies. The algorithm is inspired by and designed directly for the Keras library [3] which is one of the most common implementations of deep neural networks. The proposed algorithm is tested on MNIST data set and the prediction of air pollution based on sensor measurements, and it is compared to several fixed architectures and support vector regression.", "title": "" }, { "docid": "2dee247b24afc7ddba44b312c0832bc1", "text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For an effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step toward this end by characterizing the operational performance of a tier-1 cellular network in the U.S. during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. 
Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 s shorter radio resource control timeouts as compared with routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events, and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.", "title": "" }, { "docid": "959a8602cb7292a7daf341d2b7614492", "text": "This paper presents a calibration method for eye-in-hand systems in order to estimate the hand-eye and the robot-world transformations. The estimation takes place in terms of a parametrization of a stochastic model. In order to perform optimally, a metric on the group of the rigid transformations SE(3) and the corresponding error model are proposed for nonlinear optimization. This novel metric works well with both common formulations AX=XB and AX=ZB, and makes use of them in accordance with the nature of the problem. The metric also adapts itself to the system precision characteristics. The method is compared in performance to earlier approaches", "title": "" }, { "docid": "a238ba310374a78d9c0e09bee5aaf123", "text": "Automatically constructed knowledge bases (KB’s) are a powerful asset for search, analytics, recommendations and data integration, with intensive use at big industrial stakeholders. Examples are the knowledge graphs for search engines (e.g., Google, Bing, Baidu) and social networks (e.g., Facebook), as well as domain-specific KB’s (e.g., Bloomberg, Walmart). These achievements are rooted in academic research and community projects. The largest general-purpose KB’s with publicly accessible contents are BabelNet, DBpedia, Wikidata, and Yago. They contain millions of entities, organized in hundreds to hundred thousands of semantic classes, and billions of relational facts on entities. These and other knowledge and data resources are interlinked at the entity level, forming the Web of Linked Open Data.", "title": "" }, { "docid": "8ee5a9dde6f919637618787f6ffcc777", "text": "Microbial infection initiates complex interactions between the pathogen and the host. Pathogens express several signature molecules, known as pathogen-associated molecular patterns (PAMPs), which are essential for survival and pathogenicity. PAMPs are sensed by evolutionarily conserved, germline-encoded host sensors known as pathogen recognition receptors (PRRs). Recognition of PAMPs by PRRs rapidly triggers an array of anti-microbial immune responses through the induction of various inflammatory cytokines, chemokines and type I interferons. These responses also initiate the development of pathogen-specific, long-lasting adaptive immunity through B and T lymphocytes. Several families of PRRs, including Toll-like receptors (TLRs), RIG-I-like receptors (RLRs), NOD-like receptors (NLRs), and DNA receptors (cytosolic sensors for DNA), are known to play a crucial role in host defense. In this review, we comprehensively review the recent progress in the field of PAMP recognition by PRRs and the signaling pathways activated by PRRs.", "title": "" } ]
scidocsrr
461f28025dae78b5cfed0db5c17e62db
Weakly supervised learning of actions from transcripts
[ { "docid": "6a73df1df45d9dbed6c1250583fdbc50", "text": "Actions are spatiotemporal patterns. Similar to the sliding window-based object detection, action detection finds the reoccurrences of such spatiotemporal patterns through pattern matching, by handling cluttered and dynamic backgrounds and other types of action variations. We address two critical issues in pattern matching-based action detection: 1) the intrapattern variations in actions, and 2) the computational efficiency in performing action pattern search in cluttered scenes. First, we propose a discriminative pattern matching criterion for action classification, called naive Bayes mutual information maximization (NBMIM). Each action is characterized by a collection of spatiotemporal invariant features and we match it with an action class by measuring the mutual information between them. Based on this matching criterion, action detection is to localize a subvolume in the volumetric video space that has the maximum mutual information toward a specific action class. A novel spatiotemporal branch-and-bound (STBB) search algorithm is designed to efficiently find the optimal solution. Our proposed action detection method does not rely on the results of human detection, tracking, or background subtraction. It can handle action variations such as performing speed and style variations as well as scale changes well. It is also insensitive to dynamic and cluttered backgrounds and even to partial occlusions. The cross-data set experiments on action detection, including KTH, CMU action data sets, and another new MSR action data set, demonstrate the effectiveness and efficiency of the proposed multiclass multiple-instance action detection method.", "title": "" } ]
[ { "docid": "c1e1e9db4f6abaffe421b0e2ca4cec2f", "text": "One can't deny the effectiveness of video arcade games in reachipg users! Just loop at the number of quarters pushed into the slots, the time spent by people of widely differing abilities, and the number of repeat encounters with the systems. At least part of the success is due to the ease of getting started (the first play of the game gets one comfortable with the procedures), the high degree of visualization of controls and results, and the responsiveness overall. Other factors will be taken up by the panelists.Review of the home computer market shows what can be accomplished by an easy-to-use accounting aid through advertising store demonstrations, and word of mouth. Visicalc has sold over a million dollars! Attendees will have an opportunity to try some of these impressive applications before and after the session.", "title": "" }, { "docid": "6fac5265abac9f07d355dc794522a061", "text": "The deployment of cryptocurrencies in e-commerce has reached a significant number of transactions and continuous increases in monetary circulation; nevertheless, they face two impediments: a lack of awareness of the technological utility, and a lack of trust among consumers. E-commerce carried out through social networks expands its application to a new paradigm called social commerce. Social commerce uses the content generated within social networks to attract new consumers and influence their behavior. The objective of this paper is to analyze the role played by social media in increasing trust and intention to use cryptocurrencies in making electronic payments. It develops a model that combines constructs from social support theory, social commerce, and the technology acceptance model. This model is evaluated using the partial least square analysis. The obtained results show that social commerce increases the trust and intention to use cryptocurrencies. However, mutual support among participants does not generate sufficient trust to adequately promote the perceived usefulness of cryptocurrencies. This research provides a practical tool for analyzing how collaborative relationships that emerge in social media can influence or enhance the adoption of a new technology in terms of perceived trust and usefulness. Furthermore, it provides a significant contribution to consumer behavior research by applying the social support theory to the adoption of new information technologies. These theoretical and practical contributions are detailed in the final section of the paper.", "title": "" }, { "docid": "6ad07075bdeff6e662b3259ba39635be", "text": "We discuss a new deblurring problems in this paper. Focus measurements play a fundamental role in image processing techniques. Most traditional methods neglect spatial information in the frequency domain. Therefore, this study analyzed image data in the frequency domain to determine the value of spatial information. but instead misleading noise reduction results . We found that the local feature is not always a guide for noise reduction. This finding leads to a new method to measure the image edges in focus deblurring. We employed an all-in-focus measure in the frequency domain, based on the energy level of frequency components. We also used a multi-circle enhancement model to analyze this spatial information to provide a more accurate method for measuring images. We compared our results with those using other methods in similar studies. 
Findings demonstrate the effectiveness of our new method.", "title": "" }, { "docid": "9787ae39c27f9cfad2dbd29779bb5f36", "text": "Compressive sensing (CS) techniques offer a framework for the detection and allocation of sparse signals with a reduced number of samples. Today, modern radar systems operate with high bandwidths—demanding high sample rates according to the Shannon–Nyquist theorem—and a huge number of single elements for phased array consumption and costs of radar systems. There is only a small number of publications addressing the application of CS to radar, leaving several open questions. This paper addresses some aspects as a further step to CS-radar by presenting generic system architectures and implementation considerations. It is not the aim of this paper to investigate numerically efficient algorithms but to point to promising applications as well as arising problems. Three possible applications are considered: pulse compression, radar imaging, and air space surveillance with array antennas. Some simulation results are presented and enriched by the evaluation of real data acquired by an experimental radar system of Fraunhofer FHR. & 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7cf8e2555cfccc1fc091272559ad78d7", "text": "This paper presents a multimodal emotion recognition method that uses a feature-level combination of three-dimensional (3D) geometric features (coordinates, distance and angle of joints), kinematic features such as velocity and displacement of joints, and features extracted from daily behavioral patterns such as frequency of head nod, hand wave, and body gestures that represent specific emotions. Head, face, hand, body, and speech data were captured from 15 participants using an infrared sensor (Microsoft Kinect). The 3D geometric and kinematic features were developed using raw feature data from the visual channel. Human emotional behavior-based features were developed using inter-annotator agreement and commonly observed expressions, movements and postures associated to specific emotions. The features from each modality and the behavioral pattern-based features (head shake, arm retraction, body forward movement depicting anger) were combined to train the multimodal classifier for the emotion recognition system. The classifier was trained using 10-fold cross validation and support vector machine (SVM) to predict six basic emotions. The results showed improvement in emotion recognition accuracy (The precision increased by 3.28% and the recall rate by 3.17%) when the 3D geometric, kinematic, and human behavioral pattern-based features were combined for multimodal emotion recognition using supervised classification.", "title": "" }, { "docid": "2cbd47c2e7a1f68bd84d18413db26ea3", "text": "Horizontal gene transfer (HGT) refers to the acquisition of foreign genes by organisms. The occurrence of HGT among bacteria in the environment is assumed to have implications in the risk assessment of genetically modified bacteria which are released into the environment. First, introduced genetic sequences from a genetically modified bacterium could be transferred to indigenous micro-organisms and alter their genome and subsequently their ecological niche. Second, the genetically modified bacterium released into the environment might capture mobile genetic elements (MGE) from indigenous micro-organisms which could extend its ecological potential. 
Thus, for a risk assessment it is important to understand the extent of HGT and genome plasticity of bacteria in the environment. This review summarizes the present state of knowledge on HGT between bacteria as a crucial mechanism contributing to bacterial adaptability and diversity. In view of the use of GM crops and microbes in agricultural settings, in this mini-review we focus particularly on the presence and role of MGE in soil and plant-associated bacteria and the factors affecting gene transfer.", "title": "" }, { "docid": "f9c2457b4ba8da2011120e0834a6101d", "text": "The advent of new touch technologies and the wide spread of smart mobile phones made humans embrace technology more and depend on it extensively in their lives. With new communication technologies and smart phones the world really became a small village. Although these technologies provided many positive features, we cannot neglect the negative influences inherited in these technologies. One of the major negative sides of smart phones is their side effects on human health. This paper will address this issue by exploring the exiting literature related to the negative side of smart phones on human health and behavior by investigating the literature related to three major dimensions: health, addiction and behavior. The third section will describe the research method used. The fourth section will discuss the analysis side followed by a section on the conclusions and future work. Index Terms Mobile phones, smart phone, touch screen, health effects, ergonomics, addiction, behavior.", "title": "" }, { "docid": "717d1c31ac6766fcebb4ee04ca8aa40f", "text": "We present an incremental maintenance algorithm for leapfrog triejoin. The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.", "title": "" }, { "docid": "65931785cf431d55387c2e3f63a75635", "text": "Mindfulness-as a state, trait, process, type of meditation, and intervention has proven to be beneficial across a diverse group of psychological disorders as well as for general stress reduction. Yet, there remains a lack of clarity in the operationalization of this construct, and underlying mechanisms. Here, we provide an integrative theoretical framework and systems-based neurobiological model that explains the mechanisms by which mindfulness reduces biases related to self-processing and creates a sustainable healthy mind. Mindfulness is described through systematic mental training that develops meta-awareness (self-awareness), an ability to effectively modulate one's behavior (self-regulation), and a positive relationship between self and other that transcends self-focused needs and increases prosocial characteristics (self-transcendence). This framework of self-awareness, -regulation, and -transcendence (S-ART) illustrates a method for becoming aware of the conditions that cause (and remove) distortions or biases. The development of S-ART through meditation is proposed to modulate self-specifying and narrative self-networks through an integrative fronto-parietal control network. Relevant perceptual, cognitive, emotional, and behavioral neuropsychological processes are highlighted as supporting mechanisms for S-ART, including intention and motivation, attention regulation, emotion regulation, extinction and reconsolidation, prosociality, non-attachment, and decentering. 
The S-ART framework and neurobiological model is based on our growing understanding of the mechanisms for neurocognition, empirical literature, and through dismantling the specific meditation practices thought to cultivate mindfulness. The proposed framework will inform future research in the contemplative sciences and target specific areas for development in the treatment of psychological disorders.", "title": "" }, { "docid": "ce22073b8dbc3a910fa8811a2a8e5c87", "text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.", "title": "" }, { "docid": "1bc001dc0e4adb2f1c7bc736e2d105f7", "text": "Web personalization has quickly moved from an added value feature to a necessity, particularly for large information services and sites that generate revenue by selling products. Web personalization can be viewed as using user preferences profiles to dynamically serve customized content to particular users. User preferences may be obtained explicitly, or by passive observation of users over time as they interact with the system. Principal elements of Web personalization include modeling of Web objects (pages, etc.) and subjects (users), matching between and across objects and/or subjects, and determination of the set of actions to be recommended for personalization. Existing approaches used by many Web-based companies, as well as approaches based on collaborative filtering (e.g., GroupLens [HKBR99] and Firefly [SM95]), rely heavily on human input for determining the personalization actions. This type of input is often a subjective description of the users by the users themselves, and thus prone to biases. Furthermore, the profile is static, and its performance degrades over time as the profile ages. Recently, a number of approaches have been developed dealing with specific aspects of Web usage mining for the purpose of automatically discovering user profiles. For example, Perkowitz and Etzioni [PE98] proposed the idea of optimizing the structure of Web sites based co-occurrence patterns of pages within usage data for the site. Schechter et al [SKS98] have developed techniques for using path profiles of users to predict future HTTP requests, which can be used for network and proxy caching. Spiliopoulou et al [SF99], Cooley et al [CMS99], and Buchner and Mulvenna [BM99] have applied data mining techniques to extract usage patterns from Web logs, for the purpose of deriving marketing intelligence. 
Shahabi et al [SZA97], Yan et al [YJGD96], and Nasraoui et al [NFJK99] have proposed clustering of user sessions to predict future user behavior. In this paper we describe an approach to usage-based Web personalization taking into account both the offline tasks related to the mining of usage data, and the online process of automatic Web page customization based on the mined knowledge. Specifically, we propose an effective technique for capturing common user profiles based on association-rule discovery and usage-based clustering. We also propose techniques for combining this knowledge with the current status of an ongoing Web activity to perform realtime personalization. Finally, we provide an experimental evaluation of the proposed techniques using real Web usage data.", "title": "" }, { "docid": "609651c6c87b634814a81f38d9bfbc67", "text": "Resistance training (RT) has shown the most promise in reducing/reversing effects of sarcopenia, although the optimum regime specific for older adults remains unclear. We hypothesized myofiber hypertrophy resulting from frequent (3 days/wk, 16 wk) RT would be impaired in older (O; 60-75 yr; 12 women, 13 men), sarcopenic adults compared with young (Y; 20-35 yr; 11 women, 13 men) due to slowed repair/regeneration processes. Myofiber-type distribution and cross-sectional area (CSA) were determined at 0 and 16 wk. Transcript and protein levels of myogenic regulatory factors (MRFs) were assessed as markers of regeneration at 0 and 24 h postexercise, and after 16 wk. Only Y increased type I CSA 18% (P < 0.001). O showed smaller type IIa (-16%) and type IIx (-24%) myofibers before training (P < 0.05), with differences most notable in women. Both age groups increased type IIa (O, 16%; Y, 25%) and mean type II (O, 23%; Y, 32%) size (P < 0.05). Growth was generally most favorable in young men. Percent change scores on fiber size revealed an age x gender interaction for type I fibers (P < 0.05) as growth among Y (25%) exceeded that of O (4%) men. Myogenin and myogenic differentiation factor D (MyoD) mRNAs increased (P < 0.05) in Y and O, whereas myogenic factor (myf)-5 mRNA increased in Y only (P < 0.05). Myf-6 protein increased (P < 0.05) in both Y and O. The results generally support our hypothesis as 3 days/wk training led to more robust hypertrophy in Y vs. O, particularly among men. 
However, this differential hypertrophy adaptation was not explained by age variation in MRF expression.", "title": "" }, { "docid": "4b544bb34c55e663cdc5f0a05201e595", "text": "BACKGROUND\nThis study seeks to examine a multidimensional model of student motivation and engagement using within- and between-network construct validation approaches.\n\n\nAIMS\nThe study tests the first- and higher-order factor structure of the motivation and engagement wheel and its corresponding measurement tool, the Motivation and Engagement Scale - High School (MES-HS; formerly the Student Motivation and Engagement Scale).\n\n\nSAMPLE\nThe study draws upon data from 12,237 high school students from 38 Australian high schools.\n\n\nMETHODS\nThe hypothesized 11-factor first-order structure and the four-factor higher-order structure, their relationship with a set of between-network measures (class participation, enjoyment of school, educational aspirations), factor invariance across gender and year-level, and the effects of age and gender are examined using confirmatory factor analysis and structural equation modelling.\n\n\nRESULTS\nIn terms of within-network validity, (1) the data confirm that the 11-factor and higher-order factor models of motivation and engagement are good fitting and (2) multigroup tests showed invariance across gender and year levels. In terms of between-network validity, (3) correlations with enjoyment of school, class participation and educational aspirations are in the hypothesized directions, and (4) girls reflect a more adaptive pattern of motivation and engagement, and year-level findings broadly confirm hypotheses that middle high school students seem to reflect a less adaptive pattern of motivation and engagement.\n\n\nCONCLUSION\nThe first- and higher-order structures hold direct implications for educational practice and directions for future motivation and engagement research.", "title": "" }, { "docid": "f5690b7f5aad7508221d3023b5d8812c", "text": "Modern applications especially cloud-based or cloud-centric applications always have many components running in the large distributed environment with complex interactions. They are vulnerable to suffer from performance or availability problems due to the highly dynamic runtime environment such as resource hogs, configuration changes and software bugs. In order to make efficient software maintenance and provide some hints to software bugs, we build a system named CauseInfer, a low cost and blackbox cause inference system without instrumenting the application source code. CauseInfer can automatically construct a two layered hierarchical causality graph and infer the causes of performance problems along the causal paths in the graph with a series of statistical methods. According to the experimental evaluation in the controlled environment, we find out CauseInfer can achieve an average 80% precision and 85% recall in a list of top two causes to identify the root causes, higher than several state-of-the-art methods and a good scalability to scale up in the distributed systems.", "title": "" }, { "docid": "aa1f74013cb1f74e3f0a0046645a7d00", "text": "In this paper, we propose a novel fully convolutional two-stream fusion network (FCTSFN) for interactiveimage segmentation. The proposed network includes two sub-networks: a two-stream late fusion network (TSLFN) that predicts the foreground at a reduced resolution, and a multi-scale refining network (MSRN) that refines the foreground at full resolution. 
The TSLFN includes two distinct deep streams followed by a fusion network. The intuition is that, since user interactions are more direct information on foreground/background than the image itself, the two-stream structure of the TSLFN reduces the number of layers between the pure user interaction features and the network output, allowing the user interactions to have a more direct impact on the segmentation result. The MSRN fuses the features from different layers of TSLFN with different scales, in order to seek the local to global information on the foreground to refine the segmentation result at full resolution. We conduct comprehensive experiments on four benchmark datasets. The results show that the proposed network achieves competitive performance compared to current state-of-the-art interactive image segmentation methods. 1.", "title": "" }, { "docid": "623e62e756321d14bb552a1ef364e4a5", "text": "With the wide deployment of smart card automated fare collection (SCAFC) systems, public transit agencies have been benefiting from huge volume of transit data, a kind of sequential data, collected every day. Yet, improper publishing and use of transit data could jeopardize passengers' privacy. In this paper, we present our solution to transit data publication under the rigorous differential privacy model for the Société de transport de Montréal (STM). We propose an efficient data-dependent yet differentially private transit data sanitization approach based on a hybrid-granularity prefix tree structure. Moreover, as a post-processing step, we make use of the inherent consistency constraints of a prefix tree to conduct constrained inferences, which lead to better utility. Our solution not only applies to general sequential data, but also can be seamlessly extended to trajectory data. To our best knowledge, this is the first paper to introduce a practical solution for publishing large volume of sequential data under differential privacy. We examine data utility in terms of two popular data analysis tasks conducted at the STM, namely count queries and frequent sequential pattern mining. Extensive experiments on real-life STM datasets confirm that our approach maintains high utility and is scalable to large datasets.", "title": "" }, { "docid": "f046a1be5645d9d359f545699704e76b", "text": "This paper presents a novel way for detecting sign board and text recognition to aid navigation in indoor environment. Using text as a landmark for vision based navigation is still an active research and till date all algorithms developed for detection and recognition of texts for an indoor navigation have a lot of room to make it applicable on real time. Our proposed method is an extension of the work and will aid in the on going research. We have achieved an accuracy of 80% and were able to extract sign boards text from actual scenes.", "title": "" }, { "docid": "b06c18822c119b72fe6d55bb58478a2b", "text": "The Sphinx-4 speech recognition system is the latest addition to Carnegie Mellon University's repository of Sphinx speech recognition systems. It has been jointly designed by Carnegie Mellon University, Sun Microsystems Laboratories and Mitsubishi Electric Research Laboratories. It is differently designed from the earlier Sphinx systems in terms of modularity, flexibility and algorithmic aspects. It uses newer search strategies, is universal in its acceptance of various kinds of grammars and language models, types of acoustic models and feature streams. 
Algorithmic innovations included in the system design enable it to incorporate multiple information sources in an elegant manner. The system is entirely developed on the JavaTM platform and is highly portable, flexible, and easier to use with multithreading. This paper describes the salient features of the Sphinx-4 decoder and includes preliminary performance measures relating to speed and accuracy.", "title": "" }, { "docid": "86910fd866dd4945d044bd6057fe2010", "text": "Context: The literature is rich in examples of both successful and failed global software development projects. However, practitioners do not have the time to wade through the many recommendations to work out which ones apply to them. To this end, we developed a prototype Decision Support System (DSS) for Global Teaming (GT), with the goal of making research results available to practitioners. Aims: We want the system we build to be based on the real needs of practitioners: the end users of our system. Therefore the aim of this study is to assess the usefulness and usability of our proof-of-concept in order to create a tool that is actually used by practitioners. Method: Twelve experts in GSD evaluated our system. Each individual participant tested the system and completed a short usability questionnaire. Results: Feedback on the prototype DSS was positive. All experts supported the concept, although many suggested areas that could be improved. Both expert practitioners and researchers participated, providing different perspectives on what we need to do to improve the system. Conclusion: Involving both practitioners (users) and researchers in the evaluation elicited a range of useful feedback, providing useful insights that might not have emerged had we focused on one or the other group. However, even when we implement recommended changes, we still need to persuade practitioner to adopt the new tool.", "title": "" }, { "docid": "5b67f07b5ce37c0dd1bb9be1af6c6005", "text": "Anomaly detection is the identification of items or observations which deviate from an expected pattern in a dataset. This paper proposes a novel real time anomaly detection framework for dynamic resource scheduling of a VMware-based cloud data center. The framework monitors VMware performance stream data (e.g. CPU load, memory usage, etc.). Hence, the framework continuously needs to collect data and make decision without any delay. We have used Apache Storm, distributed framework for handling performance stream data and making prediction without any delay. Storm is chosen over a traditional distributed framework (e.g., Hadoop and MapReduce, Mahout) that is good for batch processing. An incremental clustering algorithm to model benign characteristics is incorporated in our storm-based framework. During continuous incoming test stream, if the model finds data deviated from its benign behavior, it considers that as an anomaly. We have shown effectiveness of our framework by providing real-time complex analytic functionality over stream data.", "title": "" } ]
scidocsrr
bc3e56dd3ae4888bbbee1759080c5a26
Glyph-based Visualization: Foundations, Design Guidelines, Techniques and Applications
[ { "docid": "2ddf3153ec8432d226c419748b5b4828", "text": "Visualized data often have dubious origins and quality. Different forms of uncertainty and errors are also introduced as the data are derived, transformed, interpolated, and finally rendered. This paper surveys uncertainty visualization techniques that present data so that users are made aware of the locations and degree of uncertainties in their data. The techniques include adding glyphs, adding geometry, modifying geometry, modifying attributes, animation, sonification, and psychovisual approaches. We present our results in uncertainty visualization for environmental visualization, surface interpolation, global illumination with radiosity, flow visualization, and figure animation. We also present a classification of the possibilities in uncertainty visualization and locate our contributions within this classification.", "title": "" } ]
[ { "docid": "0bad228f0b86be12f6714241a8ecee69", "text": "Local Binary Pattern (LBP) is a kind of discriminative texture descriptor for characterization of face patterns. However, the value of LBP operator is greatly changed under non-monotonic intensity transformations. As a result, the recognition performance of LBP descriptor for face images with significant illumination variations is severely dropped. In this paper, a novel illumination-invariant face recognition algorithm that applies LBP descriptor is proposed to overcome the performance degradation of LBP descriptor caused by varying illumination conditions. In our proposed algorithm, illumination variation is first compensated by the so called Dynamic Morphological Quotient Image (DMQI) which generates quotient image after morphological filtering. Then, powerful LBP operator is applied to the DMQI to derive a distinctive and robust representation for face patterns in images. We compared the recognition accuracy of the proposed algorithm with that of traditional PCA-based, LDA-based and raw LBP-based method on Yale face dataset B which contains face images with severe lighting variations. Evaluation result demonstrates that our proposed algorithm outperforms the PCA-based, LDA-based and raw LBP-based method by 22.5%, 17.4%, and 5%, respectively, in terms of the recognition accuracy on the first rank. Another advantage of our algorithm is its computational simplicity. It only takes 0.48 seconds on a Pentium IV 3.0G CPU, so it is very suitable for real-time manipulation.", "title": "" }, { "docid": "df354ff3f0524d960af7beff4ec0a68b", "text": "The paper presents digital beamforming for Passive Coherent Location (PCL) radar. The considered circular antenna array is a part of a passive system developed at Warsaw University of Technology. The system is based on FM radio transmitters. The array consists of eight half-wave dipoles arranged in a circular array covering 360deg with multiple beams. The digital beamforming procedure is presented, including mutual coupling correction and antenna pattern optimization. The results of field calibration and measurements are also shown.", "title": "" }, { "docid": "a1444497114eadc1c90c1cfb85852641", "text": "For several years it has been argued that neural synchronisation is crucial for cognition. The idea that synchronised temporal patterns between different neural groups carries information above and beyond the isolated activity of these groups has inspired a shift in focus in the field of functional neuroimaging. Specifically, investigation into the activation elicited within certain regions by some stimulus or task has, in part, given way to analysis of patterns of co-activation or functional connectivity between distal regions. Recently, the functional connectivity community has been looking beyond the assumptions of stationarity that earlier work was based on, and has introduced methods to incorporate temporal dynamics into the analysis of connectivity. In particular, non-invasive electrophysiological data (magnetoencephalography/electroencephalography (MEG/EEG)), which provides direct measurement of whole-brain activity and rich temporal information, offers an exceptional window into such (potentially fast) brain dynamics. In this review, we discuss challenges, solutions, and a collection of analysis tools that have been developed in recent years to facilitate the investigation of dynamic functional connectivity using these imaging modalities. 
Further, we discuss the applications of these approaches in the study of cognition and neuropsychiatric disorders. Finally, we review some existing developments that, by using realistic computational models, pursue a deeper understanding of the underlying causes of non-stationary connectivity.", "title": "" }, { "docid": "0cb3cdb1e44fd9171156ad46fdf2d2ed", "text": "In this paper, from the viewpoint of scene understanding, a three-layer Bayesian hierarchical framework (BHF) is proposed for robust vacant parking space detection. In practice, the challenges of vacant parking space inference come from dramatic luminance variations, shadow effect, perspective distortion, and the inter-occlusion among vehicles. By using a hidden labeling layer between an observation layer and a scene layer, the BHF provides a systematic generative structure to model these variations. In the proposed BHF, the problem of luminance variations is treated as a color classification problem and is tackled via a classification process from the observation layer to the labeling layer, while the occlusion pattern, perspective distortion, and shadow effect are well modeled by the relationships between the scene layer and the labeling layer. With the BHF scheme, the detection of vacant parking spaces and the labeling of scene status are regarded as a unified Bayesian optimization problem subject to a shadow generation model, an occlusion generation model, and an object classification model. The system accuracy was evaluated by using outdoor parking lot videos captured from morning to evening. Experimental results showed that the proposed framework can systematically determine the vacant space number, efficiently label ground and car regions, precisely locate the shadowed regions, and effectively tackle the problem of luminance variations.", "title": "" }, { "docid": "547ce0778d8d51d96a610fb72b6bb4e9", "text": "Applications in cyber-physical systems are increasingly coupled with online instruments to perform long-running, continuous data processing. Such “always on” dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. F`oε is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of F`oε by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads.", "title": "" }, { "docid": "beb22339057840dc9a7876a871d242cf", "text": "We look at the problem of location recognition in a large image dataset using a vocabulary tree. This entails finding the location of a query image in a large dataset containing 3×10^4 streetside images of a city. We investigate how the traditional invariant feature matching approach falls down as the size of the database grows. 
In particular we show that by carefully selecting the vocabulary using the most informative features, retrieval performance is significantly improved, allowing us to increase the number of database images by a factor of 10. We also introduce a generalization of the traditional vocabulary tree search algorithm which improves performance by effectively increasing the branching factor of a fixed vocabulary tree.", "title": "" }, { "docid": "7850280ba2c29dc328b9594f4def05a6", "text": "Electric traction motors in automotive applications work in operational conditions characterized by variable load, rotational speed and other external conditions: this complicates the task of diagnosing bearing defects. The objective of the present work is the development of a diagnostic system for detecting the onset of degradation, isolating the degrading bearing, classifying the type of defect. The developed diagnostic system is based on an hierarchical structure of K-Nearest Neighbours classifiers. The selection of the features from the measured vibrational signals to be used in input by the bearing diagnostic system is done by a wrapper approach based on a Multi-Objective (MO) optimization that integrates a Binary Differential Evolution (BDE) algorithm with the K-Nearest Neighbour (KNN) classifiers. The developed approach is applied to an experimental dataset. The satisfactory diagnostic performances obtain show the capability of the method, independently from the bearings operational conditions.", "title": "" }, { "docid": "ef7e973a5c6f9e722917a283a1f0fe52", "text": "We live in a digital society that provides a range of opportunities for virtual interaction. Consequently, emojis have become popular for clarifying online communication. This presents an exciting opportunity for psychologists, as these prolific online behaviours can be used to help reveal something unique about contemporary human behaviour.", "title": "" }, { "docid": "42d2f3c2cc7ed0c08dd8f450091e5a7a", "text": "Analytical methods validation is an important regulatory requirement in pharmaceutical analysis. High-Performance Liquid Chromatography (HPLC) is commonly used as an analytical technique in developing and validating assay methods for drug products and drug substances. Method validation provides documented evidence, and a high degree of assurance, that an analytical method employed for a specific test, is suitable for its intended use. Over recent years, regulatory authorities have become increasingly aware of the necessity of ensuring that the data submitted to them in applications for marketing authorizations have been acquired using validated analytical methodology. The International Conference on Harmonization (ICH) has introduced guidelines for analytical methods validation. 1,2 The U.S. Food and Drug Administration (FDA) methods validation draft guidance document, 3-5 as well as United States Pharmacopoeia (USP) both refer to ICH guidelines. These draft guidances define regulatory and alternative analytical procedures and stability-indicating assays. The FDA has proposed adding section CFR 211.222 on analytical methods validation to the current Good Manufacturing Practice (cGMP) regulations. 7 This would require pharmaceutical manufacturers to establish and document the accuracy, sensitivity, specificity, reproducibility, and any other attribute (e.g., system suitability, stability of solutions) necessary to validate test methods. Regulatory analytical procedures are of two types: compendial and noncompendial. 
The noncompendial analytical procedures in the USP are those legally recognized as regulatory procedures under section 501(b) of the Federal Food, Drug and Cosmetic Act. When using USP analytical methods, the guidance recommends that information be provided for the following characteristics: specificity of the method, stability of the analytical sample solution, and intermediate precision. Compendial analytical methods may not be stability indicating, and this concern must be addressed when developing a drug product specification, because formulation based interference may not be considered in the monograph specifications. Additional analytical tests for impurities may be necessary to support the quality of the drug substance or drug product. Noncompendial analytical methods must be fully validated. The most widely applied validation characteristics are accuracy, precision (repeatability and intermediate precision), specificity, detection limit, quantitation limit, linearity, range, and stability of analytical solutions. The parameters that require validation and the approach adopted for each particular case are dependent on the type and applications of the method. Before undertaking the task of method validation, it is necessary that the analytical system itself is adequately designed, maintained, calibrated, and validated. 8 The first step in method validation is to prepare a protocol, preferably written with the instructions in a clear step-by-step format. This A Practical Approach to Validation of HPLC Methods Under Current Good Manufacturing Practices", "title": "" }, { "docid": "ab525baa5aef2bedd87307aa76736045", "text": "For years, recursive neural networks (RvNNs) have shown to be suitable for representing text into fixed-length vectors and achieved good performance on several natural language processing tasks. However, the main drawback of RvNN is that it requires explicit tree structure (e.g. parse tree), which makes data preparation and model implementation hard. In this paper, we propose a novel tree-structured long short-term memory (Tree-LSTM) architecture that efficiently learns how to compose task-specific tree structures only from plain text data. To achieve this property, our model uses Straight-Through (ST) Gumbel-Softmax estimator to decide the parent node among candidates and to calculate gradients of the discrete decision. We evaluate the proposed model on natural language interface and sentiment analysis and show that our model outperforms or at least comparable to previous Tree-LSTM-based works. Especially in the natural language interface task, our model establishes the new state-of-the-art accuracy of 85.4%. We also find that our model converges significantly faster and needs less memory than other models of complex structures.", "title": "" }, { "docid": "ef787cfc1b00c9d05ec9293ff802f172", "text": "High Definition (HD) maps play an important role in modern traffic scenes. However, the development of HD maps coverage grows slowly because of the cost limitation. To efficiently model HD maps, we proposed a convolutional neural network with a novel prediction layer and a zoom module, called LineNet. It is designed for state-of-the-art lane detection in an unordered crowdsourced image dataset. And we introduced TTLane, a dataset for efficient lane detection in urban road modeling applications. Combining LineNet and TTLane, we proposed a pipeline to model HD maps with crowdsourced data for the first time. 
And the maps can be constructed precisely even with inaccurate crowdsourced data.", "title": "" }, { "docid": "08951a16123c26f5ac4241457b539454", "text": "High quality, physically accurate rendering at interactiv e rates has widespread application, but is a daunting task. We attempt t o bridge the gap between high-quality offline and interactive render ing by using existing environment mapping hardware in combinatio with a novel Image Based Rendering (IBR) algorithm. The primary c ontribution lies in performing IBR in reflection space. This me thod can be applied to ordinary environment maps, but for more phy sically accurate rendering, we apply reflection space IBR to ra diance environment maps. A radiance environment map pre-integrat s Bidirectional Reflection Distribution Function (BRDF) wit h a lighting environment. Using the reflection-space IBR algorithm o n radiance environment maps allows interactive rendering of ar bitr ry objects with a large class of complex BRDFs in arbitrary ligh ting environments. The ultimate simplicity of the final algor ithm suggests that it will be widely and immediately valuable giv en the ready availability of hardware assisted environment mappi ng. CR categories and subject descriptors: I.3.3 [Computer Graphics]: Picture/Image generation; I.3.7 [Image Proces sing]: Enhancement.", "title": "" }, { "docid": "6c4882ed23a8a3901ea0f498f97afe59", "text": "As self-driving cars have grown in sophistication and ability, they have been deployed on the road in both localised tests and as regular private vehicles. In this paper we draw upon publicly available videos of autonomous and assisted driving (specifically the Tesla autopilot and Google self-driving car) to explore how their drivers and the drivers of other cars interact with, and make sense of, the actions of these cars. Our findings provide an early perspective on human interaction with new forms of driving involving assisted-car drivers, autonomous vehicles and other road users. The focus is on social interaction on the road, and how drivers communicate through, and interpret, the movement of cars. We provide suggestions toward increasing the transparency of autopilots' actions for both their driver and other drivers.", "title": "" }, { "docid": "5b2f918fdfeb5c14910c1524310880ba", "text": "Many prior face anti-spoofing works develop discriminative models for recognizing the subtle differences between live and spoof faces. Those approaches often regard the image as an indivisible unit, and process it holistically, without explicit modeling of the spoofing process. In this work, motivated by the noise modeling and denoising algorithms, we identify a new problem of face despoofing, for the purpose of anti-spoofing: inversely decomposing a spoof face into a spoof noise and a live face, and then utilizing the spoof noise for classification. A CNN architecture with proper constraints and supervisions is proposed to overcome the problem of having no ground truth for the decomposition. We evaluate the proposed method on multiple face anti-spoofing databases. The results show promising improvements due to our spoof noise modeling. 
Moreover, the estimated spoof noise provides a visualization which helps to understand the added spoof noise by each spoof medium.", "title": "" }, { "docid": "de1d3377aafd684385a332a03d4b6267", "text": "It has recently been suggested that brain areas crucial for mentalizing, including the medial prefrontal cortex (mPFC), are not activated exclusively during mentalizing about the intentions, beliefs, morals or traits of the self or others, but also more generally during cognitive reasoning including relational processing about objects. Contrary to this notion, a meta-analysis of cognitive reasoning tasks demonstrates that the core mentalizing areas are not systematically recruited during reasoning, but mostly when these tasks describe some human agency or general evaluative and enduring traits about humans, and much less so when these social evaluations are absent. There is a gradient showing less mPFC activation as less mentalizing content is contained in the stimulus material used in reasoning tasks. Hence, it is more likely that cognitive reasoning activates the mPFC because inferences about social agency and mind are involved.", "title": "" }, { "docid": "ba302b1ee508edc2376160b3ad0a751f", "text": "During the last years terrestrial laser scanning became a standard method of data acquisition for various applications in close range domain, like industrial production, forest inventories, plant engineering and construction, car navigation and – one of the most important fields – the recording and modelling of buildings. To use laser scanning data in an adequate way, a quality assessment of the laser scanner is inevitable. In the literature some publications can be found concerning the data quality of terrestrial laser scanners. Most of these papers concentrate on the geometrical accuracy of the scanner (errors of instrument axis, range accuracy using target etc.). In this paper a special aspect of quality assessment will be discussed: the influence of different materials and object colours on the recorded measurements of a TLS. The effects on the geometric accuracy as well as on the simultaneously acquired intensity values are the topics of our investigations. A TRIMBLE GX scanner was used for several test series. The study of different effects refer to materials commonly used at building façades, i.e. grey scaled and coloured sheets, various species of wood, a metal plate, plasters of different particle size, light-transmissive slides and surfaces of different conditions of wetness. The tests concerning a grey wedge show a dependence on the brightness where the mean square error (MSE) decrease from black to white, and therefore, confirm previous results of other research groups. Similar results had been obtained with coloured sheets. In this context an important result is that the accuracy of measurements at night-time has proved to be much better than at day time. While different species of wood and different conditions of wetness have no significant effect on the range accuracy the study of a metal plate delivers MSE values considerably higher than the accuracy of the scanner, if the angle of incidence is approximately orthogonal. Also light-transmissive slides cause enormous MSE values. 
It can be concluded that high precision measurements should be carried out at night-time and preferable on bright surfaces without specular characteristics.", "title": "" }, { "docid": "b620dd7e1db47db6c37ea3bcd2d83744", "text": "Software failures due to configuration errors are commonplace as computer systems continue to grow larger and more complex. Troubleshooting these configuration errors is a major administration cost, especially in server clusters where problems often go undetected without user interference. This paper presents CODE–a tool that automatically detects software configuration errors. Our approach is based on identifying invariant configuration access rules that predict what access events follow what contexts. It requires no source code, application-specific semantics, or heavyweight program analysis. Using these rules, CODE can sift through a voluminous number of events and detect deviant program executions. This is in contrast to previous approaches that focus on only diagnosis. In our experiments, CODE successfully detected a real configuration error in one of our deployment machines, in addition to 20 user-reported errors that we reproduced in our test environment. When analyzing month-long event logs from both user desktops and production servers, CODE yielded a low false positive rate. The efficiency ofCODE makes it feasible to be deployed as a practical management tool with low overhead.", "title": "" }, { "docid": "e5a2c2ef9d2cb6376b18c1e7232016b2", "text": "In this paper we describe the problem of Visual Place Categorization (VPC) for mobile robotics, which involves predicting the semantic category of a place from image measurements acquired from an autonomous platform. For example, a robot in an unfamiliar home environment should be able to recognize the functionality of the rooms it visits, such as kitchen, living room, etc. We describe an approach to VPC based on sequential processing of images acquired with a conventional video camera. We identify two key challenges: Dealing with non-characteristic views and integrating restricted-FOV imagery into a holistic prediction. We present a solution to VPC based upon a recently-developed visual feature known as CENTRIST (CENsus TRansform hISTogram). We describe a new dataset for VPC which we have recently collected and are making publicly available. We believe this is the first significant, realistic dataset for the VPC problem. It contains the interiors of six different homes with ground truth labels. We use this dataset to validate our solution approach, achieving promising results.", "title": "" }, { "docid": "6882f244253e0367b85c76bd4884ddaa", "text": "Publishers of news information are keen to amplify the reach of their content by making it as re-sharable as possible on social media. In this work we study the relationship between the concept of social deviance and the re-sharing of news headlines by network gatekeepers on Twitter. Do network gatekeepers have the same predilection for selecting socially deviant news items as professionals? Through a study of 8,000 news items across 8 major news outlets in the U.S. we predominately find that network gatekeepers re-share news items more often when they reference socially deviant events. 
At the same time we find and discuss exceptions for two outlets, suggesting a more complex picture where newsworthiness for networked gatekeepers may be moderated by other effects such as topicality or varying motivations and relationships with their audience.", "title": "" }, { "docid": "3357bcf236fdb8077a6848423a334b45", "text": "According to the latest investigation, there are 1.7 million active social network users in Taiwan. Previous research indicates that social network posts have a great impact on users; much of the negative impact stems from rising demands for social support, which in turn lead to heavier social overload. In this study, we propose the social overload post detection model (SODM), which deploys recent text mining and deep learning techniques to detect social overload posts; then, with the developed social overload prevention system (SOS), social overload posts and non-overload posts are rearranged with different sorting methods to protect readers from excessive demands for social support, i.e. social overload. The empirical results show that our SOS helps readers to alleviate social overload when reading via social media.", "title": "" } ]
scidocsrr
69ad92317b0a01a4b5ad5c5ac3972586
Inverse Multipath Fingerprinting for Millimeter Wave V2I Beam Alignment
[ { "docid": "14c981a63e34157bb163d4586502a059", "text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.", "title": "" }, { "docid": "368a37e8247d8a6f446b31f1dc0f635e", "text": "In order to achieve autonomous operation of a vehicle in urban situations with unpredictable traffic, several realtime systems must interoperate, including environment perception, localization, planning, and control. In addition, a robust vehicle platform with appropriate sensors, computational hardware, networking, and software infrastructure is essential.", "title": "" } ]
[ { "docid": "081e474c622f122832490a54657e5051", "text": "To defend a network from intrusion is a generic problem of all time. It is important to develop a defense mechanism to secure the network from anomalous activities. This paper presents a comprehensive survey of methods and systems introduced by researchers in the past two decades to protect network resources from intrusion. A detailed pros and cons analysis of these methods and systems is also reported in this paper. Further, this paper also provides a list of issues and research challenges in this evolving field of research. We believe that, this knowledge will help to create a defense system.", "title": "" }, { "docid": "78ffcec1e3d5164d7360aa8a93848fc4", "text": "During a long period of time we are combating overfitting in the CNN training process with model regularization, including weight decay, model averaging, data augmentation, etc. In this paper, we present DisturbLabel, an extremely simple algorithm which randomly replaces a part of labels as incorrect values in each iteration. Although it seems weird to intentionally generate incorrect training labels, we show that DisturbLabel prevents the network training from over-fitting by implicitly averaging over exponentially many networks which are trained with different label sets. To the best of our knowledge, DisturbLabel serves as the first work which adds noises on the loss layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide complementary regularization functions. Experiments demonstrate competitive recognition results on several popular image recognition datasets.", "title": "" }, { "docid": "0213b953415a2aa9bab63f9c210c3dcf", "text": "Purpose – The purpose of this paper is to distinguish and describe knowledge management (KM) technologies according to their support for strategy. Design/methodology/approach – This study employed an ontology development method to describe the relations between technology, KM and strategy, and to categorize available KM technologies according to those relations. Ontologies are formal specifications of concepts in a domain and their inter-relationships, and can be used to facilitate common understanding and knowledge sharing. The study focused particularly on two sub-domains of the KM field: KM strategies and KM technologies. Findings – ’’KM strategy’’ has three meanings in the literature: approach to KM, knowledge strategy, and KM implementation strategy. Also, KM technologies support strategy via KM initiatives based on particular knowledge strategies and approaches to KM. The study distinguishes three types of KM technologies: component technologies, KM applications, and business applications. They all can be described in terms of ’’creation’’ and ’’transfer’’ knowledge strategies, and ’’personalization’’ and ’’codification’’ approaches to KM. Research limitations/implications – The resulting framework suggests that KM technologies can be analyzed better in the context of KM initiatives, instead of the usual approach associating them with knowledge processes. KM initiatives provide the background and contextual elements necessary to explain technology adoption and use. Practical implications – The framework indicates three alternative modes for organizational adoption of KM technologies: custom development of KM systems from available component technologies; purchase of KM-specific applications; or purchase of business-driven applications that embed KM functionality. 
It also lists adequate technologies and provides criteria for selection in any of the cases. Originality/value – Among the many studies analyzing the role of technology in KM, an association with strategy has been missing. This paper contributes to filling this gap, integrating diverse contributions via a clearer definition of concepts and a visual representation of their relationships. This use of ontologies as a method, instead of an artifact, is also uncommon in the literature.", "title": "" }, { "docid": "04246c2b3d0b55acd3e316f066b36066", "text": "The involvement of free radical mechanisms in the pathogenesis of alcoholic liver disease (ALD) is demonstrated by the detection of lipid peroxidation markers in the liver and the serum of patients with alcoholism, as well as by experiments in alcohol-feed rodents that show a relationship between alcohol-induced oxidative stress and the development of liver pathology. Ethanol-induced oxidative stress is the result of the combined impairment of antioxidant defences and the production of reactive oxygen species by the mitochondrial electron transport chain, the alcohol-inducible cytochrome P450 (CYP) 2E1 and activated phagocytes. Furthermore, hydroxyethyl free radicals (HER) are also generated during ethanol metabolism by CYP2E1. The mechanisms by which oxidative stress contributes to alcohol toxicity are still not completely understood. The available evidence indicates that, by favouring mitochondrial permeability transition, oxidative stress promotes hepatocyte necrosis and/or apoptosis and is implicated in the alcohol-induced sensitization of hepatocytes to the pro-apoptotic action of TNF-alpha. Moreover, oxidative mechanisms can contribute to liver fibrosis, by triggering the release of pro-fibrotic cytokines and collagen gene expression in hepatic stellate cells. Finally, the reactions of HER and lipid peroxidation products with hepatic proteins stimulate both humoral and cellular immune reactions and favour the breaking of self-tolerance during ALD. Thus, immune responses might represent the mechanism by which alcohol-induced oxidative stress contributes to the perpetuation of chronic hepatic inflammation. Together these observations provide a rationale for the possible clinical application of antioxidants in the therapy for ALD.", "title": "" }, { "docid": "3de7dd15d2b8bb5d08eb548bf3f19230", "text": "Image compression has become an important process in today‟s world of information exchange. Image compression helps in effective utilization of high speed network resources. Medical Image Compression is very important in the present world for efficient archiving and transmission of images. In this paper two different approaches for lossless image compression is proposed. One uses the combination of 2D-DWT & FELICS algorithm for lossy to lossless Image Compression and another uses combination of prediction algorithm and Integer wavelet Transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured and shown the comparison of both the approaches. We observed the increased compression ratio and higher PSNR values.", "title": "" }, { "docid": "c4062390a6598f4e9407d29e52c1a3ed", "text": "We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). 
Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially higher fractions of the more compact Drosophila melanogaster (37%-53%), Caenorhabditis elegans (18%-37%), and Saccharaomyces cerevisiae (47%-68%) genomes. From yeasts to vertebrates, in order of increasing genome size and general biological complexity, increasing fractions of conserved bases are found to lie outside of the exons of known protein-coding genes. In all groups, the most highly conserved elements (HCEs), by log-odds score, are hundreds or thousands of bases long. These elements share certain properties with ultraconserved elements, but they tend to be longer and less perfectly conserved, and they overlap genes of somewhat different functional categories. In vertebrates, HCEs are associated with the 3' UTRs of regulatory genes, stable gene deserts, and megabase-sized regions rich in moderately conserved noncoding sequences. Noncoding HCEs also show strong statistical evidence of an enrichment for RNA secondary structure.", "title": "" }, { "docid": "889b4dabf8d9e9dbc6e3ae9e6dd9759f", "text": "Neuroscience is undergoing faster changes than ever before. Over 100 years our field qualitatively described and invasively manipulated single or few organisms to gain anatomical, physiological, and pharmacological insights. In the last 10 years neuroscience spawned quantitative datasets of unprecedented breadth (e.g., microanatomy, synaptic connections, and optogenetic brain-behavior assays) and size (e.g., cognition, brain imaging, and genetics). While growing data availability and information granularity have been amply discussed, we direct attention to a less explored question: How will the unprecedented data richness shape data analysis practices? Statistical reasoning is becoming more important to distill neurobiological knowledge from healthy and pathological brain measurements. We argue that large-scale data analysis will use more statistical models that are non-parametric, generative, and mixing frequentist and Bayesian aspects, while supplementing classical hypothesis testing with out-of-sample predictions.", "title": "" }, { "docid": "f502fe9a9758a03758620aeaf8bbeb57", "text": "Empirical data on design processes were obtained from a set of protocol studies of nine experienced industrial designers, whose designs were evaluated on overall quality and on a variety of aspects including creativity. From the protocol data we identify aspects of creativity in design related to the formulation of the design problem and to the concept of originality. We also apply our observations to a model of creative design as the coevolution of problem/solution spaces, and confirm the general validity of the model. 
We propose refinements to the co-evolution model, and suggest relevant new concepts of ‘default’ and ‘surprise’ problem/solution spaces.", "title": "" }, { "docid": "ff076ca404a911cc523af1aa51da8f47", "text": "Most Machine Learning (ML) researchers focus on automatic Machine Learning (aML) where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from the availability of “big data”. However, sometimes, for example in health informatics, we are confronted not a small number of data sets or rare events, and with complex problems where aML-approaches fail or deliver unsatisfactory results. Here, interactive Machine Learning (iML) may be of help and the “human-in-the-loop” approach may be beneficial in solving computationally hard problems, where human expertise can help to reduce an exponential search space through heuristics. In this paper, experiments are discussed which help to evaluate the effectiveness of the iML-“human-in-the-loop” approach, particularly in opening the “black box”, thereby enabling a human to directly and indirectly manipulating and interacting with an algorithm. For this purpose, we selected the Ant Colony Optimization (ACO) framework, and use it on the Traveling Salesman Problem (TSP) which is of high importance in solving many practical problems in health informatics, e.g. in the study of proteins.", "title": "" }, { "docid": "935c1dc7c60c6179dd5c854cb92526e6", "text": "BACKGROUND\nAlthough surgical site infections (SSIs) are known to be associated with increased length of stay (LOS) and additional cost, their impact on the profitability of surgical procedures is unknown.\n\n\nAIM\nTo determine the clinical and economic burden of SSI over a two-year period and to predict the financial consequences of their elimination.\n\n\nMETHODS\nSSI surveillance and Patient Level Information and Costing System (PLICS) datasets for patients who underwent major surgical procedures at Plymouth Hospitals NHS Trust between April 2010 and March 2012 were consolidated. The main outcome measures were the attributable postoperative length of stay (LOS), cost, and impact on the margin differential (profitability) of SSI. A secondary outcome was the predicted financial consequence of eliminating all SSIs.\n\n\nFINDINGS\nThe median additional LOS attributable to SSI was 10 days [95% confidence interval (CI): 7-13 days] and a total of 4694 bed-days were lost over the two-year period. The median additional cost attributable to SSI was £5,239 (95% CI: 4,622-6,719) and the aggregate extra cost over the study period was £2,491,424. After calculating the opportunity cost of eliminating all SSIs that had occurred in the two-year period, the combined overall predicted financial benefit of doing so would have been only £694,007. For seven surgical categories, the hospital would have been financially worse off if it had successfully eliminated all SSIs.\n\n\nCONCLUSION\nSSI causes significant clinical and economic burden. Nevertheless the current system of reimbursement provided a financial disincentive to their reduction.", "title": "" }, { "docid": "30db2040ab00fd5eec7b1eb08526f8e8", "text": "We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. 
This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning.” These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.", "title": "" }, { "docid": "bb815929889d93e19c6581c3f9a0b491", "text": "This paper presents an HMM-MLP hybrid system to recognize complex date images written on Brazilian bank cheques. The system first segments implicitly a date image into sub-fields through the recognition process based on an HMM-based approach. Afterwards, the three obligatory date sub-fields are processed by the system (day, month and year). A neural approach has been adopted to work with strings of digits and a Markovian strategy to recognize and verify words. We also introduce the concept of meta-classes of digits, which is used to reduce the lexicon size of the day and year and improve the precision of their segmentation and recognition. Experiments show interesting results on date recognition.", "title": "" }, { "docid": "c9d3def588f5f3dc95955635ebaa0d3d", "text": "In this paper we propose a novel computer vision method for classifying human facial expression from low resolution images. Our method uses the bag of words representation. It extracts dense SIFT descriptors either from the whole image or from a spatial pyramid that divides the image into increasingly fine sub-regions. Then, it represents images as normalized (spatial) presence vectors of visual words from a codebook obtained through clustering image descriptors. Linear kernels are built for several choices of spatial presence vectors, and combined into weighted sums for multiple kernel learning (MKL). For machine learning, the method makes use of multi-class one-versus-all SVM on the MKL kernel computed using this representation, but with an important twist, the learning is local, as opposed to global – in the sense that, for each face with an unknown label, a set of neighbors is selected to build a local classification model, which is eventually used to classify only that particular face. 
Empirical results indicate that the use of presence vectors, local learning and spatial information improve recognition performance together by more than 5%. Finally, the proposed model ranked fourth in the Facial Expression Recognition Challenge, with an accuracy of 67.484% on the final test set. ICML 2013 Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. Copyright 2013 by the author(s).", "title": "" }, { "docid": "073a2c6743b95913b090dfc17204f880", "text": "Recent work has explored the problem of autonomous navigation by imitating a teacher and learning an end-toend policy, which directly predicts controls from raw images. However, these approaches tend to be sensitive to mistakes by the teacher and do not scale well to other environments or vehicles. To this end, we propose Observational Imitation Learning (OIL), a novel imitation learning variant that supports online training and automatic selection of optimal behavior by observing multiple imperfect teachers. We apply our proposed methodology to the challenging problems of autonomous driving and UAV racing. For both tasks, we utilize the Sim4CV simulator [18] that enables the generation of large amounts of synthetic training data and also allows for online learning and evaluation. We train a perception network to predict waypoints from raw image data and use OIL to train another network to predict controls from these waypoints. Extensive experiments demonstrate that our trained network outperforms its teachers, conventional imitation learning (IL) and reinforcement learning (RL) baselines and even humans in simulation.", "title": "" }, { "docid": "0e4917e7a9e1abe867811f8454cbcdc0", "text": "Long videos can be played much faster than real-time by recording only one frame per second or by dropping all but one frame each second, i.e., by creating a timelapse. Unstable hand-held moving videos can be stabilized with a number of recently described methods. Unfortunately, creating a stabilized timelapse, or hyperlapse, cannot be achieved through a simple combination of these two methods. Two hyperlapse methods have been previously demonstrated: one with high computational complexity and one requiring special sensors. We present an algorithm for creating hyperlapse videos that can handle significant high-frequency camera motion and runs in real-time on HD video. Our approach does not require sensor data, thus can be run on videos captured on any camera. We optimally select frames from the input video that best match a desired target speed-up while also resulting in the smoothest possible camera motion. We evaluate our approach using several input videos from a range of cameras and compare these results to existing methods.", "title": "" }, { "docid": "177c52ba3d4e80274b3d90229fcce535", "text": "We address the problem of classifying sparsely labeled networks, where labeled nodes in the network are extremely scarce. Existing algorithms, such as collective classification, have been shown to be effective for jointly deriving labels of related nodes, by exploiting class label dependencies among neighboring nodes. However, when the underlying network is sparsely labeled, most nodes have too few or even no connections to labeled nodes. This makes it very difficult to leverage supervised knowledge from labeled nodes to accurately estimate label dependencies, thereby largely degrading the classification accuracy. 
In this paper, we propose a novel discriminative matrix factorization (DMF) based algorithm that effectively learns a latent network representation by exploiting topological paths between labeled and unlabeled nodes, in addition to nodes' content information. The main idea is to use matrix factorization to obtain a compact representation of the network that fully encodes nodes' content information and network structure, and unleash discriminative power inferred from labeled nodes to directly benefit collective classification. To achieve this, we formulate a new matrix factorization objective function that integrates network representation learning with an empirical loss minimization for classifying node labels. An efficient optimization algorithm based on conjugate gradient methods is proposed to solve the new objective function. Experimental results on real-world networks show that DMF yields superior performance gain over the state-of-the-art baselines on sparsely labeled networks.", "title": "" }, { "docid": "162f080444935117c5125ae8b7c3d51e", "text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1", "title": "" }, { "docid": "5de517f8ccdbf12228ca334173ecf797", "text": "This paper describes the Chinese handwriting recognition competition held at the 12th International Conference on Document Analysis and Recognition (ICDAR 2013). This third competition in the series again used the CASIAHWDB/OLHWDB databases as the training set, and all the submitted systems were evaluated on closed datasets to report character-level correct rates. This year, 10 groups submitted 27 systems for five tasks: classification on extracted features, online/offline isolated character recognition, online/offline handwritten text recognition. The best results (correct rates) are 93.89% for classification on extracted features, 94.77% for offline character recognition, 97.39% for online character recognition, 88.76% for offline text recognition, and 95.03% for online text recognition, respectively. In addition to the test results, we also provide short descriptions of the recognition methods and brief discussions on the results. Keywords—Chinese handwriting recognition competition; isolated character recongition; handwritten text recognition; offline; online; CASIA-HWDB/OLHWDB database.", "title": "" }, { "docid": "b134cf07e01f1568d127880777492770", "text": "This paper addresses the problem of recovering 3D nonrigid shape models from image sequences. 
For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full face and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, Tomasi and Kanades’ factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration and shape. To the best of our knowledge, this is the first model free approach that can recover from single-view video sequences nonrigid shape models. We demonstrate this new algorithm on several video sequences. We were able to recover 3D non-rigid human face and animal models with high accuracy.", "title": "" }, { "docid": "b8b3761b658e37783afb1157ef0844b5", "text": "Biometric recognition refers to the automated recognition of individuals based on their biological and behavioral characteristics such as fingerprint, face, iris, and voice. The first scientific paper on automated fingerprint matching was published by Mitchell Trauring in the journal Nature in 1963. The first objective of this paper is to document the significant progress that has been achieved in the field of biometric recognition in the past 50 years since Trauring’s landmark paper. This progress has enabled current state-of-the-art biometric systems to accurately recognize individuals based on biometric trait(s) acquired under controlled environmental conditions from cooperative users. Despite this progress, a number of challenging issues continue to inhibit the full potential of biometrics to automatically recognize humans. The second objective of this paper is to enlist such challenges, analyze the solutions proposed to overcome them, and highlight the research opportunities in this field. One of the foremost challenges is the design of robust algorithms for representing and matching biometric samples obtained from uncooperative subjects under unconstrained environmental conditions (e.g., recognizing faces in a crowd). In addition, fundamental questions such as the distinctiveness and persistence of biometric traits need greater attention. Problems related to the security of biometric data and robustness of the biometric system against spoofing and obfuscation attacks, also remain unsolved. Finally, larger system-level issues like usability, user privacy concerns, integration with the end application, and return on investment have not been adequately addressed. Unlocking the full potential of biometrics through inter-disciplinary research in the above areas will not only lead to widespread adoption of this promising technology, but will also result in wider user acceptance and societal impact. c © 2016 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
23ae7a1ecabb57ead35df50b87e4da93
The Rapid Growth of Cybercrimes Affecting Information Systems in the Global: Is this a Myth or Reality in Tanzania?
[ { "docid": "f2f53f1bdf451c945053bb8f2b8ca9a1", "text": "In this paper we investigate cybercrime and examine the relevant laws available to combat this crime in Nigeria. To that end, we critically review Nigerian criminal law as well as computer network and internet security. The internet as an instrument of crime is used for activities ranging from business espionage and banking fraud to unauthorized access to, and sabotage of, data in the computer networks of key organizations. We investigated these crimes and noted some useful observations. From these observations, we proffer solutions to the inadequacies of the existing enabling laws. Preventing cybercrime requires the co-operation of all citizens, not just the police, who presently lack specialists in their investigating units to deal with cybercrime. The eradication of this crime is crucial in view of its devastating effect on the image of Nigeria and the attendant consequences for the economy. Out of over 140 million Nigerians, fewer than 5x10-4% are involved in cybercrime across Nigeria.", "title": "" } ]
[ { "docid": "9f98e49b38ee172a875a6f62f9c2c4ce", "text": "Recent advances in semiconductor technology have renewed interest in class-D audio amplifiers, especially for portable devices and consumer electronics. In addition to higher efficiency, class-D amplifiers are smaller, lighter, streamlined, cool and quiet with extended battery life compared to the conventional linear amplifiers. A typical class-D amplifier increases the efficiency of amplifiers in consumer and professional applications from the industry norm of approximately 50 percent to 90 percents or better. In this paper, two zero voltage switching (ZVS) modulation strategies are proposed to achieve the goals of high fidelity and high efficiency. Field programmable gate array (FPGA) implementations of the proposed modulation strategies are also provided in this paper. In order to verify the correctness of the proposed methods, a FPGA-based class-D audio amplifier prototype is developed and realized. Experimental results are then presented to validate the proposed modulation strategies. According to the experimental results, the proposed test system is able to deliver more than 15 W into a 4 Omega load with an efficiency of 80% and total harmonic distortion (THD) less than 2 %.", "title": "" }, { "docid": "8358a146d3d1188c82195d5b03c8be4c", "text": "With the development of power conversion technology, power density becomes the major challenge for front-end AC/DC converters. Although increasing switching frequency can dramatically reduce the passive component size, its effectiveness is limited by the converter efficiency and thermal management. In present industry implementations, due to the low efficiency PWM type topologies, DC/DC stage switching frequency is limited to 100 kHz range. Therefore, passive components take large portion of converter volume. As shown in Figure 4-1, DC/DC stage transformer and inductor take more than 30% for total converter space.", "title": "" }, { "docid": "89297a4aef0d3251e8d947ccc2acacc7", "text": "We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.", "title": "" }, { "docid": "4d3b7bf6d7f039875a9c237f9e0568fb", "text": "In virtual environments, virtual hand interactions play key roles in the human-computer interface. Specifically, the virtual grasping of 3D objects provides an intuitive way for users to interact with virtual objects. 
This paper demonstrates the creation of a sophisticated virtual hand model simulating natural anatomy in its appearance and motion. To achieve good visual realism, the virtual hand is modeled with metaball modeling, and visually enhanced by applying texture mapping. For realistic kinematics modeling, a three-layer model (skeleton, muscle and skin layers) is adopted to handle the motion as well as the deformation of the virtual hand. We also present an approach for virtual grasping of 3D objects with the realistic virtual hand driven by a CyberGlove dataglove. Grasping heuristics are proposed based on the classification with the shapes of objects, and simplified proxies for the virtual hand are used for the purpose of real-time collision detection between the virtual hand and 3D objects.", "title": "" }, { "docid": "c56c45405e0a943e63ab035b11b9fd93", "text": "We present a simple, but expressive type system that supports strong updates—updating a memory cell to hold values of unrelated types at different points in time. Our formulation is based upon a standard linear lambda calculus and, as a result, enjoys a simple semantic interpretation for types that is closely related to models for spatial logics. The typing interpretation is strong enough that, in spite of the fact that our core programming language supports shared, mutable references and cyclic graphs, every well-typed program terminates. We then consider extensions needed to model ML-style references, where the capability to access a reference cell is unrestricted, but strong updates are disallowed. Our extensions include a thaw primitive for re-gaining the capability to perform strong updates on unrestricted references. The thaw primitive is closely related to other mechanisms that support strong updates, such as CQUAL’s restrict.", "title": "" }, { "docid": "a44dbdb9f8dc7815841e58f8429586b9", "text": "In many application domains of recommender systems, explicit rating information is sparse or non-existent. The preferences of the current user have therefore to be approximated by interpreting his or her behavior, i.e., the implicit user feedback. In the literature, a number of algorithm proposals have been made that rely solely on such implicit feedback, among them Bayesian Personalized Ranking (BPR).\n In the BPR approach, pairwise comparisons between the items are made in the training phase and an item i is considered to be preferred over item j if the user interacted in some form with i but not with j. In real-world applications, however, implicit feedback is not necessarily limited to such binary decisions as there are, e.g., different types of user actions like item views, cart or purchase actions and there can exist several actions for an item over time.\n In this paper we show how BPR can be extended to deal with such more fine-granular, graded preference relations. An empirical analysis shows that this extension can help to measurably increase the predictive accuracy of BPR on realistic e-commerce datasets.", "title": "" }, { "docid": "b62075c626513c78f8fce71a23f9e496", "text": "This topical review starts with a warning that despite an impressive wealth of neuroscientific data, a reductionist approach can never fully explain persistent pain. One reason is the complexity of clinical pain (in contrast to experimentally induced pain). Another reason is that the \"pain system\" shows degeneracy, which means that an outcome can have several causes. 
Problems also arise from lack of conceptual clarity regarding words like nociceptors, pain, and perception. It is, for example, argued that \"homeoceptor\" would be a more meaningful term than nociceptor. Pain experience most likely depends on synchronized, oscillatory activity in a distributed neural network regardless of whether the pain is caused by tissue injury, deafferentation, or hypnosis. In experimental pain, the insula, the second somatosensory area, and the anterior cingulate gyrus are consistently activated. These regions are not pain-specific, however, and are now regarded by most authors as parts of the so-called salience network, which detects all kinds of salient events (pain being highly salient). The networks related to persistent pain seem to differ from the those identified experimentally, and show a more individually varied pattern of activations. One crucial difference seems to be activation of regions implicated in emotional and body-information processing in persistent pain. Basic properties of the \"pain system\" may help to explain why it so often goes awry, leading to persistent pain. Thus, the system must be highly sensitive not to miss important homeostatic threats, it cannot be very specific, and it must be highly plastic to quickly learn important associations. Indeed, learning and memory processes play an important role in persistent pain. Thus, behaviour with the goal of avoiding pain provocation is quickly learned and may persist despite healing of the original insult. Experimental and clinical evidence suggest that the hippocampal formation and neurogenesis (formation of new neurons) in the dentate gyrus are involved in the development and maintenance of persistent pain. There is evidence that persistent pain in many instances may be understood as the result of an interpretation of the organism's state of health. Any abnormal pattern of sensory information as well as lack of expected correspondence between motor commands and sensory feedback may be interpreted as bodily threats and evoke pain. This may, for example, be an important mechanism in many cases of neuropathic pain. Accordingly, many patients with persistent pain show evidence of a distorted body image. Another approach to understanding why the \"pain system\" so often goes awry comes from knowledge of the dynamic and nonlinear behaviour of neuronal networks. In real life the emergence of persistent pain probably depends on the simultaneous occurrence of numerous challenges, and just one extra (however small) might put the network into a an inflexible state with heightened sensitivity to normally innocuous inputs. Finally, the importance of seeking the meaning the patient attributes to his/her pain is emphasized. Only then can we understand why a particular person suffers so much more than another with very similar pathology, and subsequently be able to help the person to alter the meaning of the situation.", "title": "" }, { "docid": "49df721b5115ad7d3f91b6212dbb585e", "text": "We first present a minimal feature set for transition-based dependency parsing, continuing a recent trend started by Kiperwasser and Goldberg (2016a) and Cross and Huang (2016a) of using bi-directional LSTM features. We plug our minimal feature set into the dynamic-programming framework of Huang and Sagae (2010) and Kuhlmann et al. (2011) to produce the first implementation of worst-case Opn3q exact decoders for arc-hybrid and arceager transition systems. 
With our minimal features, we also present O(n^3) global training methods. Finally, using ensembles including our new parsers, we achieve the best unlabeled attachment score reported (to our knowledge) on the Chinese Treebank and the “second-best-in-class” result on the English Penn Treebank. Publication venue: EMNLP 2017", "title": "" }, { "docid": "351faf9d58bd2a2010766acff44dadbc", "text": "Abstract: Although the number of Arabic speakers exceeds two hundred million, the efforts devoted to producing Arabic computational linguistic resources are very limited, especially in the area of Arabic computational lexicons. Most existing efforts were not originally designed for Arabic but for foreign languages, and are therefore insufficient to meet the needs of the Arab community. This research aims to present a proposed model for an Arabic computational lexicon built on \"ontology\", a recent technique underpinning the \"Semantic Web\" that is concerned with the semantic, knowledge-based representation of concepts and relations in a given domain. The model was built on the basis of the theory of \"semantic fields\", well known in linguistics, and the data underlying the model were drawn from the expressions of time in \"the Holy Quran\", which represents Arabic at its most refined. The availability of such a model will be useful for computational applications in the field of the Arabic language. This paper presents a detailed account of the methodology used to build the model and the results obtained.", "title": "" }, { "docid": "2d845ef6552b77fb4dd0d784233aa734", "text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as is the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.", "title": "" }, { "docid": "fa0eebbf9c97942a5992ed80fd66cf10", "text": "The increasing popularity of Facebook among adolescents has stimulated research to investigate the relationship between Facebook use and loneliness, which is particularly prevalent in adolescence. The aim of the present study was to improve our understanding of the relationship between Facebook use and loneliness. Specifically, we examined how Facebook motives and two relationship-specific forms of adolescent loneliness are associated longitudinally. Cross-lagged analysis based on data from 256 adolescents (64% girls, M(age) = 15.88 years) revealed that peer-related loneliness was related over time to using Facebook for social skills compensation, reducing feelings of loneliness, and having interpersonal contact.
Facebook use for making new friends reduced peer-related loneliness over time, whereas Facebook use for social skills compensation increased peer-related loneliness over time. Hence, depending on adolescents' Facebook motives, either the displacement or the stimulation hypothesis is supported. Implications and suggestions for future research are discussed.", "title": "" }, { "docid": "10bc07996e9016d4de30e27d869b9da7", "text": "Information extraction and knowledge discovery regarding adverse drug reaction (ADR) from large-scale clinical texts are very useful and needy processes. Two major difficulties of this task are the lack of domain experts for labeling examples and intractable processing of unstructured clinical texts. Even though most previous works have been conducted on these issues by applying semisupervised learning for the former and a word-based approach for the latter, they face with complexity in an acquisition of initial labeled data and ignorance of structured sequence of natural language. In this study, we propose automatic data labeling by distant supervision where knowledge bases are exploited to assign an entity-level relation label for each drug-event pair in texts, and then, we use patterns for characterizing ADR relation. The multiple-instance learning with expectation-maximization method is employed to estimate model parameters. The method applies transductive learning to iteratively reassign a probability of unknown drug-event pair at the training time. By investigating experiments with 50,998 discharge summaries, we evaluate our method by varying large number of parameters, that is, pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. Based on evaluations, our proposed method outperforms the word-based feature for NB-EM (iEM), MILR, and TSVM with F1 score of 11.3%, 9.3%, and 6.5% improvement, respectively.", "title": "" }, { "docid": "848aae58854681e75fae293e2f8d2fc5", "text": "Over last several decades, computer vision researchers have been devoted to find good feature to solve different tasks, such as object recognition, object detection, object segmentation, activity recognition and so forth. Ideal features transform raw pixel intensity values to a representation in which these computer vision problems are easier to solve. Recently, deep features from covolutional neural network(CNN) have attracted many researchers in computer vision. In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function. More recently, the feature learned from large scale image dataset have been proved to be very effective and generic for many computer vision task. The feature learned from recognition task can be used in the object detection task. This work uncover the principles that lead to these generic feature representations in the transfer learning, which does not need to train the dataset again but transfer the rich feature from CNN learned from ImageNet dataset. We begin by summarize some related prior works, particularly the paper in object recognition, object detection and segmentation. We introduce the deep feature to computer vision task in intelligent transportation system. We apply deep feature in object detection task, especially in vehicle detection task. To make fully use of objectness proposals, we apply proposal generator on road marking detection and recognition task. 
Third, to fully understand the transportation situation, we introduce the deep feature into scene understanding. We experiment each task for different public datasets, and prove our framework is robust.", "title": "" }, { "docid": "1368a00839a5dd1edc7dbaced35e56f1", "text": "Nowadays, transfer of the health care from ambulance to patient's home needs higher demand on patient's mobility, comfort and acceptance of the system. Therefore, the goal of this study is to proof the concept of a system which is ultra-wearable, less constraining and more suitable for long term measurements than conventional ECG monitoring systems which use conductive electrolytic gels for low impedance electrical contact with skin. The developed system is based on isolated capacitive coupled electrodes without any galvanic contact to patient's body and does not require the common right leg electrode. Measurements performed under real conditions show that it is possible to acquire well known ECG waveforms without the common electrode when the patient is sitting and even during walking. Results of the validation process demonstrate that the system performance is comparable to the conventional ECG system while the wearability is increased.", "title": "" }, { "docid": "dd732081865bb209276acd3bb76ee08f", "text": "A 57-64-GHz low phase-error 5-bit switch-type phase shifter integrated with a low phase-variation variable gain amplifier (VGA) is implemented through TSMC 90-nm CMOS low-power technology. Using the phase compensation technique, the proposed VGA can provide appropriate gain tuning with almost constant phase characteristics, thus greatly reducing the phase-tuning complexity in a phased-array system. The measured root mean square (rms) phase error of the 5-bit phase shifter is 2° at 62 GHz. The phase shifter has a low group-delay deviation (phase distortion) of +/- 8.5 ps and an excellent insertion loss flatness of ±0.8 dB for a specific phase-shifting state, across 57-64 GHz. For all 32 states, the insertion loss is 14.6 ± 3 dB, including pad loss at 60 GHz. For the integrated phase shifter and VGA, the VGA can provide 6.2-dB gain tuning range, which is wide enough to cover the loss variation of the phase shifter, with only 1.86° phase variation. The measured rms phase error of the 5-bit phase shifter and VGA is 3.8° at 63 GHz. The insertion loss of all 32 states is 5.4 dB, including pad loss at 60 GHz, and the loss flatness is ±0.8 dB over 57-64 GHz. To the best of our knowledge, the 5-bit phase shifter presents the best rms phase error at center frequency among the V-band switch-type phase shifter.", "title": "" }, { "docid": "c26caff761092bc5b6af9f1c66986715", "text": "The mechanisms used by DNN accelerators to leverage datareuse and perform data staging are known as dataflow, and they directly impact the performance and energy efficiency of DNN accelerator designs. Co-optimizing the accelerator microarchitecture and its internal dataflow is crucial for accelerator designers, but there is a severe lack of tools and methodologies to help them explore the co-optimization design space. In this work, we first introduce a set of datacentric directives to concisely specify DNN dataflows in a compiler-friendly form. Next, we present an analytical model, MAESTRO, that estimates various cost-benefit tradeoffs of a dataflow including execution time and energy efficiency for a DNN model and hardware configuration. Finally, we demonstrate the use of MAESTRO to drive a hardware design space exploration (DSE) engine. 
The DSE engine searched 480M designs and identified 2.5M valid designs at an average rate of 0.17M designs per second, and also identified throughput- and energy-optimized designs among this set.", "title": "" }, { "docid": "eff8079294d89665bbd8835902c4caa3", "text": "Due to the growing developments in advanced metering and digital technologies, smart cities have been equipped with different electronic devices on the basis of Internet of Things (IoT), therefore becoming smarter than before. The aim of this article is to provide a comprehensive review of the concepts of smart cities and of their motivations and applications. Moreover, this survey describes the IoT technologies for smart cities and the main components and features of a smart city. Furthermore, practical experiences from around the world and the main challenges are explained.", "title": "" }, { "docid": "3e0076e4f2e69238c5f5ebcdc1dbbda1", "text": "This work presents a self-biased MOSFET threshold voltage VT0 monitor. The threshold condition is defined based on a current-voltage relationship derived from a continuous physical model. The model is valid for any operating condition, from weak to strong inversion, and under triode or saturation regimes. The circuit consists in balancing two self-cascode cells operating at different inversion levels, where one of the transistors that compose these cells is biased at the threshold condition. The circuit is MOSFET-only (can be implemented in any standard digital process), and it operates with a power supply of less than 1 V, consuming tenths of nW. We propose a process-independent design methodology, evaluating different trade-offs of accuracy, area and power consumption. Schematic simulation results, including Monte Carlo variability analysis, support the VT0 monitoring behavior of the circuit with good accuracy on a 180 nm process.", "title": "" } ]
scidocsrr
9fa9b3b44354c65baf11e082f1deeb38
Online Adaptation for Joint Scene and Object Classification
[ { "docid": "9af70a99010198feeeaff39003faa0f0", "text": "In this paper, we propose a new framework for spectral-spatial classification of hyperspectral image data. The proposed approach serves as an engine in the context of which active learning algorithms can exploit both spatial and spectral information simultaneously. An important contribution of our paper is the fact that we exploit the marginal probability distribution which uses the whole information in the hyperspectral data. We learn such distributions from both the spectral and spatial information contained in the original hyperspectral data using loopy belief propagation. The adopted probabilistic model is a discriminative random field in which the association potential is a multinomial logistic regression classifier and the interaction potential is a Markov random field multilevel logistic prior. Our experimental results with hyperspectral data sets collected using the National Aeronautics and Space Administration's Airborne Visible Infrared Imaging Spectrometer and the Reflective Optics System Imaging Spectrometer system indicate that the proposed framework provides state-of-the-art performance when compared to other similar developments.", "title": "" }, { "docid": "5116079b69aeb1858177429fabd10f80", "text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.", "title": "" }, { "docid": "b4ed15850674851fb7e479b7181751d7", "text": "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.", "title": "" }, { "docid": "90bce307651bd6441b216e1aded9cdf3", "text": "This work addresses the problem of segmenting an object of interest out of a video. 
We show that video object segmentation can be naturally cast as a semi-supervised learning problem and be efficiently solved using harmonic functions. We propose an incremental self-training approach by iteratively labeling the least uncertain frame and updating similarity metrics. Our self-training video segmentation produces superior results both qualitatively and quantitatively. Moreover, the use of harmonic functions naturally supports interactive segmentation. We suggest active learning methods for providing guidance to the user on what to annotate in order to improve labeling efficiency. We present experimental results using a ground truth data set and a quantitative comparison to a representative object segmentation system.", "title": "" } ]
[ { "docid": "b20ec220d2b027a54573b4d1338670f2", "text": "With the rapid development of economic globalization, the characteristics of supply chain such as a large number of participants, scattered geographical distribution, and long time span require the participants in the supply chain to trust each other for efficient information exchange. Targeting the pain points of low trust and untimely information exchange in traditional supply chain information systems, and drawing on the core advantages of blockchain technology, this paper proposes the concept of reorganizing the supply chain information system based on blockchain technology.\n In this paper, we first review the key problems in supply chain management, analyze the key factors that weaken the resilience of the supply chain, and derive the root causes of supply chain information asymmetry and the rise of supply chain risks as a whole caused by imperfections of the trust mechanism. To address the above problems, the concept of reconfiguring the supply chain information system by using blockchain technology is proposed and verified by examples. Finally, by means of the conceptual model of the information platform based on the blockchain technology conceived in this paper, the specific tactics to be implemented and future challenges are clarified for the improvement of supply chain resilience.", "title": "" }, { "docid": "8e8d7b2411fa0b0c19d745ce85fcec11", "text": "Parallel distributed processing (PDP) architectures demonstrate a potentially radical alternative to the traditional theories of language processing that are based on serial computational models. However, learning complex structural relationships in temporal data presents a serious challenge to PDP systems. For example, automata theory dictates that processing strings from a context-free language (CFL) requires a stack or counter memory device. While some PDP models have been hand-crafted to emulate such a device, it is not clear how a neural network might develop such a device when learning a CFL. This research employs standard backpropagation training techniques for a recurrent neural network (RNN) in the task of learning to predict the next character in a simple deterministic CFL (DCFL). We show that an RNN can learn to recognize the structure of a simple DCFL. We use dynamical systems theory to identify how network states reflect that structure by building counters in phase space. The work is an empirical investigation which is complementary to theoretical analyses of network capabilities, yet original in its specific configuration of dynamics involved. The application of dynamical systems theory helps us relate the simulation results to theoretical results, and the learning task enables us to highlight some issues for understanding dynamical systems that process language with counters.", "title": "" }, { "docid": "83393c9a0392249409a057914c71b1a0", "text": "Recent achievements in learning-based classification have led to noticeable performance improvements in automatic polyp detection. Here, building large, good-quality datasets is crucial for learning a reliable detector. However, it is practically challenging due to the diversity of polyp types, expensive inspection, and labor-intensive labeling tasks. For this reason, the polyp datasets usually tend to be imbalanced, i.e., the number of non-polyp samples is much larger than that of polyp samples, and learning with those imbalanced datasets results in a detector biased toward a non-polyp class.
In this paper, we propose a data sampling-based boosting framework to learn an unbiased polyp detector from the imbalanced datasets. In our learning scheme, we learn multiple weak classifiers with the datasets rebalanced by up/down sampling, and generate a polyp detector by combining them. In addition, for enhancing discriminability between polyps and non-polyps that have similar appearances, we propose an effective feature learning method using partial least square analysis, and use it for learning compact and discriminative features. Experimental results using challenging datasets show obvious performance improvement over other detectors. We further prove effectiveness and usefulness of the proposed methods with extensive evaluation.", "title": "" }, { "docid": "1891bf842d446a7d323dc207b38ff5a9", "text": "We use linear programming techniques to obtain new upper bounds on the maximal squared minimum distance of spherical codes with fixed cardinality. Functions Qj(n, s) are introduced with the property that Qj(n, s) < 0 for some j > m iff the Levenshtein bound Lm(n, s) on A(n, s) = max{|W | : W is an (n, |W |, s) code} can be improved by a polynomial of degree at least m+1. General conditions on the existence of new bounds are presented. We prove that for fixed dimension n ≥ 5 there exist a constant k = k(n) such that all Levenshtein bounds Lm(n, s) for m ≥ 2k− 1 can be improved. An algorithm for obtaining new bounds is proposed and discussed.", "title": "" }, { "docid": "d42cba123245ef4e07351c4983b90225", "text": "Deduplication technologies are increasingly being deployed to reduce cost and increase space-efficiency in corporate data centers. However, prior research has not applied deduplication techniques inline to the request path for latency sensitive, primary workloads. This is primarily due to the extra latency these techniques introduce. Inherently, deduplicating data on disk causes fragmentation that increases seeks for subsequent sequential reads of the same data, thus, increasing latency. In addition, deduplicating data requires extra disk IOs to access on-disk deduplication metadata. In this paper, we propose an inline deduplication solution, iDedup, for primary workloads, while minimizing extra IOs and seeks. Our algorithm is based on two key insights from realworld workloads: i) spatial locality exists in duplicated primary data; and ii) temporal locality exists in the access patterns of duplicated data. Using the first insight, we selectively deduplicate only sequences of disk blocks. This reduces fragmentation and amortizes the seeks caused by deduplication. The second insight allows us to replace the expensive, on-disk, deduplication metadata with a smaller, in-memory cache. These techniques enable us to tradeoff capacity savings for performance, as demonstrated in our evaluation with real-world workloads. Our evaluation shows that iDedup achieves 60-70% of the maximum deduplication with less than a 5% CPU overhead and a 2-4% latency impact.", "title": "" }, { "docid": "f71034627014c47b5751ff11455d5df8", "text": "A biometrical-genetical analysis of twin data to elucidate the determinants of variation in extraversion and its components, sociability and impulsiveness, revealed that both genetical and environmental factors contributed to variation in extraversion, to the variation and covariation of its component scales, and to the interaction between subjects and scales. 
A large environmental correlation between the scales suggested that environmental factors may predominate in determining the unitary nature of extraversion. The interaction between subjects and scales depended more on genetical factors, which suggests that the dual nature of extraversion has a strong genetical basis. A model assuming random mating, additive gene action, and specific environmental effects adequately describes the observed variation and covariation of sociability and impulsiveness. Possible evolutionary implications are discussed.", "title": "" }, { "docid": "8dba4c33323b336850002d1b84951723", "text": "This paper presents octave-tunable resonators and filters with surface mounted lumped tuning elements. Detailed theoretical analysis and modeling in terms of tuning range and unloaded quality factor (Qu) are presented in agreement with simulated and measured results. Based on the models, a systematic design method to maximize tuning ratio and optimize Qu of the resonator is suggested. A resonator tuning from 0.5 to 1.1 GHz with Qu ranging from 90 to 214 is demonstrated using solid-state varactors. A two-pole filter with a tuning range of 0.5-1.1 GHz with a constant 3-dB fractional bandwidth (FBW) of 4±0.1% and insertion loss of 1.67 dB at 1.1 GHz is demonstrated along with a three-pole filter with a tuning range of 0.58-1.22 GHz with a constant 3-dB FBW of 4±0.2% and insertion loss of 2.05 dB at 1.22 GHz. The measured input third-order intermodulation is better than 17 dBm over the frequency range for the two-pole filter.", "title": "" }, { "docid": "9c4845279d61619594461d140cfd9311", "text": "This paper presents a fusion approach for improving human action recognition based on two differing modality sensors consisting of a depth camera and an inertial body sensor. Computationally efficient action features are extracted from depth images provided by the depth camera and from accelerometer signals provided by the inertial body sensor. These features consist of depth motion maps and statistical signal attributes. For action recognition, both feature-level fusion and decision-level fusion are examined by using a collaborative representation classifier. In the feature-level fusion, features generated from the two differing modality sensors are merged before classification, while in the decision-level fusion, the Dempster-Shafer theory is used to combine the classification outcomes from two classifiers, each corresponding to one sensor. The introduced fusion framework is evaluated using the Berkeley multimodal human action database. The results indicate that because of the complementary aspect of the data from these sensors, the introduced fusion approaches lead to 2% to 23% recognition rate improvements depending on the action over the situations when each sensor is used individually.", "title": "" }, { "docid": "b61d31cb5d385f14a58c368a2d71f7ef", "text": "In a modified 4 X 4 factorial design with race (black-white) of the harm-doer and race (black-white) of the victim as the major factors, the phenomenon of differential social perception of intergroup violence was established. While subjects, observing a videotape of purported ongoing ineraction occuring in another room, labeled an act (ambiguous shove) as more violent when it was performed by a black than when the same act was perpetrated by a white. That is, the concept of violence was more accessible when viewing a black than when viewing a white committing the same act. Causal attributions were also found to be divergent. 
Situation attributions were preferred when the harm-doer was white, and person (dispositional) attributions were preferred in the black-protagonist conditions. The results are discussed in terms of perceptual threshold, stereotypy, and attributional biases.", "title": "" }, { "docid": "76ecd4ba20333333af4d09b894ff29fc", "text": "This study is an application of social identity theory to feminist consciousness and activism. For women, strong gender identifications may enhance support for equality struggles, whereas for men, they may contribute to backlashes against feminism. University students (N = 276), primarily Euroamerican, completed a measure of gender self-esteem (GSE, that part of one’s self-concept derived from one’s gender), and two measures of feminism. High GSE in women and low GSE in men were related to support for feminism. Consistent with past research, women were more supportive of feminism than men, and in both genders, support for feminist ideas was greater than self-identification as a feminist.", "title": "" }, { "docid": "60f561722cf0aea09a691269c7768322", "text": "Embedded electronic components, so-called ECUs (Electronic Control Units), are nowadays a prominent part of a car's architecture. These ECUs, monitoring and controlling the different subsystems of a car, are interconnected through several gateways and compose the global internal network of the car. Moreover, modern cars are now able to communicate with other devices through wired or wireless interfaces such as USB, Bluetooth, WiFi or even 3G. Such interfaces may expose the internal network to the outside world and can be seen as entry points for cyber attacks. In this paper, we present a survey on security threats and protection mechanisms in embedded automotive networks. After introducing the different protocols being used in the embedded networks of current vehicles, we then analyze the potential threats targeting these networks and describe how the attackers' opportunities can be enhanced by the new communication abilities of modern cars. Finally, we present the security solutions currently being devised to address these problems.", "title": "" }, { "docid": "fc9f5e2fa3a8d45273ed58f1eaa3fa15", "text": "The Penrose-Hameroff orchestrated objective reduction (orch. OR) model assigns a cognitive role to quantum computations in microtubules within the neurons of the brain. Despite an apparently \"warm, wet, and noisy\" intracellular milieu, the proposal suggests that microtubules avoid environmental decoherence long enough to reach threshold for \"self-collapse\" (objective reduction) by a quantum gravity mechanism put forth by Penrose. The model has been criticized as regards the issue of environmental decoherence, and a recent report by Tegmark finds that microtubules can maintain quantum coherence for only 10(-13) s, far too short to be neurophysiologically relevant. Here, we critically examine the decoherence mechanisms likely to dominate in a biological setting and find that (1) Tegmark's commentary is not aimed at an existing model in the literature but rather at a hybrid that replaces the superposed protein conformations of the orch. OR theory with a soliton in superposition along the microtubule; (2) recalculation after correcting for differences between the model on which Tegmark bases his calculations and the orch.
OR model (superposition separation, charge vs dipole, dielectric constant) lengthens the decoherence time to 10(-5)-10(-4) s; (3) decoherence times on this order invalidate the assumptions of the derivation and determine the approximation regime considered by Tegmark to be inappropriate to the orch. OR superposition; (4) Tegmark's formulation yields decoherence times that increase with temperature contrary to well-established physical intuitions and the observed behavior of quantum coherent states; (5) incoherent metabolic energy supplied to the collective dynamics ordering water in the vicinity of microtubules at a rate exceeding that of decoherence can counter decoherence effects (in the same way that lasers avoid decoherence at room temperature); (6) microtubules are surrounded by a Debye layer of counterions, which can screen thermal fluctuations, and by an actin gel that might enhance the ordering of water in bundles of microtubules, further increasing the decoherence-free zone by an order of magnitude and, if the dependence on the distance between environmental ion and superposed state is accurately reflected in Tegmark's calculation, extending decoherence times by three orders of magnitude; (7) topological quantum computation in microtubules may be error correcting, resistant to decoherence; and (8) the decohering effect of radiative scatterers on microtubule quantum states is negligible. These considerations bring microtubule decoherence into a regime in which quantum gravity could interact with neurophysiology.", "title": "" }, { "docid": "49af355cfc9e13234a2a3b115f225c1b", "text": "Tattoos play an important role in many religions. Tattoos have been used for thousands of years as important tools in ritual and tradition. Judaism, Christianity, and Islam have been hostile to the use of tattoos, but many religions, in particular Buddhism and Hinduism, make extensive use of them. This article examines their use as tools for protection and devotion.", "title": "" }, { "docid": "40649a3bc0ea3ac37ed99dca22e52b92", "text": "This paper presents a 40 Gb/s serial-link receiver including an adaptive equalizer and a CDR circuit. A parallel-path equalizing filter is used to compensate the high-frequency loss in copper cables. The adaptation is performed by only varying the gain in the high-pass path, which allows a single loop for proper control and completely removes the RC filters used for separately extracting the high- and low-frequency contents of the signal. A full-rate bang-bang phase detector with only five latches is proposed in the following CDR circuit. Minimizing the number of latches saves the power consumption and the area occupied by inductors. The performance is also improved by avoiding complicated routing of high-frequency signals. The receiver is able to recover 40 Gb/s data passing through a 4 m cable with 10 dB loss at 20 GHz. For an input PRBS of 2 7-1, the recovered clock jitter is 0.3 psrms and 4.3 pspp. The retimed data exhibits 500 mV pp output swing and 9.6 pspp jitter with BER <10-12. Fabricated in 90 nm CMOS technology, the receiver consumes 115 mW , of which 58 mW is dissipated in the equalizer and 57 mW in the CDR.", "title": "" }, { "docid": "d5a0702c1e6195be4185e9eb7b183aff", "text": "A sensitive and simple color sensor for indole vapors has been developed based on the Ehrlich-type reaction in solid polymer film. 
Upon 60-min exposure of the film sensor to the air containing 5 - 100 ppb of indole vapors, pink or magenta color could be recognized by the naked eyes. Alternatively, a trial gas detector tube has been prepared by mixing the reagents with sea sand. When air (100 mL) was pumped through the detector tube, indole vapors above 20 ppb could be detected within 1 min. The sensing was selective to the vapors of indoles and pyrroles, and other VOCs or ambient moisture did not interfere.", "title": "" }, { "docid": "850a7daa56011e6c53b5f2f3e33d4c49", "text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.", "title": "" }, { "docid": "8891a2d5867408455445bb70ba6a0d30", "text": "A new type of spherical robot, called KisBot, is presented that includes arms and two types of driving mode: rolling and wheeling. In the rolling mode, the robot uses its arms as pendulums and works like a pendulum-driven robot, while in the wheeling mode, it extends its arms to the ground and works like a one-wheel car. The basic design idea of KisBot is introduced and a prototype is implemented. The robot has a wheel-shaped body between two rotating semi-spheres. Each semi-sphere contains one DC motor for propulsion in the rolling mode and wheeling mode, one RC motor for arm extension, a speed controller for changing the direction of the arm rotation, a battery as the power source, and the mechanical components of the arm. Experiments using the rolling mode and wheeling mode verify the driving efficiency of the proposed spherical robot. Key-Words: Spherical robot, Rolling robot, Deformable robot, Locomotion, Motion generation.", "title": "" }, { "docid": "a2a633c972cb84d9b7d27e347bb59cfa", "text": "This study investigated three-dimensional (3D) texture as a possible diagnostic marker of Alzheimer’s disease (AD). 
T1-weighted magnetic resonance (MR) images were obtained from 17 AD patients and 17 age and gender-matched healthy controls. 3D texture features were extracted from the circular 3D ROIs placed using a semi-automated technique in the hippocampus and entorhinal cortex. We found that classification accuracies based on texture analysis of the ROIs varied from 64.3% to 96.4% due to different ROI selection, feature extraction and selection options, and that most 3D texture features selected were correlated with the mini-mental state examination (MMSE) scores. The results indicated that 3D texture could detect the subtle texture differences between tissues in AD patients and normal controls, and texture features of MR images in the hippocampus and entorhinal cortex might be related to the severity of AD cognitive impairment. These results suggest that 3D texture might be a useful aid in AD diagnosis.", "title": "" }, { "docid": "48a0e75b97fdaa734f033c6b7791e81f", "text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. 
Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.", "title": "" }, { "docid": "b76af76207fa3ef07e8f2fbe6436dca0", "text": "Face recognition applications for airport security and surveillance can benefit from the collaborative coupling of mobile and cloud computing as they become widely available today. This paper discusses our work with the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The challenge lies with how to perform task partitioning from mobile devices to cloud and distribute compute load among cloud servers (cloudlet) to minimize the response time given diverse communication latencies and server compute powers. Our preliminary simulation results show that optimal task partitioning algorithms significantly affect response time with heterogeneous latencies and compute powers. Motivated by these results, we design, implement, and validate the basic functionalities of MOCHA as a proof-of-concept, and develop algorithms that minimize the overall response time for face recognition. Our experimental results demonstrate that high-powered cloudlets are technically feasible and indeed help reduce overall processing time when face recognition applications run on mobile devices using the cloud as the backend servers.", "title": "" } ]
scidocsrr
cde5215a17029c028731d32f7a57441d
Dolphin swarm algorithm
[ { "docid": "51ac5dde554fd8363fcf95e6d3caf439", "text": "Swarm intelligence is a relatively novel field. It addresses the study of the collective behaviors of systems made by many components that coordinate using decentralized controls and self-organization. A large part of the research in swarm intelligence has focused on the reverse engineering and the adaptation of collective behaviors observed in natural systems with the aim of designing effective algorithms for distributed optimization. These algorithms, like their natural systems of inspiration, show the desirable properties of being adaptive, scalable, and robust. These are key properties in the context of network routing, and in particular of routing in wireless sensor networks. Therefore, in the last decade, a number of routing protocols for wireless sensor networks have been developed according to the principles of swarm intelligence, and, in particular, taking inspiration from the foraging behaviors of ant and bee colonies. In this paper, we provide an extensive survey of these protocols. We discuss the general principles of swarm intelligence and of its application to routing. We also introduce a novel taxonomy for routing protocols in wireless sensor networks and use it to classify the surveyed protocols. We conclude the paper with a critical analysis of the status of the field, pointing out a number of fundamental issues related to the (mis) use of scientific methodology and evaluation procedures, and we identify some future research directions. 2010 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "54120754dc82632e6642cbd08401d2dc", "text": "In this paper we study the dynamic modeling of a unicycle robot composed of a wheel, a frame and a disk. The unicycle can reach longitudinal stability by appropriate control to the wheel and lateral stability by adjusting appropriate torque imposed by the disk. The dynamic modeling of the unicycle robot is derived by Euler-Lagrange method. The stability and controllability of the system are analyzed according to the mathematic model. Independent simulation using MATLAB and ODE methods are then proposed respectively. Through the simulation, we confirm the validity of the two obtained models of the unicycle robot system, and provide two experimental platforms for the designing of the balance controller.", "title": "" }, { "docid": "fc6e5b83900d87fd5d6eec6d84d47939", "text": "In this letter, we propose a low complexity linear precoding scheme for downlink multiuser MIMO precoding systems where there is no limit on the number of multiple antennas employed at both the base station and the users. In the proposed algorithm, we can achieve the precoder in two steps. In the first step, we balance the multiuser interference (MUI) and noise by carrying out a novel channel extension approach. In the second step, we further optimize the system performance assuming parallel SU MIMO channels. Simulation results show that the proposed algorithm can achieve elaborate performance while offering lower computational complexity.", "title": "" }, { "docid": "8272f6d511cc8aa104ba10c23deb17a5", "text": "The challenge of developing facial recognition systems has been the focus of many research efforts in recent years and has numerous applications in areas such as security, entertainment, and biometrics. Recently, most progress in this field has come from training very deep neural networks on massive datasets which is computationally intensive and time consuming. Here, we propose a deep transfer learning (DTL) approach that integrates transfer learning techniques and convolutional neural networks and apply it to the problem of facial recognition to fine-tune facial recognition models. Transfer learning can allow for the training of robust, high-performance machine learning models that require much less time and resources to produce than similarly performing models that have been trained from scratch. Using a pre-trained face recognition model, we were able to perform transfer learning to produce a network that is capable of making accurate predictions on much smaller datasets. We also compare our results with results produced by a selection of classical algorithms on the same datasets to demonstrate the effectiveness of the proposed DTL approach.", "title": "" }, { "docid": "4a5959a7bcfaa0c7768d9a0d742742be", "text": "In this paper, we are interested in understanding the interrelationships between mainstream and social media in forming public opinion during mass crises, specifically in regards to how events are framed in the mainstream news and on social networks and to how the language used in those frames may allow to infer political slant and partisanship. We study the lingual choices for political agenda setting in mainstream and social media by analyzing a dataset of more than 40M tweets and more than 4M news articles from the mass protests in Ukraine during 2013-2014 — known as \"Euromaidan\" — and the post-Euromaidan conflict between Russian, pro-Russian and Ukrainian forces in eastern Ukraine and Crimea. 
We design a natural language processing algorithm to analyze at scale the linguistic markers which point to a particular political leaning in online media and show that political slant in news articles and Twitter posts can be inferred with a high level of accuracy. These findings allow us to better understand the dynamics of partisan opinion formation during mass crises and the interplay between mainstream and social media in such circumstances.", "title": "" }, { "docid": "3668b5394b68a6dfc82951121ebdda8d", "text": "Now a day the usage of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. Various techniques like classification, clustering and apriori of web mining will be integrated to represent the sequence of operations in credit card transaction processing and show how it can be used for the detection of frauds. Initially, web mining techniques trained with the normal behaviour of a cardholder. If an incoming credit card transaction is not accepted by the web mining model with sufficiently high probability, it is considered to be fraudulent. At the same time, the system will try to ensure that genuine transactions will not be rejected. Using data from a credit card issuer, a web mining model based fraud detection system will be trained on a large sample of labelled credit card account transactions and tested on a holdout data set that consisted of all account activity. Web mining techniques can be trained on examples of fraud due to lost cards, stolen cards, application fraud, counterfeit fraud, and mail-order fraud. The proposed system will be able to detect frauds by considering a cardholder‟s spending habit without its significance. Usually, the details of items purchased in individual transactions are not known to any Fraud Detection System. The proposed system will be an ideal choice for addressing this problem of current fraud detection system. Another important advantage of proposed system will be a drastic reduction in the number of False Positives transactions. FDS module of proposed system will receive the card details and the value of purchase to verify, whether the transaction is genuine or not. If the Fraud Detection System module will confirm the transaction to be of fraud, it will raise an alarm, and the transaction will be declined.", "title": "" }, { "docid": "a33e8a616955971014ceea9da1e8fcbe", "text": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. 
By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.", "title": "" }, { "docid": "310f13dac8d7cf2d1b40878ef6ce051b", "text": "Traffic Accidents are occurring due to development of automobile industry and the accidents are unavoidable even the traffic rules are very strictly maintained. Data mining algorithm is applied to model the traffic accident injury level by using traffic accident dataset. It helped by obtaining the characteristics of drivers behavior, road condition and weather condition, Accident severity that are connected with different injury severities and death. This paper presents some models to predict the severity of injury using some data mining algorithms. The study focused on collecting the real data from previous research and obtains the injury severity level of traffic accident data.", "title": "" }, { "docid": "a559652585e2df510c1dd060cdf65ead", "text": "Experience replay is an important technique for addressing sample-inefficiency in deep reinforcement learning (RL), but faces difficulty in learning from binary and sparse rewards due to disproportionately few successful experiences in the replay buffer. Hindsight experience replay (HER) (Andrychowicz et al. 2017) was recently proposed to tackle this difficulty by manipulating unsuccessful transitions, but in doing so, HER introduces a significant bias in the replay buffer experiences and therefore achieves a suboptimal improvement in sample-efficiency. In this paper, we present an analysis on the source of bias in HER, and propose a simple and effective method to counter the bias, to most effectively harness the sample-efficiency provided by HER. Our method, motivated by counter-factual reasoning and called ARCHER, extends HER with a trade-off to make rewards calculated for hindsight experiences numerically greater than real rewards. We validate our algorithm on two continuous control environments from DeepMind Control Suite (Tassa et al. 2018) Reacher and Finger, which simulate manipulation tasks with a robotic arm in combination with various reward functions, task complexities and goal sampling strategies. Our experiments consistently demonstrate that countering bias using more aggressive hindsight rewards increases sample efficiency, thus establishing the greater benefit of ARCHER in RL applications with limited computing budget.", "title": "" }, { "docid": "e3d212f67713f6a902fe0f3eb468eddf", "text": "We propose a novel LSTM-based deep multi-task learning framework for aspect term extraction from user review sentences. 
Two LSTMs equipped with extended memories and neural memory operations are designed for jointly handling the extraction tasks of aspects and opinions via memory interactions. Sentimental sentence constraint is also added for more accurate prediction via another LSTM. Experiment results over two benchmark datasets demonstrate the effectiveness of our framework.", "title": "" }, { "docid": "9ebf703bcf5004a74189638514b20313", "text": "In many real-world tasks, there are abundant unlabeled examples but the number of labeled training examples is limited, because labeling the examples requires human efforts and expertise. So, semi-supervised learning which tries to exploit unlabeled examples to improve learning performance has become a hot topic. Disagreement-based semi-supervised learning is an interesting paradigm, where multiple learners are trained for the task and the disagreements among the learners are exploited during the semi-supervised learning process. This survey article provides an introduction to research advances in this paradigm.", "title": "" }, { "docid": "69049d1f5a3b14bb00d57d16a93ec47f", "text": "The porphyrias are disorders of haem biosynthesis which present with acute neurovisceral attacks or disorders of sun-exposed skin. Acute attacks occur mainly in adults and comprise severe abdominal pain, nausea, vomiting, autonomic disturbance, central nervous system involvement and peripheral motor neuropathy. Cutaneous porphyrias can be acute or chronic presenting at various ages. Timely diagnosis depends on clinical suspicion leading to referral of appropriate samples for screening by reliable biochemical methods. All samples should be protected from light. Investigation for an acute attack: • Porphobilinogen (PBG) quantitation in a random urine sample collected during symptoms. Urine concentration must be assessed by measuring creatinine, and a repeat requested if urine creatinine <2 mmol/L. • Urgent porphobilinogen testing should be available within 24 h of sample receipt at the local laboratory. Urine porphyrin excretion (TUP) should subsequently be measured on this urine. • Urine porphobilinogen should be measured using a validated quantitative ion-exchange resin-based method or LC-MS. • Increased urine porphobilinogen excretion requires confirmatory testing and clinical advice from the National Acute Porphyria Service. • Identification of individual acute porphyrias requires analysis of urine, plasma and faecal porphyrins. Investigation for cutaneous porphyria: • An EDTA blood sample for plasma porphyrin fluorescence emission spectroscopy and random urine sample for TUP. • Whole blood for porphyrin analysis is essential to identify protoporphyria. • Faeces need only be collected, if first-line tests are positive or if clinical symptoms persist. Investigation for latent porphyria or family history: • Contact a specialist porphyria laboratory for advice. Clinical, family details are usually required.", "title": "" }, { "docid": "8caa44dc9d57b91c3455b66b152c131b", "text": "Prediction of protein function is of significance in studying biological processes. One approach for function prediction is to classify a protein into functional family. Support vector machine (SVM) is a useful method for such classification, which may involve proteins with diverse sequence distribution. We have developed a web-based software, SVMProt, for SVM classification of a protein into functional family from its primary sequence. 
SVMProt classification system is trained from representative proteins of a number of functional families and seed proteins of Pfam curated protein families. It currently covers 54 functional families and additional families will be added in the near future. The computed accuracy for protein family classification is found to be in the range of 69.1-99.6%. SVMProt shows a certain degree of capability for the classification of distantly related proteins and homologous proteins of different function and thus may be used as a protein function prediction tool that complements sequence alignment methods. SVMProt can be accessed at http://jing.cz3.nus.edu.sg/cgi-bin/svmprot.cgi.", "title": "" }, { "docid": "1427c235b4ca0b0557d62317d48e6b3f", "text": "In this paper, we propose a novel classification method for lung nodules from CT images based on hybrid features. Towards nodules of different types, including well-circumscribed, vascularized, juxta-pleural, pleural-tail, as well as ground glass optical (GGO) and non-nodule from CT scans, our method has achieved promising classification results. The proposed method utilizes hybrid descriptors consisting of statistical features from multi-view multi-scale convolutional neural networks (CNNs) and geometrical features from Fisher vector (FV) encodings based on scaleinvariant feature transform (SIFT). First, we approximate the nodule radii based on icosahedron sampling and intensity analysis. Then, we apply high frequency content measure analysis to obtain sampling views with more abundant information. After that, based on re-sampled views, we train multi-view multi-scale CNNs to extract statistical features and calculate FV encodings as geometrical features. Finally, we achieve hybrid features by merging statistical and geometrical features based on multiple kernel learning (MKL) and classify nodule types through a multi-class support vector machine. The experiments on LIDC-IDRI and ELCAP have shown that our method has achieved promising results and can be of great assistance for radiologists’ diagnosis of lung cancer in clinical practice.", "title": "" }, { "docid": "20c588c0d4985dfe2d2be406caf7f145", "text": "As clouds move to the network edge to facilitate mobile applications, edge cloud providers are facing new challenges on resource allocation. As users may move and resource prices may vary arbitrarily, %and service delays are heterogeneous, resources in edge clouds must be allocated and adapted continuously in order to accommodate such dynamics. In this paper, we first formulate this problem with a comprehensive model that captures the key challenges, then introduce a gap-preserving transformation of the problem, and propose a novel online algorithm that optimally solves a series of subproblems with a carefully designed logarithmic objective, finally producing feasible solutions for edge cloud resource allocation over time. We further prove via rigorous analysis that our online algorithm can provide a parameterized competitive ratio, without requiring any a priori knowledge on either the resource price or the user mobility. Through extensive experiments with both real-world and synthetic data, we further confirm the effectiveness of the proposed algorithm. 
We show that the proposed algorithm achieves near-optimal results with an empirical competitive ratio of about 1.1, reduces the total cost by up to 4x compared to static approaches, and outperforms the online greedy one-shot optimizations by up to 70%.", "title": "" }, { "docid": "a3db8f51d9dfa6608677d63492d2fb6f", "text": "In this article, we introduce nonlinear versions of the popular structure tensor, also known as second moment matrix. These nonlinear structure tensors replace the Gaussian smoothing of the classical structure tensor by discontinuity-preserving nonlinear diffusions. While nonlinear diffusion is a well-established tool for scalar and vector-valued data, it has not often been used for tensor images so far. Two types of nonlinear diffusion processes for tensor data are studied: an isotropic one with a scalar-valued diffusivity, and its anisotropic counterpart with a diffusion tensor. We prove that these schemes preserve the positive semidefiniteness of a matrix field and are, therefore, appropriate for smoothing structure tensor fields. The use of diffusivity functions of total variation (TV) type allows us to construct nonlinear structure tensors without specifying additional parameters compared to the conventional structure tensor. The performance of nonlinear structure tensors is demonstrated in three fields where the classic structure tensor is frequently used: orientation estimation, optic flow computation, and corner detection. In all these cases, the nonlinear structure tensors demonstrate their superiority over the classical linear one. Our experiments also show that for corner detection based on nonlinear structure tensors, anisotropic nonlinear tensors give the most precise localisation. q 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9a397ca2a072d9b1f861f8a6770aa792", "text": "Computational photography systems are becoming increasingly diverse, while computational resources---for example on mobile platforms---are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. 
We also show applications to the nonlinear and nonconvex problem of phase retrieval.", "title": "" }, { "docid": "16635864d498d3ae01b42dd45085f2a1", "text": "The previously developed radially bounded nearest neighbor (RBNN) algorithm have been shown a good performance for 3D point cloud segmentation in indoor scenarios. In outdoor scenarios however it is hard to adapt the original RBNN to an intelligent vehicle directly due to several drawbacks. In this paper, drawbacks of RBNN are addressed and we propose an enhanced RBNN for an intelligent vehicle operating in urban environments by proposing the ground elimination and the distance-varying radius. After the ground removal, objects can be remained to segment without merging the ground and objects, whereas the original RBNN with the fixed radius induced over-segmentation or under-segmentation. We design the distance-varying radius which is varied properly from the distance between a laser scanner and scanning objects. The proposed distance-varying radius is successfully induced to segment objects without over or under segmentation. In the experimental results, we have shown that the enhance RBNN is preferable to segment urban structures in terms of time consumption, and even segmentation rates.", "title": "" }, { "docid": "0424eb9791e726cb3852d0413924a94e", "text": "We address the problem of automatically recognizing artistic movement in digitized paintings. We make the following contributions: Firstly, we introduce a large digitized painting database that contains refined annotations of artistic movement. Secondly, we propose a new system for the automatic categorization that resorts to image descriptions by color structure and novel topographical features as well as to an adapted boosted ensemble of support vector machines. The system manages to isolate initially misclassified images and to correct such errors in further stages of the boosting process. The resulting performance of the system compares favorably with classical solutions in terms of accuracy and even manages to outperform modern deep learning frameworks.", "title": "" }, { "docid": "bf241075beac4fedfb0ad9f8551c652d", "text": "This paper discloses a new very broadband compact transition between double-ridge waveguide and coaxial line. The transition includes an original waveguide to coaxial mode converter and modified impedance transformer. Very good performance is predicted theoretically and confirmed experimentally over a 3:1 bandwidth.", "title": "" } ]
scidocsrr
6a80ac077fdd5a02af9567a309146f62
Botcoin: Monetizing Stolen Cycles
[ { "docid": "bc8b40babfc2f16144cdb75b749e3a90", "text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.", "title": "" } ]
[ { "docid": "24167db00908c65558e8034d94dfb8da", "text": "Due to the wide variety of devices used in computer network systems, cybersecurity plays a major role in securing and improving the performance of the network or system. Although cybersecurity has received a large amount of global interest in recent years, it remains an open research space. Current security solutions in network-based cyberspace provide an open door to attackers by communicating first before authentication, thereby leaving a black hole for an attacker to enter the system before authentication. This article provides an overview of cyberthreats, traditional security solutions, and the advanced security model to overcome current security drawbacks.", "title": "" }, { "docid": "761be34401cc6ef1d8eea56465effca9", "text": "Résumé: Dans cet article, nous proposons une nouvelle approche pour le résumé automatique de textes utilisant un algorithme d'apprentissage numérique spécifique à la tâche d'ordonnancement. L'objectif est d'extraire les phrases d'un document qui sont les plus représentatives de son contenu. Pour se faire, chaque phrase d'un document est représentée par un vecteur de scores de pertinence, où chaque score est un score de similarité entre une requête particulière et la phrase considérée. L'algorithme d'ordonnancement effectue alors une combinaison linéaire de ces scores, avec pour but d'affecter aux phrases pertinentes d'un document des scores supérieurs à ceux des phrases non pertinentes du même document. Les algorithmes d'ordonnancement ont montré leur efficacité en particulier dans le domaine de la méta-recherche, et leur utilisation pour le résumé est motivée par une analogie peut être faite entre la méta-recherche et le résumé automatique qui consiste, dans notre cas, à considérer les similarités des phrases avec les différentes requêtes comme étant des sorties de différents moteurs de recherche. Nous montrons empiriquement que l'algorithme d'ordonnancement a de meilleures performances qu'une approche utilisant un algorithme de classification sur deux corpus distincts.", "title": "" }, { "docid": "0cb237a05e30a4bc419dc374f3a7b55a", "text": "Question-and-answer (Q&A) websites, such as Yahoo! Answers, Stack Overflow and Quora, have become a popular and powerful platform for Web users to share knowledge on a wide range of subjects. This has led to a rapidly growing volume of information and the consequent challenge of readily identifying high quality objects (questions, answers and users) in Q&A sites. Exploring the interdependent relationships among different types of objects can help find high quality objects in Q&A sites more accurately. In this paper, we specifically focus on the ranking problem of co-ranking questions, answers and users in a Q&A website. By studying the tightly connected relationships between Q&A objects, we can gain useful insights toward solving the co-ranking problem. However, co-ranking multiple objects in Q&A sites is a challenging task: a) With the large volumes of data in Q&A sites, it is important to design a model that can scale well; b) The large-scale Q&A data makes extracting supervised information very expensive. In order to address these issues, we propose an unsupervised Network-based Co-Ranking framework (NCR) to rank multiple objects in Q&A sites. Empirical studies on real-world Yahoo! 
Answers datasets demonstrate the effectiveness and the efficiency of the proposed NCR method.", "title": "" }, { "docid": "e0b8b4c2431b92ff878df197addb4f98", "text": "Malware classification is a critical part of the cybersecurity. Traditional methodologies for the malware classification typically use static analysis and dynamic analysis to identify malware. In this paper, a malware classification methodology based on its binary image and extracting local binary pattern (LBP) features is proposed. First, malware images are reorganized into 3 by 3 grids which are mainly used to extract LBP feature. Second, the LBP is implemented on the malware images to extract features in that it is useful in pattern or texture classification. Finally, Tensorflow, a library for machine learning, is applied to classify malware images with the LBP feature. Performance comparison results among different classifiers with different image descriptors such as GIST, a spatial envelope, and the LBP demonstrate that our proposed approach outperforms others.", "title": "" }, { "docid": "dd1fd4f509e385ea8086a45a4379a8b5", "text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.", "title": "" }, { "docid": "173d791e05859ec4cc28b9649c414c62", "text": "Breast cancer is the most common invasive cancer in females worldwide. It usually presents with a lump in the breast with or without other manifestations. Diagnosis of breast cancer depends on physical examination, mammographic findings and biopsy results. Treatment of breast cancer depends on the stage of the disease. Lines of treatment include mainly surgical removal of the tumor followed by radiotherapy or chemotherapy. Other lines including immunotherapy, thermochemotherapy and alternative medicine may represent a hope for breast cancer", "title": "" }, { "docid": "438a9e517a98c6f98f7c86209e601f1b", "text": "One of the most challenging tasks in large-scale multi-label image retrieval is to map images into binary codes while preserving multilevel semantic similarity. Recently, several deep supervised hashing methods have been proposed to learn hash functions that preserve multilevel semantic similarity with deep convolutional neural networks. 
However, these triplet label based methods try to preserve the ranking order of images according to their similarity degrees to the queries while not putting direct constraints on the distance between the codes of very similar images. Besides, the current evaluation criteria are not able to measure the performance of existing hashing methods on preserving fine-grained multilevel semantic similarity. To tackle these issues, we propose a novel Deep Multilevel Semantic Similarity Preserving Hashing (DMSSPH) method to learn compact similarity-preserving binary codes for the huge body of multi-label image data with deep convolutional neural networks. In our approach, we make the best of the supervised information in the form of pairwise labels to maximize the discriminability of output binary codes. Extensive evaluations conducted on several benchmark datasets demonstrate that the proposed method significantly outperforms the state-of-the-art supervised and unsupervised hashing methods at the accuracies of top returned images, especially for shorter binary codes. Meanwhile, the proposed method shows better performance on preserving fine-grained multilevel semantic similarity according to the results under the Jaccard coefficient based evaluation criteria we propose.", "title": "" }, { "docid": "335ac6b7770ec7aaf2ec43ac32c1dc9e", "text": "The biodistribution and pharmacokinetics of (111)In-DTPA-labeled pegylated liposomes (IDLPL) were studied in 17 patients with locally advanced cancers. The patients received 65-107 MBq of IDLPL, and nuclear medicine whole body gamma camera imaging was used to study liposome biodistribution. The t(1/2beta) of IDLPL was 76.1 h. Positive tumor images were obtained in 15 of 17 studies (4 of 5 breast, 5 of 5 head and neck, 3 of 4 bronchus, 2 of 2 glioma, and 1 of 1 cervix cancer). The levels of tumor liposome uptake estimated from regions of interest on gamma camera images were approximately 0.5-3.5% of the injected dose at 72 h. The greatest levels of uptake were seen in the patients with head and neck cancers [33.0 +/- 15.8% ID/kg (percentage of injected dose/kg)]. The uptake in the lung tumors was at an intermediate level (18.3 +/- 5.7% ID/kg), and the breast cancers showed relatively low levels of uptake (5.3 +/- 2.6% ID/kg). These liposome uptake values mirrored the estimated tumor volumes of the various tumor types (36.2 +/- 18.0 cm3 for squamous cell cancer of the head and neck, 114.5 +/- 42.0 cm3 for lung tumors, and 234.7 +/- 101.4 cm3 for breast tumors). In addition, significant localization of the liposomes was seen in the tissues of the reticuloendothelial system (liver, spleen, and bone marrow). One patient with extensive mucocutaneous AIDS-related Kaposi sarcoma was also studied according to a modified protocol, and prominent deposition of the radiolabeled liposomes was demonstrated in these lesions. An additional two patients with resectable head and neck cancer received 26 MBq of IDLPL 48 h before undergoing surgical excision of their tumors. Samples of the tumor, adjacent normal mucosa, muscle, fat, skin, and salivary tissue were obtained at operation. The levels of tumor uptake were 8.8 and 15.9% ID/kg, respectively, with tumor uptake exceeding that in normal mucosa by a mean ratio of 2.3:1, in skin by 3.6:1, in salivary gland by 5.6:1, in muscle by 8.3:1, and in fat by 10.8:1. 
These data strongly support the development of pegylated liposomal agents for the treatment of solid tumors, particularly those of the head and neck.", "title": "" }, { "docid": "77ea0e24066d028d085069cb8f6733e0", "text": "Road scene reconstruction is a fundamental and crucial module at the perception phase for autonomous vehicles, and will influence the later phase, such as object detection, motion planing and path planing. Traditionally, self-driving car uses Lidar, camera or fusion of the two kinds of sensors for sensing the environment. However, single Lidar or camera-based approaches will miss crucial information, and the fusion-based approaches often consume huge computing resources. We firstly propose a conditional Generative Adversarial Networks (cGANs)-based deep learning model that can rebuild rich semantic scene images from upsampled Lidar point clouds only. This makes it possible to remove cameras to reduce resource consumption and improve the processing rate. Simulation on the KITTI dataset also demonstrates that our model can reestablish color imagery from a single Lidar point cloud, and is effective enough for real time sensing on autonomous driving vehicles.", "title": "" }, { "docid": "d786b83c7315b49b6251e27d73983e08", "text": "Memory access efficiency is a key factor in fully utilizing the computational power of graphics processing units (GPUs). However, many details of the GPU memory hierarchy are not released by GPU vendors. In this paper, we propose a novel fine-grained microbenchmarking approach and apply it to three generations of NVIDIA GPUs, namely Fermi, Kepler, and Maxwell, to expose the previously unknown characteristics of their memory hierarchies. Specifically, we investigate the structures of different GPU cache systems, such as the data cache, the texture cache and the translation look-aside buffer (TLB). We also investigate the throughput and access latency of GPU global memory and shared memory. Our microbenchmark results offer a better understanding of the mysterious GPU memory hierarchy, which will facilitate the software optimization and modelling of GPU architectures. To the best of our knowledge, this is the first study to reveal the cache properties of Kepler and Maxwell GPUs, and the superiority of Maxwell in shared memory performance under bank conflict.", "title": "" }, { "docid": "bffd767503e0ab9627fc8637ca3b2efb", "text": "Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function of hyperparameter values. However, probabilistic surrogates require accurate estimates of sufficient statistics (e.g., covariance) of the error distribution and thus need many function evaluations with a sizeable number of hyperparameters. This makes them inefficient for optimizing hyperparameters of deep learning algorithms, which are highly expensive to evaluate. In this work, we propose a new deterministic and efficient hyperparameter optimization method that employs radial basis functions as error surrogates. The proposed mixed integer algorithm, called HORD, searches the surrogate for the most promising hyperparameter values through dynamic coordinate search and requires many fewer function evaluations. 
HORD does well in low dimensions but it is exceptionally better in higher dimensions. Extensive evaluations on MNIST and CIFAR-10 for four deep neural networks demonstrate HORD significantly outperforms the well-established Bayesian optimization methods such as GP, SMAC and TPE. For instance, on average, HORD is more than 6 times faster than GP-EI in obtaining the best configuration of 19 hyperparameters.", "title": "" }, { "docid": "f27e985a97fe7a61ce14c01aa1fd4a41", "text": "We propose a method for learning dictionaries towards sparse approximation of signals defined on vertices of arbitrary graphs. Dictionaries are expected to describe effectively the main spatial and spectral components of the signals of interest, so that their structure is dependent on the graph information and its spectral representation. We first show how operators can be defined for capturing different spectral components of signals on graphs. We then propose a dictionary learning algorithm built on a sparse approximation step and a dictionary update function, which iteratively leads to adapting the structured dictionary to the class of target signals. Experimental results on synthetic and natural signals on graphs demonstrate the efficiency of the proposed algorithm both in terms of sparse approximation and support recovery performance.", "title": "" }, { "docid": "51e65b3be95c641beb9221fb31687adc", "text": "This paper describes a robust localization system, similar to the used by the teams participating in the Robocup Small size league (SLL). The system, developed in Object Pascal, allows real time localization and control of an autonomous omnidirectional mobile robot. The localization algorithm is done resorting to odometry and global vision data fusion, applying an extended Kalman filter, being this method a standard approach for reducing the error in a least squares sense, using measurements from different sources.", "title": "" }, { "docid": "f7a2f86526209860d7ea89d3e7f2b576", "text": "Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem. This paper describes CURATOR, an NLP management framework designed to address some common problems and inefficiencies associated with building NLP process pipelines; and EDISON, an NLP data structure library in Java that provides streamlined interactions with CURATOR and offers a range of useful supporting functionality.", "title": "" }, { "docid": "fcfc8cc9ed49f8fd023957156b86281c", "text": "As consumers spend more time on their mobile devices, a focal retailer’s natural approach is to target potential customers in close proximity to its own location. Yet focal (own) location targeting may cannibalize profits on infra-marginal sales. This study demonstrates the effectiveness of competitive locational targeting, the practice of promoting to consumers near a competitor’s location. The analysis is based on a randomized field experiment in which mobile promotions were sent to customers at three similar shopping areas (competitive, focal, and benchmark locations). The results show that competitive locational targeting can take advantage of heightened demand that a focal retailer would not otherwise capture. Competitive locational targeting produced increasing returns to promotional discount depth, whereas targeting the focal location produced decreasing returns to deep discounts, indicating saturation effects and profit cannibalization. 
These findings are important for marketers, who can use competitive locational targeting to generate incremental sales without cannibalizing profits. While the experiment focuses on the effects of unilateral promotions, it represents the first step in understanding the competitive implications of mobile marketing technologies.", "title": "" }, { "docid": "bdc9bc09af90bd85f64c79cbca766b61", "text": "The inhalation route is frequently used to administer drugs for the management of respiratory diseases such as asthma or chronic obstructive pulmonary disease. Compared with other routes of administration, inhalation offers a number of advantages in the treatment of these diseases. For example, via inhalation, a drug is directly delivered to the target organ, conferring high pulmonary drug concentrations and low systemic drug concentrations. Therefore, drug inhalation is typically associated with high pulmonary efficacy and minimal systemic side effects. The lung, as a target, represents an organ with a complex structure and multiple pulmonary-specific pharmacokinetic processes, including (1) drug particle/droplet deposition; (2) pulmonary drug dissolution; (3) mucociliary and macrophage clearance; (4) absorption to lung tissue; (5) pulmonary tissue retention and tissue metabolism; and (6) absorptive drug clearance to the systemic perfusion. In this review, we describe these pharmacokinetic processes and explain how they may be influenced by drug-, formulation- and device-, and patient-related factors. Furthermore, we highlight the complex interplay between these processes and describe, using the examples of inhaled albuterol, fluticasone propionate, budesonide, and olodaterol, how various sequential or parallel pulmonary processes should be considered in order to comprehend the pulmonary fate of inhaled drugs.", "title": "" }, { "docid": "c2a297417553cb46fd98353d8b8351ac", "text": "Recent advances in methods and techniques enable us to develop an interactive overlay to the global map of science based on aggregated citation relations among the 9,162 journals contained in the Science Citation Index and Social Science Citation Index 2009 combined. The resulting mapping is provided by VOSViewer. We first discuss the pros and cons of the various options: cited versus citing, multidimensional scaling versus spring-embedded algorithms, VOSViewer versus Gephi, and the various clustering algorithms and similarity criteria. Our approach focuses on the positions of journals in the multidimensional space spanned by the aggregated journal-journal citations. A number of choices can be left to the user, but we provide default options reflecting our preferences. Some examples are also provided; for example, the potential of using this technique to assess the interdisciplinarity of organizations and/or document sets.", "title": "" }, { "docid": "0c7221ffca357ba80401551333e1080d", "text": "The effects of temperature and current on the resistance of small geometry silicided contact structures have been characterized and modeled for the first time. Both, temperature and high current induced self heating have been shown to cause contact resistance lowering which can be significant in the performance of advanced ICs. It is demonstrated that contact-resistance sensitivity to temperature and current is controlled by the silicide thickness which influences the interface doping concentration, N. Behavior of W-plug and force-fill (FF) Al plug contacts have been investigated in detail. 
A simple model has been formulated which directly correlates contact resistance to temperature and N. Furthermore, thermal impedance of these contact structures have been extracted and a critical failure temperature demonstrated that can be used to design robust contact structures.", "title": "" }, { "docid": "9fdecc8854f539ddf7061c304616130b", "text": "This paper describes the pricing strategy model deployed at Airbnb, an online marketplace for sharing home and experience. The goal of price optimization is to help hosts who share their homes on Airbnb set the optimal price for their listings. In contrast to conventional pricing problems, where pricing strategies are applied to a large quantity of identical products, there are no \"identical\" products on Airbnb, because each listing on our platform offers unique values and experiences to our guests. The unique nature of Airbnb listings makes it very difficult to estimate an accurate demand curve that's required to apply conventional revenue maximization pricing strategies.\n Our pricing system consists of three components. First, a binary classification model predicts the booking probability of each listing-night. Second, a regression model predicts the optimal price for each listing-night, in which a customized loss function is used to guide the learning. Finally, we apply additional personalization logic on top of the output from the second model to generate the final price suggestions. In this paper, we focus on describing the regression model in the second stage of our pricing system. We also describe a novel set of metrics for offline evaluation. The proposed pricing strategy has been deployed in production to power the Price Tips and Smart Pricing tool on Airbnb. Online A/B testing results demonstrate the effectiveness of the proposed strategy model.", "title": "" }, { "docid": "9cc30ebeb2b51dbf70732a8df7c7fda2", "text": "This paper provides a summary of the 2007 Mars Design Reference Architecture 5.0 (DRA 5.0) [1], which is the latest in a series of NASA Mars reference missions. It provides a vision of one potential approach to human Mars exploration, including how Constellation systems could be used. The strategy and example implementation concepts that are described here should not be viewed as constituting a formal plan for the human exploration of Mars, but rather provide a common framework for future planning of systems concepts, technology development, and operational testing as well as potential Mars robotic missions, research that is conducted on the International Space Station, and future potential lunar exploration missions. This summary of the Mars DRA 5.0 provides an overview of the overall mission approach, surface strategy and exploration goals, as well as the key systems and challenges for the first three concepts for human missions to Mars.1,2", "title": "" } ]
scidocsrr
1f93b7fb042ffae8d131829d82f9e999
Neural systems for recognizing emotion
[ { "docid": "87a0972d43efa272887c3bcc70cab656", "text": "We used event-related fMRI to assess whether brain responses to fearful versus neutral faces are modulated by spatial attention. Subjects performed a demanding matching task for pairs of stimuli at prespecified locations, in the presence of task-irrelevant stimuli at other locations. Faces or houses unpredictably appeared at the relevant or irrelevant locations, while the faces had either fearful or neutral expressions. Activation of fusiform gyri by faces was strongly affected by attentional condition, but the left amygdala response to fearful faces was not. Right fusiform activity was greater for fearful than neutral faces, independently of the attention effect on this region. These results reveal differential influences on face processing from attention and emotion, with the amygdala response to threat-related expressions unaffected by a manipulation of attention that strongly modulates the fusiform response to faces.", "title": "" } ]
[ { "docid": "78fc46165449f94e75e70a2654abf518", "text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.", "title": "" }, { "docid": "87f6ede5af3b95933d8db69c6551588e", "text": "Circumcision remains the most common operation performed on males. Although, not technically difficult, it is accompanied by a rate of morbidity and can result in complications ranging from trivial to tragic. The reported incidence of complications varies from 0.1% to 35% the most common being infection, bleeding and failure to remove the appropriate amount of foreskin. Forty patients suffering from different degrees of circumcision complications and their treatment are presented. In all patients satisfactory functional and cosmetic results were achieved. Whether it is done for ritualistic, religious or medical reasons circumcision should be performed by a fully trained surgeon using a proper technique as follows 1) adequate use of antiseptic agents; 2) complete separation of inner preputial epithelium from the glans; 3) marking the skin to be removed at the beginning of operation; 4) careful attention to the baby’s voiding within the first 6 to 8 h after circumcision; 5) removal or replacement of the dressings on the day following circumcision.", "title": "" }, { "docid": "6e5e6b361d113fa68b2ca152fbf5b194", "text": "Spectral learning algorithms have recently become popular in data-rich domains, driven in part by recent advances in large scale randomized SVD, and in spectral estimation of Hidden Markov Models. Extensions of these methods lead to statistical estimation algorithms which are not only fast, scalable, and useful on real data sets, but are also provably correct. Following this line of research, we propose four fast and scalable spectral algorithms for learning word embeddings – low dimensional real vectors (called Eigenwords) that capture the “meaning” of words from their context. All the proposed algorithms harness the multi-view nature of text data i.e. the left and right context of each word, are fast to train and have strong theoretical properties. Some of the variants also have lower sample complexity and hence higher statistical power for rare words. We provide theory which establishes relationships between these algorithms and optimality criteria for the estimates they provide. We also perform thorough qualitative and quantitative evaluation of Eigenwords showing that simple linear approaches give performance comparable to or superior than the state-of-the-art non-linear deep learning based methods.", "title": "" }, { "docid": "e35d00d5b7cedc937e34526b6c73ffc6", "text": "Unintentional falls can cause severe injuries and even death, especially if no immediate assistance is given. The aim of Fall Detection Systems (FDSs) is to detect an occurring fall. This information can be used to trigger the necessary assistance in case of injury. 
This can be done by using either ambient-based sensors, e.g. cameras, or wearable devices. The aim of this work is to study the technical aspects of FDSs based on wearable devices and artificial intelligence techniques, in particular Deep Learning (DL), to implement an effective algorithm for on-line fall detection. The proposed classifier is based on a Recurrent Neural Network (RNN) model with underlying Long Short-Term Memory (LSTM) blocks. The method is tested on the publicly available SisFall dataset, with extended annotation, and compared with the results obtained by the SisFall authors.", "title": "" }, { "docid": "ec58969d15eb194fd7cb57843124b425", "text": "Fully convolutional neural networks give accurate, per-pixel prediction for input images and have applications like semantic segmentation. However, a typical FCN usually requires lots of floating point computation and large run-time memory, which effectively limits its usability. We propose a method to train Bit Fully Convolution Network (BFCN), a fully convolutional neural network that has low bit-width weights and activations. Because most of its computation-intensive convolutions are accomplished between low bit-width numbers, a BFCN can be accelerated by an efficient bit-convolution implementation. On CPU, the dot product operation between two bit vectors can be reduced to bitwise operations and popcounts, which can offer much higher throughput than 32-bit multiplications and additions. To validate the effectiveness of BFCN, we conduct experiments on the PASCAL VOC 2012 semantic segmentation task and Cityscapes. Our BFCN with 1-bit weights and 2-bit activations, which runs 7.8x faster on CPU or requires less than 1% resources on FPGA, can achieve comparable performance as the 32-bit counterpart. Introduction Deep convolutional neural networks (DCNN), with its recent progress, has considerably changed the landscape of computer vision (Krizhevsky, Sutskever, and Hinton 2012) and many other fields. To achieve close to state-of-the-art performance, a DCNN usually has a lot of parameters and high computational complexity, which may easily overwhelm resource capability of embedded devices. Substantial research efforts have been invested in speeding up DCNNs on both general-purpose (Vanhoucke, Senior, and Mao 2011; Gong et al. 2014; Han et al. 2015) and specialized computer hardware (Farabet et al. 2009; Farabet et al. 2011; Pham et al. 2012; Chen et al. 2014b; Chen et al. 2014c; Zhang et al. 2015a). Recent progress in using low bit-width networks has considerably reduced parameter storage size and computation burden by using 1-bit weight and low bit-width activations. In particular, in BNN (Kim and Smaragdis 2016) and XNOR-net (Rastegari et al. 2016), during the forward pass the most computationally expensive convolutions can Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. network VOC12 Cityscapes speedup 32-bit FCN 69.8% 62.1% 1x 2-bit BFCN 67.0% 60.3% 4.1x 1-2 BFCN 62.8% 57.4% 7.8x Table 1: Summary results of our BFCNs. Performance measure in mean IoU. 
be done by combining xnor and popcount operations, thanks to the following equivalence when x and y are bit vectors:", "title": "" }, { "docid": "61b76ad4241e9a6e9c04b29bbbcf9ec1", "text": "This paper reports a study that attempts to explore how using mobile technologies in direct physical interaction with space and with other players can be combined with principles of engagement and self-motivation to create a powerful and engaging learning experience. We developed a mobile gaming experience designed to encourage the development of children’s conceptual understanding of animal behaviour. Ten children (five boys and five girls) aged between 11 and 12 years played and explored the game. The findings from this study offer interesting insights into the extent to which mobile gaming might be employed as a tool for supporting learning. It also highlights a number of major challenges that this format raises for the organisation of learning within schools and the design of such resources.", "title": "" }, { "docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04", "text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.", "title": "" }, { "docid": "23ff4a40f9a62c8a26f3cc3f8025113d", "text": "In the early ages of implantable devices, radio frequency (RF) technologies were not commonplace due to the challenges stemming from the inherent nature of biological tissue boundaries. As technology improved and our understanding matured, the benefit of RF in biomedical applications surpassed the implementation challenges and is thus becoming more widespread. The fundamental challenge is due to the significant electromagnetic (EM) effects of the body at high frequencies. The EM absorption and impedance boundaries of biological tissue result in significant reduction of power and signal integrity for transcutaneous propagation of RF fields. Furthermore, the dielectric properties of the body tissue surrounding the implant must be accounted for in the design of its RF components, such as antennas and inductors, and the tissue is often heterogeneous and the properties are highly variable. Additional challenges for implantable applications include the need for miniaturization, power minimization, and often accounting for a conductive casing due to biocompatibility and hermeticity requirements [1]?[3]. Today, wireless technologies are essentially a must have in most electrical implants due to the need to communicate with the device and even transfer usable energy to the implant [4], [5]. 
Low-frequency wireless technologies face fewer challenges in this implantable setting than its higher frequency, or RF, counterpart, but are limited to much lower communication speeds and typically have a very limited operating distance. The benefits of high-speed communication and much greater communication distances in biomedical applications have spawned numerous wireless standards committees, and the U.S. Federal Communications Commission (FCC) has allocated numerous frequency bands for medical telemetry as well as those to specifically target implantable applications. The development of analytical models, advanced EM simulation software, and representative RF human phantom recipes has significantly facilitated design and optimization of RF components for implantable applications.", "title": "" }, { "docid": "5c512bf8cb37f3937b27855e03e111d6", "text": "Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in statistical learning of latent variable models and in data mining. In this paper, we propose fast and randomized tensor CP decomposition algorithms based on sketching. We build on the idea of count sketches, but introduce many novel ideas which are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors. Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares. We also design novel colliding hashes for symmetric tensors to further save time in computing the sketches. We then combine these sketching ideas with existing whitening and tensor power iterative techniques to obtain the fastest algorithm on both sparse and dense tensors. The quality of approximation under our method does not depend on properties such as sparsity, uniformity of elements, etc. We apply the method for topic modeling and obtain competitive results.", "title": "" }, { "docid": "ed7a114d02244b7278c8872c567f1ba6", "text": "We present a new visualization, called the Table Lens, for visualizing and making sense of large tables. The visualization uses a focus+context (fisheye) technique that works effectively on tabular information because it allows display of crucial label information and multiple distal focal areas. In addition, a graphical mapping scheme for depicting table contents has been developed for the most widespread kind of tables, the cases-by-variables table. The Table Lens fuses symbolic and graphical representations into a single coherent view that can be fluidly adjusted by the user. This fusion and interactivity enables an extremely rich and natural style of direct manipulation exploratory data analysis.", "title": "" }, { "docid": "c57911c03df15837a800ec491f7ca597", "text": "This paper presents a novel unifying framework of anytime sparse Gaussian process regression (SGPR) models that can produce good predictive performance fast and improve their predictive performance over time. Our proposed unifying framework reverses the variational inference procedure to theoretically construct a non-trivial, concave functional that is maximized at the predictive distribution of any SGPR model of our choice. As a result, a stochastic natural gradient ascent method can be derived that involves iteratively following the stochastic natural gradient of the functional to improve its estimate of the predictive distribution of the chosen SGPR model and is guaranteed to achieve asymptotic convergence to it. 
Interestingly, we show that if the predictive distribution of the chosen SGPR model satisfies certain decomposability conditions, then the stochastic natural gradient is an unbiased estimator of the exact natural gradient and can be computed in constant time (i.e., independent of data size) at each iteration. We empirically evaluate the trade-off between the predictive performance vs. time efficiency of the anytime SGPR models on two real-world million-sized datasets.", "title": "" }, { "docid": "160a866ca769a847138c5afc7f34db38", "text": "STUDY OBJECTIVE\nThe purpose of this article is to review the published literature and perform a systematic review to evaluate the effectiveness and feasibility of the use of a hysteroscope for vaginoscopy or hysteroscopy in diagnosing and establishing therapeutic management of adolescent patients with gynecologic problems.\n\n\nDESIGN\nA systematic review.\n\n\nSETTING\nPubMed, Web of science, and Scopus searches were performed for the period up to September 2013 to identify all the eligible studies. Additional relevant articles were identified using citations within these publications.\n\n\nPARTICIPANTS\nFemale adolescents aged 10 to 18 years.\n\n\nRESULTS\nA total of 19 studies were included in the systematic review. We identified 19 case reports that described the application of a hysteroscope as treatment modality for some gynecologic conditions or diseases in adolescents. No original study was found matching the age of this specific population.\n\n\nCONCLUSIONS\nA hysteroscope is a useful substitute for vaginoscopy or hysteroscopy for the exploration of the immature genital tract and may help in the diagnosis and treatment of gynecologic disorders in adolescent patients with an intact hymen, limited vaginal access, or a narrow vagina.", "title": "" }, { "docid": "65d3f781e9b32d9b070c88191704fbce", "text": "The richest information about different emotional states and thinking styles of the person is carried by signature The signature analysis is one of the most effective and reliable indicator for prediction of personality. As it reveals the true personality which includes fears, honesty and many other individual personality traits. This can happen with the help of few features like underscores below signature, appearance of dot on the letter, curved start, ending stroke, and streaks disconnected. This paper is focused on above mentioned features. The database has 60 signatures of 10 different persons with 6 samples of each and is properly scanned at 500dpi resolution scanner. The performance is evaluated by splitting a signature into five categories-left, right, upper, middle and bottom. An artificial neural network and structural identification algorithm is used for the prediction of personality and the obtained accuracy rate is 100%, 95%, 94%, 96% and 92% respectively The significance of work is to reveal personality traits in criminology, medical science and counseling.", "title": "" }, { "docid": "bf81781cebd7ec0b92132b87b36dafcb", "text": "3D fingerprint recognition is an emerging technology in biometrics. However, current 3D fingerprint acquisition systems are usually with complex structure and high-cost and that has become the main obstacle for its popularization. In this work, we present a novel photometric method and an experimental setup for real-time 3D fingerprint reconstruction. The proposed system consists of seven LED lights that mounted around one camera. 
In the surface reflectance modeling of finger surface, a simplified Hanrahan-Krueger model is introduced. And a neural network approach is used to solve the model for accurate estimation of surface normals. A calibration method is also proposed to determine the lighting directions as well as the correction of the lighting fields. Moreover, to stand out the fingerprint ridge features and get better visual effects, a linear transformation is applied to the recovered normals field. Experiments on live fingerprint and the comparison with traditional photometric stereo algorithm are used to demonstrate its high performance.", "title": "" }, { "docid": "b0d9c5716052e9cfe9d61d20e5647c8c", "text": "We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.", "title": "" }, { "docid": "b19808098620568e7005bb655345e407", "text": "Spanner is Google’s scalable, multiversion, globally distributed, and synchronously replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. This article describes how Spanner is structured, its feature set, the rationale underlying various design decisions, and a novel time API that exposes clock uncertainty. This API and its implementation are critical to supporting external consistency and a variety of powerful features: nonblocking reads in the past, lock-free snapshot transactions, and atomic schema changes, across all of Spanner.", "title": "" }, { "docid": "5ea65120d42f75d594d73e92cc82dc48", "text": "There is a new generation of emoticons, called emojis, that is increasingly being used in mobile communications and social media. In the past two years, over ten billion emojis were used on Twitter. Emojis are Unicode graphic symbols, used as a shorthand to express concepts and ideas. In contrast to the small number of well-known emoticons that carry clear emotional contents, there are hundreds of emojis. But what are their emotional contents? We provide the first emoji sentiment lexicon, called the Emoji Sentiment Ranking, and draw a sentiment map of the 751 most frequently used emojis. The sentiment of the emojis is computed from the sentiment of the tweets in which they occur. We engaged 83 human annotators to label over 1.6 million tweets in 13 European languages by the sentiment polarity (negative, neutral, or positive). About 4% of the annotated tweets contain emojis. The sentiment analysis of the emojis allows us to draw several interesting conclusions. It turns out that most of the emojis are positive, especially the most popular ones. 
The sentiment distribution of the tweets with and without emojis is significantly different. The inter-annotator agreement on the tweets with emojis is higher. Emojis tend to occur at the end of the tweets, and their sentiment polarity increases with the distance. We observe no significant differences in the emoji rankings between the 13 languages and the Emoji Sentiment Ranking. Consequently, we propose our Emoji Sentiment Ranking as a European language-independent resource for automated sentiment analysis. Finally, the paper provides a formalization of sentiment and a novel visualization in the form of a sentiment bar.", "title": "" }, { "docid": "4fd93d479f3fc9f223fcdd232f4b5453", "text": "Tarlov cysts and nerve roots anomalies usually involve lumbosacral roots and are often asymptomatic. MRI has enabled recognition of many conditions that used to be missed by CT or myelography investigations performed for back and leg pain. However, even without additional compressive impingement (disc hernia, spondylolisthesis or lumbar canal stenosis) these anomalies can be responsible for sciatica, motor deficit and bladder sphincter dysfunction. Tarlov cysts are perinervous dilatations of the dorsal root ganglion. CT and especially MRI can reveal these cysts and their precise relations with the neighboring structures. Delayed filling of the cysts can be visualized on the myelogram. MRI is more sensitive than CT myelography for a positive diagnosis of nerve root anomalies, a differential diagnosis with disc hernia and classification of these anomalies. Surgical treatment is indicated for symptomatic Tarlov cysts and nerve root anomalies resistant to conservative treatment. Better outcome is observed in patients with an additional compressive impingement component. We report two cases of sciatica: one caused by Tarlov cysts diagnosed by MRI and the other by nerve root anomalies diagnosed by CT myelography. In both cases, conservative treatment was undertaken. The clinical, radiological and therapeutic aspects of these disorders are discussed.", "title": "" }, { "docid": "a12538c128f7cd49f2561170f6aaf0ac", "text": "We also define qp(kp) = 0, k ∈ Z. Fermat quotients appear and play a major role in various questions of computational and algebraic number theory and thus their distribution modulo p has been studied in a number of works; see, for example, [1, 5, 6, 7, 8, 9, 10, 11, 13, 15, 16, 17, 18] and the references therein. In particular, the image set Ip(U) = {qp(u) : 1 ≤ u ≤ U} has been investigated in some of these works. Let Ip(U) = #Ip(U) be the cardinality of Ip(U). It is well known (see, for example, [6, Section 2]) that", "title": "" } ]
scidocsrr
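As a small, hedged illustration of the Fermat quotients discussed in the truncated passage at the end of the record above: the snippet below assumes the standard definition q_p(u) = ((u^(p-1) - 1)/p) mod p for gcd(u, p) = 1 (the passage itself only restates q_p(kp) = 0), and the function names are illustrative only, not taken from the cited works.

```python
def fermat_quotient(u: int, p: int) -> int:
    """Fermat quotient q_p(u) for an odd prime p, assuming the usual definition
    q_p(u) = ((u**(p-1) - 1) // p) % p when gcd(u, p) = 1, and q_p(kp) = 0."""
    if u % p == 0:
        return 0
    # By Fermat's little theorem, p divides u**(p-1) - 1, so the division is exact.
    return ((pow(u, p - 1) - 1) // p) % p

def image_set(p: int, U: int) -> set:
    """I_p(U) = { q_p(u) : 1 <= u <= U }, the image set mentioned in the passage."""
    return {fermat_quotient(u, p) for u in range(1, U + 1)}

if __name__ == "__main__":
    p = 13
    print(len(image_set(p, p * p)), "distinct values of q_p(u) for 1 <= u <= p^2")
```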
d8f9b990587bf5674d33191f25c9e0e4
A Low-Rank Approximation Approach to Learning Joint Embeddings of News Stories and Images for Timeline Summarization
[ { "docid": "346e160403ff9eb55c665f6cb8cca481", "text": "Tasks in visual analytics differ from typical information retrieval tasks in fundamental ways. A critical part of a visual analytics is to ask the right questions when dealing with a diverse collection of information. In this article, we introduce the design and application of an integrated exploratory visualization system called Storylines. Storylines provides a framework to enable analysts visually and systematically explore and study a body of unstructured text without prior knowledge of its thematic structure. The system innovatively integrates latent semantic indexing, natural language processing, and social network analysis. The contributions of the work include providing an intuitive and directly accessible representation of a latent semantic space derived from the text corpus, an integrated process for identifying salient lines of stories, and coordinated visualizations across a spectrum of perspectives in terms of people, locations, and events involved in each story line. The system is tested with the 2006 VAST contest data, in particular, the portion of news articles.", "title": "" }, { "docid": "c8768e560af11068890cc097f1255474", "text": "Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.", "title": "" } ]
[ { "docid": "93297115eb5153a41a79efe582bd34b1", "text": "Abslract Bayesian probabilily theory provides a unifying framework for dara modelling. In this framework the overall aims are to find models that are well-matched to, the &a, and to use &se models to make optimal predictions. Neural network laming is interpreted as an inference of the most probable parameters for Ihe model, given the training data The search in model space (i.e., the space of architectures, noise models, preprocessings, regularizes and weight decay constants) can then also be treated as an inference problem, in which we infer the relative probability of alternative models, given the data. This review describes practical techniques based on G ~ ~ s s ~ M approximations for implementation of these powerful methods for controlling, comparing and using adaptive network$.", "title": "" }, { "docid": "19fe8c6452dd827ffdd6b4c6e28bc875", "text": "Motivation for the investigation of position and waypoint controllers is the demand for Unattended Aerial Systems (UAS) capable of fulfilling e.g. surveillance tasks in contaminated or in inaccessible areas. Hence, this paper deals with the development of a 2D GPS-based position control system for 4 Rotor Helicopters able to keep positions above given destinations as well as to navigate between waypoints while minimizing trajectory errors. Additionally, the novel control system enables permanent full speed flight with reliable altitude keeping considering that the resulting lift is decreasing while changing pitch or roll angles for position control. In the following chapters the control procedure for position control and waypoint navigation is described. The dynamic behavior was simulated by means of Matlab/Simulink and results are shown. Further, the control strategies were implemented on a flight demonstrator for validation, experimental results are provided and a comparison is discussed.", "title": "" }, { "docid": "4918abc325eae43369e9173c2c75706b", "text": "We propose a fast regression model for practical single image super-resolution based on in-place examples, by leveraging two fundamental super-resolution approaches- learning from an external database and learning from self-examples. Our in-place self-similarity refines the recently proposed local self-similarity by proving that a patch in the upper scale image have good matches around its origin location in the lower scale image. Based on the in-place examples, a first-order approximation of the nonlinear mapping function from low-to high-resolution image patches is learned. Extensive experiments on benchmark and real-world images demonstrate that our algorithm can produce natural-looking results with sharp edges and preserved fine details, while the current state-of-the-art algorithms are prone to visual artifacts. Furthermore, our model can easily extend to deal with noise by combining the regression results on multiple in-place examples for robust estimation. The algorithm runs fast and is particularly useful for practical applications, where the input images typically contain diverse textures and they are potentially contaminated by noise or compression artifacts.", "title": "" }, { "docid": "607247339e5bb0299f06db3104deef77", "text": "This paper discusses the advantages of using the ACT-R cognitive architecture over the Prolog programming language for the research and development of a large-scale, functional, cognitively motivated model of natural language analysis. 
Although Prolog was developed for Natural Language Processing (NLP), it lacks any probabilistic mechanisms for dealing with ambiguity and relies on failure detection and algorithmic backtracking to explore alternative analyses. These mechanisms are problematic for handling ill-formed or unexpected inputs, often resulting in an exploration of the entire search space, which becomes intractable as the complexity and variability of the allowed inputs and corresponding grammar grow. By comparison, ACT-R provides context dependent and probabilistic mechanisms which allow the model to incrementally pursue the best analysis. When combined with a nonmonotonic context accommodation mechanism that supports modest adjustment of the evolving analysis to handle cases where the locally best analysis is not globally preferred, the result is an efficient pseudo-deterministic mechanism that obviates the need for failure detection and backtracking, aligns with our basic understanding of Human Language Processing (HLP) and is scalable to broad coverage. The successful transition of the natural language analysis model from Prolog to ACT-R suggests that a cognitively motivated approach to natural language analysis may also be suitable for achieving a functional capability.", "title": "" }, { "docid": "582aed7bc35603a67d5ff2e5c6e9da28", "text": "In this article we use machine activity metrics to automatically distinguish between malicious and trusted portable executable software samples. The motivation stems from the growth of cyber attacks using techniques that have been employed to surreptitiously deploy Advanced Persistent Threats (APTs). APTs are becoming more sophisticated and able to obfuscate much of their identifiable features through encryption, custom code bases and inmemory execution. Our hypothesis is that we can produce a high degree of accuracy in distinguishing malicious from trusted samples using Machine Learning with features derived from the inescapable footprint left behind on a computer system during execution. This includes CPU, RAM, Swap use and network traffic at a count level of bytes and packets. These features are continuous and allow us to be more flexible with the classification of samples than discrete features such as API calls (which can also be obfuscated) that form the main feature of the extant literature. We use these continuous data and develop a novel classification method using Self Organizing Feature Maps to reduce over fitting during training through the ability to create unsupervised clusters of similar “behaviour” that are subsequently used as features for classification, rather than using the raw data. We compare our method to a set of machine classification methods that have been applied in previous research and demonstrate an increase of between 7.24% and 25.68% in classification accuracy using our method and an unseen dataset over the range of other machine classification methods that have been applied in previous research. © 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).", "title": "" }, { "docid": "cca235b52cc6e7b52febecf15a1ad599", "text": "In this work, we investigated the use of noninvasive, targeted transcutaneous electrical nerve stimulation (TENS) of peripheral nerves to provide sensory feedback to two amputees, one with targeted sensory reinnervation (TSR) and one without TSR. 
A major step in developing a closed-loop prosthesis is providing the sense of touch back to the amputee user. We investigated the effect of targeted nerve stimulation amplitude, pulse width, and frequency on stimulation perception. We discovered that both subjects were able to reliably detect stimulation patterns with pulses less than 1 ms. We utilized the psychophysical results to produce a subject specific stimulation pattern using a leaky integrate and fire (LIF) neuron model from force sensors on a prosthetic hand during a grasping task. For the first time, we show that TENS is able to provide graded sensory feedback at multiple sites in both TSR and non-TSR amputees while using behavioral results to tune a neuromorphic stimulation pattern driven by a force sensor output from a prosthetic hand.", "title": "" }, { "docid": "7cad8fccadff2d8faa8a372c6237469e", "text": "In the spirit of the tremendous success of deep Convolutional Neural Networks as generic feature extractors from images, we propose Timenet : a multilayered recurrent neural network (RNN) trained in an unsupervised manner to extract features from time series. Fixed-dimensional vector representations or embeddings of variable-length sentences have been shown to be useful for a variety of document classification tasks. Timenet is the encoder network of an auto-encoder based on sequence-to-sequence models that transforms varying length time series to fixed-dimensional vector representations. Once Timenet is trained on diverse sets of time series, it can then be used as a generic off-the-shelf feature extractor for time series. We train Timenet on time series from 24 datasets belonging to various domains from the UCR Time Series Classification Archive, and then evaluate embeddings from Timenet for classification on 30 other datasets not used for training the Timenet. We observe that a classifier learnt over the embeddings obtained from a pre-trained Timenet yields significantly better performance compared to (i) a classifier learnt over the embeddings obtained from the encoder network of a domain-specific auto-encoder, as well as (ii) a nearest neighbor classifier based on the well-known and effective Dynamic Time Warping (DTW) distance measure. We also observe that a classifier trained on embeddings from Timenet give competitive results in comparison to a DTW-based classifier even when using significantly smaller set of labeled training data, providing further evidence that Timenet embeddings are robust. Finally, t-SNE visualizations of Timenet embeddings show that time series from different classes form well-separated clusters.", "title": "" }, { "docid": "61b6cf4bc86ae9a817f6e809fdf59ad2", "text": "In the last few years, phishing scams have rapidly grown posing huge threat to global Internet security. Today, phishing attack is one of the most common and serious threats over Internet where cyber attackers try to steal user’s personal or financial credentials by using either malwares or social engineering. Detection of phishing attacks with high accuracy has always been an issue of great interest. Recent developments in phishing detection techniques have led to various new techniques, specially designed for phishing detection where accuracy is extremely important. Phishing problem is widely present as there are several ways to carry out such an attack, which implies that one solution is not adequate to address it. Two main issues are addressed in our paper. 
First, we discuss in detail phishing attacks, history of phishing attacks and motivation of attacker behind performing this attack. In addition, we also provide taxonomy of various types of phishing attacks. Second, we provide taxonomy of various solutions proposed in the literature to detect and defend from phishing attacks. In addition, we also discuss various issues and challenges faced in dealing with phishing attacks and spear phishing and how phishing is now targeting the emerging domain of IoT. We discuss various tools and datasets that are used by the researchers for the evaluation of their approaches. This provides better understanding of the problem, current solution space and future research scope to efficiently deal with such attacks.", "title": "" }, { "docid": "9787d99954114de7ddd5a58c18176380", "text": "This paper presents a system for acoustic event detection in recordings from real life environments. The events are modeled using a network of hidden Markov models; their size and topology is chosen based on a study of isolated events recognition. We also studied the effect of ambient background noise on event classification performance. On real life recordings, we tested recognition of isolated sound events and event detection. For event detection, the system performs recognition and temporal positioning of a sequence of events. An accuracy of 24% was obtained in classifying isolated sound events into 61 classes. This corresponds to the accuracy of classifying between 61 events when mixed with ambient background noise at 0dB signal-to-noise ratio. In event detection, the system is capable of recognizing almost one third of the events, and the temporal positioning of the events is not correct for 84% of the time.", "title": "" }, { "docid": "a97f71e0d5501add1ae08eeee5378045", "text": "Machine learning is being implemented in bioinformatics and computational biology to solve challenging problems emerged in the analysis and modeling of biological data such as DNA, RNA, and protein. The major problems in classifying protein sequences into existing families/superfamilies are the following: the selection of a suitable sequence encoding method, the extraction of an optimized subset of features that possesses significant discriminatory information, and the adaptation of an appropriate learning algorithm that classifies protein sequences with higher classification accuracy. The accurate classification of protein sequence would be helpful in determining the structure and function of novel protein sequences. In this article, we have proposed a distance-based sequence encoding algorithm that captures the sequence’s statistical characteristics along with amino acids sequence order information. A statistical metric-based feature selection algorithm is then adopted to identify the reduced set of features to represent the original feature space. The performance of the proposed technique is validated using some of the best performing classifiers implemented previously for protein sequence classification. An average classification accuracy of 92% was achieved on the yeast protein sequence data set downloaded from the benchmark UniProtKB database.", "title": "" }, { "docid": "5feea8e7bcb96c826bdf19922e47c922", "text": "This chapter is a review of conceptions of knowledge as they appear in selected bodies of research on teaching. 
Writing as a philosopher of education, my interest is in how notions of knowledge are used and analyzed in a number of research programs that study teachers and their teaching. Of particular interest is the growing research literature on the knowledge that teachers generate as a result of their experience as teachers, in contrast to the knowledge of teaching that is generated by those who specialize in research on teaching. This distinction, as will become apparent, is one that divides more conventional scientific approaches to the study of teaching from what might be thought of as alternative approaches.", "title": "" }, { "docid": "de1ed7fbb69e5e33e17d1276d265a3e1", "text": "Abnormal glucose metabolism and enhanced oxidative stress accelerate cardiovascular disease, a chronic inflammatory condition causing high morbidity and mortality. Here, we report that in monocytes and macrophages of patients with atherosclerotic coronary artery disease (CAD), overutilization of glucose promotes excessive and prolonged production of the cytokines IL-6 and IL-1β, driving systemic and tissue inflammation. In patient-derived monocytes and macrophages, increased glucose uptake and glycolytic flux fuel the generation of mitochondrial reactive oxygen species, which in turn promote dimerization of the glycolytic enzyme pyruvate kinase M2 (PKM2) and enable its nuclear translocation. Nuclear PKM2 functions as a protein kinase that phosphorylates the transcription factor STAT3, thus boosting IL-6 and IL-1β production. Reducing glycolysis, scavenging superoxide and enforcing PKM2 tetramerization correct the proinflammatory phenotype of CAD macrophages. In essence, PKM2 serves a previously unidentified role as a molecular integrator of metabolic dysfunction, oxidative stress and tissue inflammation and represents a novel therapeutic target in cardiovascular disease.", "title": "" }, { "docid": "cd2e7e24b4d8fc12df4f866b4c4e9da2", "text": "The extracellular matrix (ECM) is a major component of tumors and a significant contributor to cancer progression. In this study, we use proteomics to investigate the ECM of human mammary carcinoma xenografts and show that primary tumors of differing metastatic potential differ in ECM composition. Both tumor cells and stromal cells contribute to the tumor matrix and tumors of differing metastatic ability differ in both tumor- and stroma-derived ECM components. We define ECM signatures of poorly and highly metastatic mammary carcinomas and these signatures reveal up-regulation of signaling pathways including TGFβ and VEGF. We further demonstrate that several proteins characteristic of highly metastatic tumors (LTBP3, SNED1, EGLN1, and S100A2) play causal roles in metastasis, albeit at different steps. Finally we show that high expression of LTBP3 and SNED1 correlates with poor outcome for ER(-)/PR(-)breast cancer patients. This study thus identifies novel biomarkers that may serve as prognostic and diagnostic tools. DOI: http://dx.doi.org/10.7554/eLife.01308.001.", "title": "" }, { "docid": "951532d8e0bea472139298de9c5e9842", "text": "Alzheimer's disease (AD), the most common form of dementia, shares many aspects of abnormal brain aging. We present a novel magnetic resonance imaging (MRI)-based biomarker that predicts the individual progression of mild cognitive impairment (MCI) to AD on the basis of pathological brain aging patterns. 
By employing kernel regression methods, the expression of normal brain-aging patterns forms the basis to estimate the brain age of a given new subject. If the estimated age is higher than the chronological age, a positive brain age gap estimation (BrainAGE) score indicates accelerated atrophy and is considered a risk factor for conversion to AD. Here, the BrainAGE framework was applied to predict the individual brain ages of 195 subjects with MCI at baseline, of which a total of 133 developed AD during 36 months of follow-up (corresponding to a pre-test probability of 68%). The ability of the BrainAGE framework to correctly identify MCI-converters was compared with the performance of commonly used cognitive scales, hippocampus volume, and state-of-the-art biomarkers derived from cerebrospinal fluid (CSF). With accuracy rates of up to 81%, BrainAGE outperformed all cognitive scales and CSF biomarkers in predicting conversion of MCI to AD within 3 years of follow-up. Each additional year in the BrainAGE score was associated with a 10% greater risk of developing AD (hazard rate: 1.10 [CI: 1.07-1.13]). Furthermore, the post-test probability was increased to 90% when using baseline BrainAGE scores to predict conversion to AD. The presented framework allows an accurate prediction even with multicenter data. Its fast and fully automated nature facilitates the integration into the clinical workflow. It can be exploited as a tool for screening as well as for monitoring treatment options.", "title": "" }, { "docid": "ce8729f088aaf9f656c9206fc67ff4bd", "text": "Traditional passive radar detectors compute cross correlation of the raw data in the reference and surveillance channels. However, there is no optimality guarantee for this detector in the presence of a noisy reference. Here, we develop a new detector that utilizes a test statistic based on the cross correlation of the principal left singular vectors of the reference and surveillance signal-plus-noise matrices. This detector offers better performance by exploiting the inherent low-rank structure when the transmitted signals are a weighted periodic summation of several identical waveforms (amplitude and phase modulation), as is the case with commercial digital illuminators as well as noncooperative radar. We consider a scintillating target. We provide analytical detection performance guarantees establishing signal-to-noise ratio thresholds above which the proposed detection statistic reliably discriminates, in an asymptotic sense, the signal versus no-signal hypothesis. We validate these results using extensive numerical simulations. We demonstrate the “near constant false alarm rate (CFAR)” behavior of the proposed detector with respect to a fixed, SNR-independent threshold and contrast that with the need to adjust the detection threshold in an SNR-dependent manner to maintain CFAR for other detectors found in the literature. Extensions of the proposed detector for settings applicable to orthogonal frequency division multiplexing (OFDM), adaptive radar are discussed.", "title": "" }, { "docid": "d44a76f19aa8292b156914e821b1361d", "text": "Current concepts in the steps of upper limb development and the way the limb is patterned along its 3 spatial axes are reviewed. 
Finally, the embryogenesis of various congenital hand anomalies is delineated with an emphasis on the pathogenetic basis for each anomaly.", "title": "" }, { "docid": "4790a2dfcdf74d5c9ae5ae8c9f42eb0b", "text": "Inspired by the success of deploying deep learning in the fields of Computer Vision and Natural Language Processing, this learning paradigm has also found its way into the field of Music Information Retrieval. In order to benefit from deep learning in an effective, but also efficient manner, deep transfer learning has become a common approach. In this approach, it is possible to reuse the output of a pre-trained neural network as the basis for a new learning task. The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g., music audio), the generated deep representation of the data is also informative for the new task. Since, however, most of the networks used to generate deep representations are trained using a single initial learning source, their representation is unlikely to be informative for all possible future tasks. In this paper, we present the results of our investigation of what are the most important factors to generate deep representations for the data and learning tasks in the music domain. We conducted this investigation via an extensive empirical study that involves multiple learning sources, as well as multiple deep learning architectures with varying levels of information sharing between sources, in order to learn music representations. We then validate these representations considering multiple target datasets for evaluation. The results of our experiments yield several insights into how to approach the design of methods for learning widely deployable deep data representations in the music domain.", "title": "" }, { "docid": "583e56fcef68f697d19b179766341aba", "text": "We recorded echolocation calls from 14 sympatric species of bat in Britain. Once digitised, one temporal and four spectral features were measured from each call. The frequency-time course of each call was approximated by fitting eight mathematical functions, and the goodness of fit, represented by the mean-squared error, was calculated. Measurements were taken using an automated process that extracted a single call from background noise and measured all variables without intervention. Two species of Rhinolophus were easily identified from call duration and spectral measurements. For the remaining 12 species, discriminant function analysis and multilayer back-propagation perceptrons were used to classify calls to species level. Analyses were carried out with and without the inclusion of curve-fitting data to evaluate its usefulness in distinguishing among species. Discriminant function analysis achieved an overall correct classification rate of 79% with curve-fitting data included, while an artificial neural network achieved 87%. The removal of curve-fitting data improved the performance of the discriminant function analysis by 2 %, while the performance of a perceptron decreased by 2 %. However, an increase in correct identification rates when curve-fitting information was included was not found for all species. The use of a hierarchical classification system, whereby calls were first classified to genus level and then to species level, had little effect on correct classification rates by discriminant function analysis but did improve rates achieved by perceptrons. 
This is the first published study to use artificial neural networks to classify the echolocation calls of bats to species level. Our findings are discussed in terms of recent advances in recording and analysis technologies, and are related to factors causing convergence and divergence of echolocation call design in bats.", "title": "" }, { "docid": "ed44c393c44ee6e63cab1305146a4f9d", "text": "This paper presents a novel method for online and incremental appearance-based localization and mapping in a highly dynamic environment. Using position-invariant robust features (PIRFs), the method can achieve a high rate of recall with 100% precision. It can handle both strong perceptual aliasing and dynamic changes of places efficiently. Its performance also extends beyond conventional images; it is applicable to omnidirectional images for which the major portions of scenes are similar for most places. The proposed PIRF-based Navigation method named PIRF-Nav is evaluated by testing it on two standard datasets as is in FAB-MAP and on an additional omnidirectional image dataset that we collected. This extra dataset is collected on two days with different specific events, i.e., an open-campus event, to present challenges related to illumination variance and strong dynamic changes, and to test assessment of dynamic scene changes. Results show that PIRF-Nav outperforms FAB-MAP; PIRF-Nav at precision-1 yields a recall rate about two times (approximately 80%) higher than that of FAB-MAP. Its computation time is sufficiently short for real-time applications. The method is fully incremental, and requires no offline process for dictionary creation. Additional testing using combined datasets proves that PIRF-Nav can function over a long term and can solve the kidnapped robot problem.", "title": "" }, { "docid": "dbb21f81126dd049a569b26596151409", "text": "A flexible statistical framework is developed for the analysis of read counts from RNA-Seq gene expression studies. It provides the ability to analyse complex experiments involving multiple treatment conditions and blocking variables while still taking full account of biological variation. Biological variation between RNA samples is estimated separately from the technical variation associated with sequencing technologies. Novel empirical Bayes methods allow each gene to have its own specific variability, even when there are relatively few biological replicates from which to estimate such variability. The pipeline is implemented in the edgeR package of the Bioconductor project. A case study analysis of carcinoma data demonstrates the ability of generalized linear model methods (GLMs) to detect differential expression in a paired design, and even to detect tumour-specific expression changes. The case study demonstrates the need to allow for gene-specific variability, rather than assuming a common dispersion across genes or a fixed relationship between abundance and variability. Genewise dispersions de-prioritize genes with inconsistent results and allow the main analysis to focus on changes that are consistent between biological replicates. Parallel computational approaches are developed to make non-linear model fitting faster and more reliable, making the application of GLMs to genomic data more convenient and practical. Simulations demonstrate the ability of adjusted profile likelihood estimators to return accurate estimators of biological variability in complex situations. 
When variation is gene-specific, empirical Bayes estimators provide an advantageous compromise between the extremes of assuming common dispersion or separate genewise dispersion. The methods developed here can also be applied to count data arising from DNA-Seq applications, including ChIP-Seq for epigenetic marks and DNA methylation analyses.", "title": "" } ]
scidocsrr
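The query of the record above concerns a low-rank approximation approach to learning joint embeddings. As a generic, hedged illustration of the underlying linear-algebra building block rather than that paper's method, the sketch below computes a rank-k approximation of a matrix with a truncated SVD; the matrix, the rank, and the function name are arbitrary placeholders.

```python
import numpy as np

def truncated_svd(M: np.ndarray, k: int):
    """Factors for the rank-k approximation M_k = U_k diag(s_k) V_k^T, which is the
    best rank-k approximation in Frobenius norm (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 30))
    U, s, Vt = truncated_svd(M, k=5)
    M5 = U @ np.diag(s) @ Vt
    print("rank-5 relative error:", np.linalg.norm(M - M5) / np.linalg.norm(M))
```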
910bd01b3f25245ce3c314607368b4ae
Fast Cost-Volume Filtering for Visual Correspondence and Beyond
[ { "docid": "0a78c9305d4b5584e87327ba2236d302", "text": "This paper presents GeoS, a new algorithm for the efficient segmentation of n-dimensional image and video data. The segmentation problem is cast as approximate energy minimization in a conditional random field. A new, parallel filtering operator built upon efficient geodesic distance computation is used to propose a set of spatially smooth, contrast-sensitive segmentation hypotheses. An economical search algorithm finds the solution with minimum energy within a sensible and highly restricted subset of all possible labellings. Advantages include: i) computational efficiency with high segmentation accuracy; ii) the ability to estimate an approximation to the posterior over segmentations; iii) the ability to handle generally complex energy models. Comparison with max-flow indicates up to 60 times greater computational efficiency as well as greater memory efficiency. GeoS is validated quantitatively and qualitatively by thorough comparative experiments on existing and novel ground-truth data. Numerous results on interactive and automatic segmentation of photographs, video and volumetric medical image data are presented.", "title": "" } ]
[ { "docid": "3a81f0fc24dd90f6c35c47e60db3daa4", "text": "Advances in information and Web technologies have open numerous opportunities for online retailing. The pervasiveness of the Internet coupled with the keenness in competition among online retailers has led to virtual experiential marketing (VEM). This study examines the relationship of five VEM elements on customer browse and purchase intentions and loyalty, and the moderating effects of shopping orientation and Internet experience on these relationships. A survey was conducted of customers who frequently visited two online game stores to play two popular games in Taiwan. The results suggest that of the five VEM elements, three have positive effects on browse intention, and two on purchase intentions. Both browse and purchase intentions have positive effects on customer loyalty. Economic orientation was found to moderate that relationships between the VEM elements and browse and purchase intentions. However, convenience orientation moderated only the relationships between the VEM elements and browse intention.", "title": "" }, { "docid": "0a4a124589dffca733fa9fa87dc94b35", "text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or far sighted the agent should be. Here we use the near-harmonic γi := 1/i2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible. An obvious choice is the space of all probability measures, however this causes serious problems as we cannot even describe some of these measures in a finite way.", "title": "" }, { "docid": "9b45bb1734e9afc34b14fa4bc47d8fba", "text": "To achieve complex solutions in the rapidly changing world of e-commerce, it is impossible to go it alone. This explains the latest trend in IT outsourcing---global and partner-based alliances. But where do we go from here?", "title": "" }, { "docid": "eb1045f1e85d7197a2952c6580604f75", "text": "There's a large push toward offering solutions and services in the cloud due to its numerous advantages. However, there are no clear guidelines for designing and deploying cloud solutions that can seamlessly operate to handle Web-scale traffic. The authors review industry best practices and identify principles for operating Web-scale cloud solutions by deriving design patterns that enable each principle in cloud solutions. In addition, using a seemingly straightforward cloud service as an example, they explain the application of the identified patterns.", "title": "" }, { "docid": "9bacc1ef43fd8c05dde814a18f59e467", "text": "The processes that affect removal and retention of nitrogen during wastewater treatment in constructed wetlands (CWs) are manifold and include NH(3) volatilization, nitrification, denitrification, nitrogen fixation, plant and microbial uptake, mineralization (ammonification), nitrate reduction to ammonium (nitrate-ammonification), anaerobic ammonia oxidation (ANAMMOX), fragmentation, sorption, desorption, burial, and leaching. However, only few processes ultimately remove total nitrogen from the wastewater while most processes just convert nitrogen to its various forms. 
Removal of total nitrogen in studied types of constructed wetlands varied between 40 and 55% with removed load ranging between 250 and 630 g N m(-2) yr(-1) depending on CWs type and inflow loading. However, the processes responsible for the removal differ in magnitude among systems. Single-stage constructed wetlands cannot achieve high removal of total nitrogen due to their inability to provide both aerobic and anaerobic conditions at the same time. Vertical flow constructed wetlands remove successfully ammonia-N but very limited denitrification takes place in these systems. On the other hand, horizontal-flow constructed wetlands provide good conditions for denitrification but the ability of these system to nitrify ammonia is very limited. Therefore, various types of constructed wetlands may be combined with each other in order to exploit the specific advantages of the individual systems. The soil phosphorus cycle is fundamentally different from the N cycle. There are no valency changes during biotic assimilation of inorganic P or during decomposition of organic P by microorganisms. Phosphorus transformations during wastewater treatment in CWs include adsorption, desorption, precipitation, dissolution, plant and microbial uptake, fragmentation, leaching, mineralization, sedimentation (peat accretion) and burial. The major phosphorus removal processes are sorption, precipitation, plant uptake (with subsequent harvest) and peat/soil accretion. However, the first three processes are saturable and soil accretion occurs only in FWS CWs. Removal of phosphorus in all types of constructed wetlands is low unless special substrates with high sorption capacity are used. Removal of total phosphorus varied between 40 and 60% in all types of constructed wetlands with removed load ranging between 45 and 75 g N m(-2) yr(-1) depending on CWs type and inflow loading. Removal of both nitrogen and phosphorus via harvesting of aboveground biomass of emergent vegetation is low but it could be substantial for lightly loaded systems (cca 100-200 g N m(-2) yr(-1) and 10-20 g P m(-2) yr(-1)). Systems with free-floating plants may achieve higher removal of nitrogen via harvesting due to multiple harvesting schedule.", "title": "" }, { "docid": "15cb7023c175e2c92cd7b392205fb87f", "text": "Feedback has a strong influence on effective learning from computer-based instruction. Prior research on feedback in computer-based instruction has mainly focused on static feedback schedules that employ the same feedback schedule throughout an instructional session. This study examined transitional feedback schedules in computer-based multimedia instruction on procedural problem-solving in electrical circuit analysis. Specifically, we compared two transitional feedback schedules: the TFS-P schedule switched from initial feedback after each problem step to feedback after a complete problem at later learning states; the TFP-S schedule transitioned from feedback after a complete problem to feedback after each problem step. As control conditions, we also considered two static feedback schedules, namely providing feedback after each practice problem-solving step (SFS) or providing feedback after attempting a complete multi-step practice problem (SFP). Results indicate that the static stepwise (SFS) and transitional stepwise to problem (TFS-P) feedback produce higher problem solving near-transfer post-test performance than static problem (SFP) and transitional problem to step (TFP-S) feedback. 
Also, TFS-P resulted in higher ratings of program liking and feedback helpfulness than TFP-S. Overall, the study results indicate benefits of maintaining high feedback frequency (SFS) and reducing feedback frequency (TFS-P) compared to low feedback frequency (SFP) or increasing feedback frequency (TFP-S) as novice learners acquire engineering problem solving skills. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "19cc879d09bb01ae363b532ef9056ae8", "text": "This paper proposes a system that can detect and rephrase profanity in Chinese text. Rather than just masking detected profanity, we want to revise the input sentence by using inoffensive words while keeping their original meanings. 29 such rephrasing rules were invented after observing sentences on real-world social websites. The overall accuracy of the proposed system is 85.56%", "title": "" }, { "docid": "fb4630a6b558ac9b8d8444275e1978e3", "text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.", "title": "" }, { "docid": "e19e7510ce6d5f7517d0366e0771c999", "text": "Human physical activity recognition from sensor data is a growing area of research due to the widespread adoption of sensor-rich wearable and smart devices. The growing interest resulted in several formulations with multiple proposals for each of them. This paper is interested in activity recognition from short sequences of sensor readings. Traditionally, solutions to this problem have relied on handcrafted features and feature selection from large predefined feature sets. More recently, deep methods have been employed to provide an end-to-end classification system for activity recognition with higher accuracy at the expense of much slower performance. This paper proposes a middle ground in which a deep neural architecture is employed for feature learning followed by traditional feature selection and classification. This approach is shown to outperform state-of-the-art systems on six out of seven experiments using publicly available datasets.", "title": "" }, { "docid": "2dc23ce5b1773f12905ebace6ef221a5", "text": "With the increasing demand for higher data rates and more reliable service capabilities for wireless devices, wireless service providers are facing an unprecedented challenge to overcome a global bandwidth shortage. Early global activities on beyond fourth-generation (B4G) and fifth-generation (5G) wireless communication systems suggest that millimeter-wave (mmWave) frequencies are very promising for future wireless communication networks due to the massive amount of raw bandwidth and potential multigigabit-per-second (Gb/s) data rates [1]-[3]. Both industry and academia have begun the exploration of the untapped mmWave frequency spectrum for future broadband mobile communication networks.
In April 2014, the Brooklyn 5G Summit [4], sponsored by Nokia and the New York University (NYU) WIRELESS research center, drew global attention to mmWave communications and channel modeling. In July 2014, the IEEE 802.11 next-generation 60-GHz study group was formed to increase the data rates to over 20 Gb/s in the unlicensed 60-GHz frequency band while maintaining backward compatibility with the emerging IEEE 802.11ad wireless local area network (WLAN) standard [5].", "title": "" }, { "docid": "6235c7e1682b5406c95f91f9259288f8", "text": "Model-driven development is an emerging area in software development that provides a way to express system requirements and architecture at a high level of abstraction through models. It involves using these models as the primary artifacts during the development process. One aspect that is holding back MDD from more wide-spread adoption is the lack of a well established and easy way of performing model to model (M2M) transformations. We propose to explore and compare popular M2M model transformation languages in existence: EMT , Kermeta, and ATL. Each of these languages support transformation of Ecore models within the Eclipse Modeling Framework (EMF). We attempt to implement the same transformation rule on identical meta models in each of these languages to achieve the appropriate transformed model. We provide our observations in using each tool to perform the transformation and comment on each language/tool’s expressive power, ease of use, and modularity. We conclude by noting that ATL is our language / tool of choice because it strikes a balance between ease of use and expressive power and still allows for modularity. We believe this, in conjunction with ATL’s role in the official Eclipse M2M project will lead to widespread use of ATL and, hopefully, a step forward in M2M transformations.", "title": "" }, { "docid": "8217042c3779267570276664dc960612", "text": "We introduce a taxonomy that reflects the theoretical contribution of empirical articles along two dimensions: theory building and theory testing. We used that taxonomy to track trends in the theoretical contributions offered by articles over the past five decades. Results based on data from a sample of 74 issues of the Academy of Management Journal reveal upward trends in theory building and testing over time. In addition, the levels of theory building and testing within articles are significant predictors of citation rates. In particular, articles rated moderate to high on both dimensions enjoyed the highest levels of citations.", "title": "" }, { "docid": "abd714774cc36892e597314f1bf77bc5", "text": "In this paper, we cast natural-image segmentation as a problem of clustering texure features as multivariate mixed data. We model the distribution of the texture features using a mixture of Gaussian distributions. However, unlike most existing clustering methods, we allow the mixture components to be degenerate or nearly-degenerate. We contend that this assumption is particularly important for mid-level image segmentation, where degeneracy is typically introduced by using a common feature representation for different textures. We show that such a mixture distribution can be effectively segmented by a simple agglomerative clustering algorithm derived from a lossy data compression approach. Using simple fixed-size Gaussian windows as texture features, the algorithm segments an image by minimizing the overall coding length of all the feature vectors. 
In terms of a variety of performance indices, our algorithm compares favorably against other well-known image segmentation methods on the Berkeley image database.", "title": "" }, { "docid": "2b314587816255285bf985a086719572", "text": "Tomatoes are well-known vegetables, grown and eaten around the world due to their nutritional benefits. The aim of this research was to determine the chemical composition (dry matter, soluble solids, titritable acidity, vitamin C, lycopene), the taste index and maturity in three cherry tomato varieties (Sakura, Sunstream, Mathew) grown and collected from greenhouse at different stages of ripening. The output of the analyses showed that there were significant differences in the mean values among the analysed parameters according to the stage of ripening and variety. During ripening, the content of soluble solids increases on average two times in all analyzed varieties; the highest content of vitamin C and lycopene was determined in tomatoes of Sunstream variety in red stage. The highest total acidity expressed as g of citric acid 100 g was observed in pink stage (variety Sakura) or a breaker stage (varieties Sunstream and Mathew). The taste index of the variety Sakura was higher at all analyzed ripening stages in comparison with other varieties. This shows that ripening stages have a significant effect on tomato biochemical composition along with their variety.", "title": "" }, { "docid": "c62acdb764816d43daa8a4c3e59815e9", "text": "Despite substantial recent progress, our understanding of the principles and mechanisms underlying complex brain function and cognition remains incomplete. Network neuroscience proposes to tackle these enduring challenges. Approaching brain structure and function from an explicitly integrative perspective, network neuroscience pursues new ways to map, record, analyze and model the elements and interactions of neurobiological systems. Two parallel trends drive the approach: the availability of new empirical tools to create comprehensive maps and record dynamic patterns among molecules, neurons, brain areas and social systems; and the theoretical framework and computational tools of modern network science. The convergence of empirical and computational advances opens new frontiers of scientific inquiry, including network dynamics, manipulation and control of brain networks, and integration of network processes across spatiotemporal domains. We review emerging trends in network neuroscience and attempt to chart a path toward a better understanding of the brain as a multiscale networked system.", "title": "" }, { "docid": "e2f69fd023cfe69432459e8a82d4c79a", "text": "Thresholding is one of the popular and fundamental techniques for conducting image segmentation. Many thresholding techniques have been proposed in the literature. Among them, the minimum cross entropy thresholding (MCET) have been widely adopted. Although the MCET method is effective in the bilevel thresholding case, it could be very time-consuming in the multilevel thresholding scenario for more complex image analysis. This paper first presents a recursive programming technique which reduces an order of magnitude for computing the MCET objective function. Then, a particle swarm optimization (PSO) algorithm is proposed for searching the near-optimal MCET thresholds. The experimental results manifest that the proposed PSO-based algorithm can derive multiple MCET thresholds which are very close to the optimal ones examined by the exhaustive search method. 
The convergence of the proposed method is analyzed mathematically and the results validate that the proposed method is efficient and is suited for real-time applications. © 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "6d7f3bc1179083492b968009631b8839", "text": "FranTk is a new high level library for programming Graphical User Interfaces (GUIs) in Haskell. It is based on Fran (Functional Reactive Animation), and uses the notions of Behaviors and Events to structure code. Behaviors are time-varying, reactive values. They can be used to represent the state of an application. Events are streams of values that occur at discrete points in time. They can be used, for instance, to represent user input. FranTk allows user interfaces to be structured in a more declarative manner than has been possible with previous functional GUI libraries. We demonstrate, through a series of examples, how this is achieved, and why it is important. These examples are elements of a prototype Air Traffic Control simulator. FranTk uses a binding to the popular Tcl/Tk toolkit to provide a powerful set of platform-independent widgets. It has been released as a Haskell library that runs under Hugs and GHC.", "title": "" }, { "docid": "2444b0ae9920e55cf0e3e329b048a2e8", "text": "Concurrent Clean is an experimental, lazy, higher-order parallel functional programming language based on term graph rewriting. An important difference with other languages is that in Clean graphs are manipulated and not terms. This can be used by the programmer to control communication and sharing of computation. Cyclic structures can be defined. Concurrent Clean furthermore allows the programmer to control the (parallel) order of evaluation to make efficient evaluation possible. With the help of sequential annotations the default lazy evaluation can be locally changed into eager evaluation. The language enables the definition of partially strict data structures which make a whole new class of algorithms feasible in a functional language. A powerful and fast strictness analyser is incorporated in the system. The quality of the code generated by the Clean compiler has been greatly improved such that it is one of the best code generators for a lazy functional language. Two very powerful parallel annotations enable the programmer to define concurrent functional programs with arbitrary process topologies. Concurrent Clean is set up in such a way that the efficiency achieved for the sequential case can largely be maintained for a parallel implementation on loosely coupled parallel machine architectures.", "title": "" }, { "docid": "bbb8eb67a626b32398617cb5832d5dae", "text": "A firewall has many shortcomings: it cannot keep out attacks originating inside the network, it cannot provide a consistent security strategy, and it constitutes a single bottleneck and a single point of failure. An Intrusion Detection System (IDS) also has many defects, such as low detection ability, the lack of an effective response mechanism, and poor manageability. If the firewall and the IDS are integrated, their cooperation can implement network security to a great extent:
on the one hand, IDS monitors the network, provides a realtime detection of attacks from the interior and exterior, and automatically informs firewall and dynamically alters the rules of firewall once an attack is found; on the other hand, firewall loads dynamic rules to hold up the intrusion, controls the data traffic of IDS and provides the security protection of IDS Keywords— Protocol, Detection, Generation, Prevention,", "title": "" }, { "docid": "fce4b1fcd876094bcec9c6a9659ff5d5", "text": "Organelle biogenesis is concomitant to organelle inheritance during cell division. It is necessary that organelles double their size and divide to give rise to two identical daughter cells. Mitochondrial biogenesis occurs by growth and division of pre-existing organelles and is temporally coordinated with cell cycle events [1]. However, mitochondrial biogenesis is not only produced in association with cell division. It can be produced in response to an oxidative stimulus, to an increase in the energy requirements of the cells, to exercise training, to electrical stimulation, to hormones, during development, in certain mitochondrial diseases, etc. [2]. Mitochondrial biogenesis is therefore defined as the process via which cells increase their individual mitochondrial mass [3]. Recent discoveries have raised attention to mitochondrial biogenesis as a potential target to treat diseases which up to date do not have an efficient cure. Mitochondria, as the major ROS producer and the major antioxidant producer exert a crucial role within the cell mediating processes such as apoptosis, detoxification, Ca2+ buffering, etc. This pivotal role makes mitochondria a potential target to treat a great variety of diseases. Mitochondrial biogenesis can be pharmacologically manipulated. This issue tries to cover a number of approaches to treat several diseases through triggering mitochondrial biogenesis. It contains recent discoveries in this novel field, focusing on advanced mitochondrial therapies to chronic and degenerative diseases, mitochondrial diseases, lifespan extension, mitohormesis, intracellular signaling, new pharmacological targets and natural therapies. It contributes to the field by covering and gathering the scarcely reported pharmacological approaches in the novel and promising field of mitochondrial biogenesis. There are several diseases that have a mitochondrial origin such as chronic progressive external ophthalmoplegia (CPEO) and the Kearns- Sayre syndrome (KSS), myoclonic epilepsy with ragged-red fibers (MERRF), mitochondrial encephalomyopathy, lactic acidosis and strokelike episodes (MELAS), Leber's hereditary optic neuropathy (LHON), the syndrome of neurogenic muscle weakness, ataxia and retinitis pigmentosa (NARP), and Leigh's syndrome. Likewise, other diseases in which mitochondrial dysfunction plays a very important role include neurodegenerative diseases, diabetes or cancer. Generally, in mitochondrial diseases a mutation in the mitochondrial DNA leads to a loss of functionality of the OXPHOS system and thus to a depletion of ATP and overproduction of ROS, which can, in turn, induce further mtDNA mutations. The work by Yu-Ting Wu, Shi-Bei Wu, and Yau-Huei Wei (Department of Biochemistry and Molecular Biology, National Yang-Ming University, Taiwan) [4] focuses on the aforementioned mitochondrial diseases with special attention to the compensatory mechanisms that prompt mitochondria to produce more energy even under mitochondrial defect-conditions. 
These compensatory mechanisms include the overexpression of antioxidant enzymes, mitochondrial biogenesis and overexpression of respiratory complex subunits, as well as metabolic shift to glycolysis. The pathways observed to be related to mitochondrial biogenesis as a compensatory adaptation to the energetic deficits in mitochondrial diseases are described (PGC- 1, Sirtuins, AMPK). Several pharmacological strategies to trigger these signaling cascades, according to these authors, are the use of bezafibrate to activate the PPAR-PGC-1α axis, the activation of AMPK by resveratrol and the use of Sirt1 agonists such as quercetin or resveratrol. Other strategies currently used include the addition of antioxidant supplements to the diet (dietary supplementation with antioxidants) such as L-carnitine, coenzyme Q10,MitoQ10 and other mitochondria-targeted antioxidants,N-acetylcysteine (NAC), vitamin C, vitamin E vitamin K1, vitamin B, sodium pyruvate or -lipoic acid. As aforementioned, other diseases do not have exclusively a mitochondrial origin but they might have an important mitochondrial component both on their onset and on their development. This is the case of type 2 diabetes or neurodegenerative diseases. Type 2 diabetes is characterized by a peripheral insulin resistance accompanied by an increased secretion of insulin as a compensatory system. Among the explanations about the origin of insulin resistance Mónica Zamora and Josep A. Villena (Department of Experimental and Health Sciences, Universitat Pompeu Fabra / Laboratory of Metabolism and Obesity, Universitat Autònoma de Barcelona, Spain) [5] consider the hypothesis that mitochondrial dysfunction, e.g. impaired (mitochondrial) oxidative capacity of the cell or tissue, is one of the main underlying causes of insulin resistance and type 2 diabetes. Although this hypothesis is not free of controversy due to the uncertainty on the sequence of events during type 2 diabetes onset, e.g. whether mitochondrial dysfunction is the cause or the consequence of insulin resistance, it has been widely observed that improving mitochondrial function also improves insulin sensitivity and prevents type 2 diabetes. Thus restoring oxidative capacity by increasing mitochondrial mass appears as a suitable strategy to treat insulin resistance. The effort made by researchers trying to understand the signaling pathways mediating mitochondrial biogenesis has uncovered new potential pharmacological targets and opens the perspectives for the design of suitable treatments for insulin resistance. In addition some of the current used strategies could be used to treat insulin resistance such as lifestyle interventions (caloric restriction and endurance exercise) and pharmacological interventions (thiazolidinediones and other PPAR agonists, resveratrol and other calorie restriction mimetics, AMPK activators, ERR activators). Mitochondrial biogenesis is of special importance in modern neurochemistry because of the broad spectrum of human diseases arising from defects in mitochondrial ion and ROS homeostasis, energy production and morphology [1]. Parkinson´s Disease (PD) is a very good example of this important mitochondrial component on neurodegenerative diseases. Anuradha Yadav, Swati Agrawal, Shashi Kant Tiwari, and Rajnish K. 
Chaturvedi (CSIR-Indian Institute of Toxicology Research / Academy of Scientific and Innovative Research, India) [6] remark in their review the role of mitochondrial dysfunction in PD with special focus on the role of oxidative stress and bioenergetic deficits. These alterations may have their origin on pathogenic gene mutations in important genes such as DJ-1, -syn, parkin, PINK1 or LRRK2. These mutations, in turn, may cause defects in mitochondrial dynamics (key events like fission/fusion, biogenesis, trafficking in retrograde and anterograde directions, and mitophagy). This work reviews different strategies to enhance mitochondrial bioenergetics in order to ameliorate the neurodegenerative process, with an emphasis on clinical trials reports that indicate their potential. Among them creatine, Coenzyme Q10 and mitochondrial targeted antioxidants/peptides are reported to have the most remarkable effects in clinical trials. They highlight a dual effect of PGC-1α expression on PD prognosis. Whereas a modest expression of this transcriptional co-activator results in positive effects, a moderate to substantial overexpession may have deleterious consequences. As strategies to induce PGC-1α activation, these authors remark the possibility to activate Sirt1 with resveratrol, to use PPAR agonists such as pioglitazone, rosiglitazone, fenofibrate and bezafibrate. Other strategies include the triggering of Nrf2/antioxidant response element (ARE) pathway by triterpenoids (derivatives of oleanolic acid) or by Bacopa monniera, the enhancement of ATP production by carnitine and -lipoic acid. Mitochondrial dysfunctions are the prime source of neurodegenerative diseases and neurodevelopmental disorders. In the context of neural differentiation, Martine Uittenbogaard and Anne Chiaramello (Department of Anatomy and Regenerative Biology, George Washington University School of Medicine and Health Sciences, USA) [7] thoroughly describe the implication of mitochondrial biogenesis on neuronal differentiation, its timing, its regulation by specific signaling pathways and new potential therapeutic strategies. The maintenance of mitochondrial homeostasis is crucial for neuronal development. A mitochondrial dynamic balance is necessary between mitochondrial fusion, fission and quality control systems and mitochondrial biogenesis. Concerning the signaling pathways leading to mitochondrial biogenesis this review highlights the implication of different regulators such as AMPK, SIRT1, PGC-1α, NRF1, NRF2, Tfam, etc. on the specific case of neuronal development, providing examples of diseases in which these pathways are altered and transgenic mouse models lacking these regulators. A common hallmark of several neurodegenerative diseases (Huntington´s Disease, Alzheimer´s Disease and Parkinson´s Disease) is the impaired function or expression of PGC-1α, the master regulator of mitochondrial biogenesis. Among the promising strategies to ameliorate mitochondrial-based diseases these authors highlight the induction of PGC-1α via activation of PPAR receptors (rosiglitazone, bezafibrate) or modulating its activity by AMPK (AICAR, metformin, resveratrol) or SIRT1 (SRT1720 and several isoflavone-derived compounds). This article also presents a review of the current animal and cellular models useful to study mitochondriogenesis. Although it is known that many neurodegenerative and neurodevelopmental diseases are originated in mitochondria, the regulation of mitochondrial biogenesis has never been extensively studied. 
(ABSTRACT TRUNCATED)", "title": "" } ]
scidocsrr
073804c9795eea8b41df2075ffe9bb1c
GOSELO: Goal-Directed Obstacle and Self-Location Map for Robot Navigation Using Reactive Neural Networks
[ { "docid": "75a67ebf9c8aee616f8e0536cb9c92f3", "text": "We present a new public dataset with a focus on simulating robotic vision tasks in everyday indoor environments using real imagery. The dataset includes 20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely captured in 9 unique scenes. We train a fast object category detector for instance detection on our data. Using the dataset we show that, although increasingly accurate and fast, the state of the art for object detection is still severely impacted by object scale, occlusion, and viewing direction all of which matter for robotics applications. We next validate the dataset for simulating active vision, and use the dataset to develop and evaluate a deep-network-based system for next best move prediction for object classification using reinforcement learning. Our dataset is available for download at cs.unc.edu/∼ammirato/active_vision_dataset_website/.", "title": "" } ]
[ { "docid": "d42ed4f231d51cacaf1f42de1c723c31", "text": "A stepped circular waveguide dual-mode (SCWDM) filter is fully investigated in this paper, from its basic characteristic to design formula. As compared to a conventional circular waveguide dual-mode (CWDM) filter, it provides more freedoms for shifting and suppressing the spurious modes in a wide frequency band. This useful attribute can be used for a broadband waveguide contiguous output multiplexer (OMUX) in satellite payloads. The scaling factor for relating coupling value M to its corresponding impedance inverter K in a stepped cavity is derived for full-wave EM design. To validate the design technique, four design examples are presented. One challenging example is a wideband 17-channel Ku-band contiguous multiplexer with two SCWDM channel filters. A triplexer hardware covering the same included bandwidth is also designed and measured. The measurement results show excellent agreement with those of the theoretical EM designs, justifying the effectiveness of full-wave EM modal analysis. Comparing to the best possible design of conventional CWDM filters, at least 30% more spurious-free range in both Ku-band and C-band can be achieved by using SCWDM filters.", "title": "" }, { "docid": "0ab5eb54ba6a58ebb297a8dc49ea513c", "text": "This paper presents a technique to build a lexical resource used for annotation of parallel corpora where the t ags can be seen as multilingual ‘synsets’. The approach can be extended to add relationships between these synsets that ar e akin to WordNet relationships of synonymy and hypernymy. The paper also discusses how the success of this approach can be measured. The reported results are for English, German, Fre nch, and Greek using the Europarl parallel corpus.", "title": "" }, { "docid": "a63aee3bb6f93567e68535e6ee94cd79", "text": "While users trust the selections of their social friends in recommendation systems, the preferences of friends do not necessarily match. In this study, we introduce a deep learning approach to learn both about user preferences and the social influence of friends when generating recommendations. In our model we design a deep learning architecture by stacking multiple marginalized Denoising Autoencoders. We define a joint objective function to enforce the latent representation of social relationships in the Autoencoder's hidden layer to be as close as possible to the users' latent representation when factorizing the user-item matrix. We formulate a joint objective function as a minimization problem to learn both user preferences and friends' social influence and we present an optimization algorithm to solve the joint minimization problem. Our experiments on four benchmark datasets show that the proposed approach achieves high recommendation accuracy, compared to other state-of-the-art methods.", "title": "" }, { "docid": "6da917550ccf45604e20d897bd74b1ab", "text": "OBJECTIVE\nTemporomandibular disorders (TMD) is a term reflecting chronic, painful, craniofacial conditions usually of unclear etiology with impaired jaw function. The effect of osteopathic manual therapy (OMT) in patients with TMD is largely unknown, and its use in such patients is controversial. Nevertheless, empiric evidence suggests that OMT might be effective in alleviating symptoms. A randomized controlled clinical trial of efficacy was performed to test this hypothesis.\n\n\nMETHODS\nWe performed a randomized, controlled trial that involved adult patients who had TMD. 
Patients were randomly divided into two groups: an OMT group (25 patients, 12 males and 13 females, age 40.6+/-11.03) and a conventional conservative therapy (CCT) group (25 patients, 10 males and 15 females, age 38.4+/-15.33). At the first visit (T0), at the end of treatment (after six months, T1) and two months after the end of treatment (T2), all patients were subjected to clinical evaluation. Assessments were performed by subjective pain intensity (visual analogue pain scale, VAS), clinical evaluation (Temporomandibular index) and measurements of the range of maximal mouth opening and lateral movement of the head around its axis.\n\n\nRESULTS\nPatients in both groups improved during the six months. The OMT group required significantly less medication (non-steroidal medication and muscle relaxants) (P<0.001).\n\n\nCONCLUSIONS\nThe two therapeutic modalities had similar clinical results in patients with TMD, even if the use of medication was greater in CCT group. Our findings suggest that OMT is a valid option for the treatment of TMD.", "title": "" }, { "docid": "7a13897b16e1b08eb8d38cfd8cea8d57", "text": "Vedran Dunjko, 2, ∗ Jacob M. Taylor, 4, † and Hans J. Briegel ‡ Institut für Theoretische Physik, Universität Innsbruck, Technikerstraße 25, A-6020 Innsbruck, Austria Division of Molecular Biology, Rud̄er Bošković Institute, Bijenička cesta 54, 10002 Zagreb, Croatia. Joint Quantum Institute, National Institute of Standards and Technology, Gaithersburg, MD 20899 USA Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD 20742 USA (Dated: November 6, 2018)", "title": "" }, { "docid": "e86e4a07d1daa8a113d855fca2781815", "text": "In this paper, we propose a bidimensional attention based recursive autoencoder (BattRAE) to integrate cues and source-target interactions at multiple levels of granularity into bilingual phrase representations. We employ recursive autoencoders to generate tree structures of phrase with embeddings at different levels of granularity (e.g., words, sub-phrases, phrases). Over these embeddings on the source and target side, we introduce a bidimensional attention network to learn their interactions encoded in a bidimensional attention matrix, from which we extract two soft attention weight distributions simultaneously. The weight distributions enable BattRAE to generate compositive phrase representations via convolution. Based on the learned phrase representations, we further use a bilinear neural model, trained via a max-margin method, to measure bilingual semantic similarity. In order to evaluate the effectiveness of BattRAE, we incorporate this semantic similarity as an additional feature into a state-of-the-art SMT system. Extensive experiments on NIST Chinese-English test sets show that our model achieves a substantial improvement of up to 1.82 BLEU points over the baseline.", "title": "" }, { "docid": "67d126ce0e2060c5a94f9171c972fffc", "text": "We study the relationship between social media output and National Football League (NFL) games, using a dataset containing messages from Twitter and NFL game statistics. Specifically, we consider tweets pertaining to specific teams and games in the NFL season and use them alongside statistical game data to build predictive models for future game outcomes (which team will win?) and sports betting outcomes (which team will win with the point spread? will the total points be over/under the line?). 
We experiment with several feature sets and find that simple features using large volumes of tweets can match or exceed the performance of more traditional features that use game statistics.", "title": "" }, { "docid": "423228556cb473e0fab48a2dc57cbf6f", "text": "This paper focus on the dynamic modeling and the LQR and PID controllers for the self balancing unicycle robot. The mechanism of the unicycle robot is designed. The pitching and rolling balance could be achieved by the driving of the motor on the wheel and the balance weight on the body of robot. The dynamic equations of the robot are presented based on the Routh equation. On this basis, the LQR and PID controllers of the unicycle robot are proposed. The experimentations of balance control are showed through the Simulink toolbox of Matlab. The simulation results show that the robot could achieve self balancing after a short period of time by the designed controllers. According to comparing the results, the errors of PID controller are relatively smaller than LQR. The response speed of LQR controller is faster than PID. At last a kind of LQR&PID controller is proposed. This controller has the advantages of both LQR and PID controllers.", "title": "" }, { "docid": "af28e57d508511ce4f494eb45da0e525", "text": "Posthumanism entails the idea of transcendence of the human being achieved through technology. The article begins by distinguishing perfection and change (or growth). It also attempts to show the anthropological premises of posthumanism itself and suggests that we can identify two roots: the liberal humanistic subject (autonomous and unrelated that simply realizes herself/himself through her/his own project) and the interpretation of thought as a computable process. Starting from these premises, many authors call for the loosening of the clear boundaries of one’s own subject in favour of blending with other beings. According to these theories, we should become post-human: if the human being is thought and thought is a computable process, whatever is able to process information broader and faster is better than the actual human being and has to be considered as the way towards the real completeness of the human being itself. The paper endeavours to discuss the adequacy of these premises highlighting the structural dependency of the human being, the role of the human body, the difference between thought and a computational process, the singularity of some useless and unexpected human acts. It also puts forward the need for axiological criteria to define growth as perfectionism.", "title": "" }, { "docid": "5f63aa64d24dcb011db3dc2604af5e73", "text": "Communication aimed at promoting civic engagement may become problematic when citizen roles undergo historic changes. In the current era, younger generations are embracing more expressive styles of actualizing citizenship defined around peer content sharing and social media, in contrast to earlier models of dutiful citizenship based on one-way communication managed by authorities. 
An analysis of 90 youth Web sites operated by diverse civic and political organizations in the United States reveals uneven conceptions of citizenship and related civic skills, suggesting that many established organization are out of step with changing civic styles.", "title": "" }, { "docid": "6ef6cbb60da56bfd53ae945480908d3c", "text": "OBJECTIVE\nIn multidisciplinary prenatal diagnosis centers, the search for a tetrasomy 12p mosaic is requested following the discovery of a diaphragmatic hernia in the antenatal period. Thus, the series of Pallister Killian syndromes (PKS: OMIM 601803) probably overestimate the prevalence of diaphragmatic hernia in this syndrome to the detriment of other morphological abnormalities.\n\n\nMETHODS\nA multicenter retrospective study was conducted with search for assistance from members of the French society for Fetal Pathology. For each identified case, we collected all antenatal and postnatal data. Antenatal data were compared with data from the clinicopathological examination to assess the adequacy of sonographic signs of PKS. A review of the literature on antenatal morphological anomalies in case of PKS completed the study.\n\n\nRESULTS\nTen cases were referred to us: 7 had cytogenetic confirmation and 6 had ultrasound screening. In the prenatal as well as post mortem period, the most common sign is facial dysmorphism (5 cases/6). A malformation of limbs is reported in half of the cases (3 out of 6). Ultrasound examination detected craniofacial dysmorphism in 5 cases out of 6. We found 1 case of left diaphragmatic hernia. Our results are in agreement with the malformation spectrum described in the literature.\n\n\nCONCLUSION\nSome malformation associations could evoke a SPK without classical diaphragmatic hernia.", "title": "" }, { "docid": "f8fbe385a98a3e614a0556b5b23dbdf3", "text": "The recent developments by considering a rather unexpected application of the theory of Independent component analysis (ICA) found in outlier detection , data clustering and multivariate data visualization etc . Accurate identification of outliers plays an important role in statistical analysis. If classical statistical models are blindly applied to data containing outliers, the results can be misleading at best. In addition, outliers themselves are often the special points of interest in many practical situations and their identification is the main purpose of the investigation. This paper takes an attempt a new and novel method for multivariate outlier detection using ICA and compares with different outlier detect ion techniques in the literature.", "title": "" }, { "docid": "eeef9681cc03cf520141a722d698fae7", "text": "In recent years researchers have achieved considerable success applying neural network methods to question answering (QA). These approaches have achieved state of the art results in simplified closeddomain settings1 such as the SQuAD (Rajpurkar et al., 2016) dataset, which provides a pre-selected passage, from which the answer to a given question may be extracted. More recently, researchers have begun to tackle open-domain QA, in which the model is given a question and access to a large corpus (e.g., wikipedia) instead of a pre-selected passage (Chen et al., 2017a). This setting is more complex as it requires large-scale search for relevant passages by an information retrieval component, combined with a reading comprehension model that “reads” the passages to generate an answer to the question. 
Performance in this setting lags considerably behind closed-domain performance. In this paper, we present a novel opendomain QA system called Reinforced Ranker-Reader (R3), based on two algorithmic innovations. First, we propose a new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of likelihood of generating the ground-truth answer to a given question. Second, we propose a novel method that jointly trains the Ranker ∗This work has been done during the 1st author’s internship with IBM. In the QA community, “openness” can be interpreted as referring either to the scope of question topics or to the breadth and generality of the knowledge source used to answer each question. Following Chen et al. 2017a we adopt the latter definition. along with an answer-generation Reader model, based on reinforcement learning. We report extensive experimental results showing that our method significantly improves on the state of the art for multiple open-domain QA datasets.", "title": "" }, { "docid": "48c4b2a708f2607a8d66b642e917433d", "text": "In this paper we present an approach to control a real car with brain signals. To achieve this, we use a brain computer interface (BCI) which is connected to our autonomous car. The car is equipped with a variety of sensors and can be controlled by a computer. We implemented two scenarios to test the usability of the BCI for controlling our car. In the first scenario our car is completely brain controlled, using four different brain patterns for steering and throttle/brake. We will describe the control interface which is necessary for a smooth, brain controlled driving. In a second scenario, decisions for path selection at intersections and forkings are made using the BCI. Between these points, the remaining autonomous functions (e.g. path following and obstacle avoidance) are still active. We evaluated our approach in a variety of experiments on a closed airfield and will present results on accuracy, reaction times and usability.", "title": "" }, { "docid": "ba3bf5f03e44e29a657d8035bb00535c", "text": "Due to the broadcast nature of WiFi communication anyone with suitable hardware is able to monitor surrounding traffic. However, a WiFi device is able to listen to only one channel at any given time. The simple solution for capturing traffic across multiple channels involves channel hopping, which as a side effect reduces dwell time per channel. Hence monitoring with channel hopping does not produce a comprehensive view of the traffic across all channels at a given time.\n In this paper we present an inexpensive multi-channel WiFi capturing system (dubbed the wireless shark\") and evaluate its performance in terms of traffic cap- turing efficiency. Our results confirm and quantify the intuition that the performance is directly related to the number of WiFi adapters being used for listening. As a second contribution of the paper we use the wireless shark to observe the behavior of 14 different mobile devices, both in controlled and normal office environments. In our measurements, we focus on the probe traffic that the devices send when they attempt to discover available WiFi networks. Our results expose some distinct characteristics in various mobile devices' probing behavior.", "title": "" }, { "docid": "be9c88e6916e1c5af04e8ae1b6dc5748", "text": "In neural networks, the learning rate of the gradient descent strongly affects performance. This prevents reliable out-of-the-box training of a model on a new problem. 
We propose the All Learning Rates At Once (Alrao) algorithm: each unit or feature in the network gets its own learning rate sampled from a random distribution spanning several orders of magnitude, in the hope that enough units will get a close-to-optimal learning rate. Perhaps surprisingly, stochastic gradient descent (SGD) with Alrao performs close to SGD with an optimally tuned learning rate, for various network architectures and problems. In our experiments, all Alrao runs were able to learn well without any tuning.", "title": "" }, { "docid": "77a0234ae555075aebd10b0d9926484f", "text": "The antibacterial effect of visible light irradiation combined with photosensitizers has been reported. The objective of this was to test the effect of visible light irradiation without photosensitizers on the viability of oral microorganisms. Strains of Porphyromonas gingivalis, Fusobacterium nucleatum, Streptococcus mutans and Streptococcus faecalis in suspension or grown on agar were exposed to visible light at wavelengths of 400-500 nm. These wavelengths are used to photopolymerize composite resins widely used for dental restoration. Three photocuring light sources, quartz-tungsten-halogen lamp, light-emitting diode and plasma-arc, at power densities between 260 and 1300 mW/cm2 were used for up to 3 min. Bacterial samples were also exposed to a near-infrared diode laser (wavelength, 830 nm), using identical irradiation parameters for comparison. The results show that blue light sources exert a phototoxic effect on P. gingivalis and F. nucleatum. The minimal inhibitory dose for P. gingivalis and F. nucleatum was 16-62 J/cm2, a value significantly lower than that for S. mutans and S. faecalis (159-212 J/cm2). Near-infrared diode laser irradiation did not affect any of the bacteria tested. Our results suggest that visible light sources without exogenous photosensitizers have a phototoxic effect mainly on Gram-negative periodontal pathogens.", "title": "" }, { "docid": "f53885bda1368b5d7b9d14848d3002d2", "text": "This paper presents a method for a reconfigurable magnetic resonance-coupled wireless power transfer (R-MRC-WPT) system in order to achieve higher transmission efficiency under various transmission distance and/or misalignment conditions. Higher efficiency, longer transmission distance, and larger misalignment tolerance can be achieved with the presented R-MRC-WPT system when compared to the conventional four-coil MRC-WPT (C-MRC-WPT) system. The reconfigurability in the R-MRC-WPT system is achieved by adaptively switching between different sizes of drive loops and load loops. All drive loops are in the same plane and all load loops are also in the same plane; this method does not require mechanical movements of the drive loop and load loop and does not result in the system volume increase. Theoretical basis of the method for the R-MRC-WPT system is derived based on a circuit model and an analytical model. Results from a proof-of-concept experimental prototype, with transmitter and receiver coil diameter of 60 cm each, show that the transmission efficiency of the R-MRC-WPT system is higher than the transmission efficiency of the C-MRC-WPT system and the capacitor tuning system for all distances up to 200 cm (~3.3 times the coil diameter) and for all lateral misalignment values within 60 cm (one coil diameter).", "title": "" }, { "docid": "e5bad6942b0afa06f3a87e3c9347bf13", "text": "We present a monocular 3D reconstruction algorithm for inextensible deformable surfaces. 
It uses point correspondences between a single image of the deformed surface taken by a camera with known intrinsic parameters and a template. The main assumption we make is that the surface shape as seen in the template is known. Since the surface is inextensible, its deformations are isometric to the template. We exploit the distance preservation constraints to recover the 3D surface shape as seen in the image. Though the distance preservation constraints have already been investigated in the literature, we propose a new way to handle them. Spatial smoothness priors are easily incorporated, as well as temporal smoothness priors in the case of reconstruction from a video. The reconstruction can be used for 3D augmented reality purposes thanks to a fast implementation. We report results on synthetic and real data. Some of them are compared to stereo-based 3D reconstructions to demonstrate the efficiency of our method.", "title": "" }, { "docid": "115fb4dcd7d5a1240691e430cd107dce", "text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.", "title": "" } ]
scidocsrr
495db6941b1511566cbddc79590bd5a4
Information extraction on novel text using machine learning and rule-based system
[ { "docid": "289e17669dac7b06a0f5d919fd01bd58", "text": "In this paper, we present a rule-based relation extraction approach which uses DBpedia and linguistic information provided by the syntactic parser Fips. Our goal is twofold: (i) the morpho-syntactic patterns are defined using the syntactic parser Fips to identify relations between named entities (ii) the RDF triples extracted from DBpedia are used to improve RE task by creating gazetteer relations. NEBHI, Kamel. A Rule-Based Relation Extraction System using DBpedia and Syntactic Parsing. In: Proceedings of the NLP-DBPEDIA-2013 Workshop co-located with the 12th International Semantic Web Conference (ISWC 2013). 2013.", "title": "" }, { "docid": "3b2ddbef9ee3e5db60e2b315064a02c3", "text": "It is indispensable to understand and analyze industry structure and company relations from documents, such as news articles, in order to make management decisions concerning supply chains, selection of business partners, etc. Analysis of company relations from news articles requires both a macro-viewpoint, e.g., overviewing competitor groups, and a micro-viewpoint, e.g., grasping the descriptions of the relationship between a specific pair of companies collaborating. Research has typically focused on only the macro-viewpoint, classifying each company pair into a specific relation type. In this paper, to support company relation analysis from both macro-and micro-viewpoints, we propose a method that extracts collaborative/competitive company pairs from individual sentences in Web news articles by applying a Markov logic network and gather extracted relations from each company pair. By this method, we are able not only to perform clustering of company pairs into competitor groups based on the dominant relations of each pair (macro-viewpoint) but also to know how each company pair is described in individual sentences (micro-viewpoint). We empirically confirmed that the proposed method is feasible through analysis of 4,661 Web news articles on the semiconductor and related industries.", "title": "" }, { "docid": "89aa60cefe11758e539f45c5cba6f48a", "text": "For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corporations. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material. Supplements: Click on the \"Resources\" tab to View Downloadable Files:Solutions Power Point Lecture Slides Chapters 1-5, 8-10, 12-13 and 24 Now Available! For additional resourcse visit the author website: http://www.cs.colorado.edu/~martin/slp.html", "title": "" } ]
[ { "docid": "60e94f9a6731e1a148e05aa0f9a31683", "text": "Bright light therapy for seasonal affective disorder (SAD) has been investigated and applied for over 20 years. Physicians and clinicians are increasingly confident that bright light therapy is a potent, specifically active, nonpharmaceutical treatment modality. Indeed, the domain of light treatment is moving beyond SAD, to nonseasonal depression (unipolar and bipolar), seasonal flare-ups of bulimia nervosa, circadian sleep phase disorders, and more. Light therapy is simple to deliver to outpatients and inpatients alike, although the optimum dosing of light and treatment time of day requires individual adjustment. The side-effect profile is favorable in comparison with medications, although the clinician must remain vigilant about emergent hypomania and autonomic hyperactivation, especially during the first few days of treatment. Importantly, light therapy provides a compatible adjunct to antidepressant medication, which can result in accelerated improvement and fewer residual symptoms.", "title": "" }, { "docid": "f6e8eda4fa898a24f3a7d1116e49f42c", "text": "This is the eBook of the printed book and may not include any media, website access codes, or print supplements that may come packaged with the bound book. Search Engines: Information Retrieval in Practice is ideal for introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. It is also a valuable tool for search engine and information retrieval professionals. В Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice , is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines.В Coverage of the underlying IR and mathematical models reinforce key concepts. The bookвЂTMs numerous programming exercises make extensive use of Galago, a Java-based open source search engine.", "title": "" }, { "docid": "d49ea26480f4170ec3684ddbf3272306", "text": "Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce “entropy-based” features—approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform, for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. 
For accelerometer data, our method performs better for suturing achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.", "title": "" }, { "docid": "78e3d9bbfc9fdd9c3454c34f09e5abd4", "text": "This paper presents the first ever reported implementation of the Gapped Basic Local Alignment Search Tool (Gapped BLAST) for biological sequence alignment, with the Two-Hit method, on CUDA (compute unified device architecture)-compatible Graphic Processing Units (GPUs). The latter have recently emerged as relatively low cost and easy to program high performance platforms for general purpose computing. Our Gapped BLAST implementation on an NVIDIA Geforce 8800 GTX GPU is up to 2.7x quicker than the most optimized CPU-based implementation, namely NCBI BLAST, running on a Pentium4 3.4 GHz desktop computer with 2GB RAM.", "title": "" }, { "docid": "7eb9e3aac9d25e3ae0628ffe0beea533", "text": "Many believe that an essential component for the discovery of the tremendous diversity in natural organisms was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspring. One hypothesized mechanism for evolvability is developmental canalization, wherein certain dimensions of variation become more likely to be traversed and others are prevented from being explored (e.g., offspring tend to have similar-size legs, and mutations affect the length of both legs, not each leg individually). While ubiquitous in nature, canalization is rarely reported in computational simulations of evolution, which deprives us of in silico examples of canalization to study and raises the question of which conditions give rise to this form of evolvability. Answering this question would shed light on why such evolvability emerged naturally, and it could accelerate engineering efforts to harness evolution to solve important engineering challenges. In this article, we reveal a unique system in which canalization did emerge in computational evolution. We document that genomes entrench certain dimensions of variation that were frequently explored during their evolutionary history. The genetic representation of these organisms also evolved to be more modular and hierarchical than expected by chance, and we show that these organizational properties correlate with increased fitness. Interestingly, the type of computational evolutionary experiment that produced this evolvability was very different from traditional digital evolution in that there was no objective, suggesting that open-ended, divergent evolutionary processes may be necessary for the evolution of evolvability.", "title": "" }, { "docid": "3685470e05a3f763817b9c6f28747336", "text": "G' A. Linz, Peter. An introduction to formal languages and automata / Peter Linz'--3'd cd charrgcs ftrr the second edition wercl t)volutionary rather than rcvolrrtionary and addressed Initially, I felt that giving solutions to exercises was undesirable hecause it lirrritcd the Chapter 1 fntroduction to the Theory of Computation. Issuu solution manual to introduction to languages. Introduction theory computation 2nd edition solution manual sipser. Structural Theory of automata: solution manual of theory of computation. 
Kellison theory of interest pdf. Transformation, Sylvester's theorem(without proof), Solution of Second Order. Linear Differential Higher Engineering Mathematics by B.S. Grewal, 40th Edition, Khanna. Publication. 2. Introduction Of Automata Theory, Languages and computationHopcroft. Motwani&Ulman UNIX system Utilities manual. 4.", "title": "" }, { "docid": "e1af677fc2a19ade2f315ffc6f660ca6", "text": "In enterprise and data center networks, the scalability of the data plane becomes increasingly challenging as forwarding tables and link speeds grow. Simply building switches with larger amounts of faster memory is not appealing, since high-speed memory is both expensive and power hungry. Implementing hash tables in SRAM is not appealing either because it requires significant overprovisioning to ensure that all forwarding table entries fit. Instead, we propose the BUFFALO architecture, which uses a small SRAM to store one Bloom filter of the addresses associated with each outgoing link. We provide a practical switch design leveraging flat addresses and shortest-path routing. BUFFALO gracefully handles false positives without reducing the packet-forwarding rate, while guaranteeing that packets reach their destinations with bounded stretch with high probability. We tune the sizes of Bloom filters to minimize false positives for a given memory size. We also handle routing changes and dynamically adjust Bloom filter sizes using counting Bloom filters in slow memory. Our extensive analysis, simulation, and prototype implementation in kernel-level Click show that BUFFALO significantly reduces memory cost, increases the scalability of the data plane, and improves packet-forwarding performance.", "title": "" }, { "docid": "c2891abf8297b5dcf0e21dfa9779a017", "text": "The success of knowledge-sharing communities like Wikipedia and the advances in automatic information extraction from textual and Web sources have made it possible to build large \"knowledge repositories\" such as DBpedia, Freebase, and YAGO. These collections can be viewed as graphs of entities and relationships (ER graphs) and can be represented as a set of subject-property-object (SPO) triples in the Semantic-Web data model RDF. Queries can be expressed in the W3C-endorsed SPARQL language or by similarly designed graph-pattern search. However, exact-match query semantics often fall short of satisfying the users' needs by returning too many or too few results. Therefore, IR-style ranking models are crucially needed.\n In this paper, we propose a language-model-based approach to ranking the results of exact, relaxed and keyword-augmented graph pattern queries over RDF graphs such as ER graphs. Our method estimates a query model and a set of result-graph models and ranks results based on their Kullback-Leibler divergence with respect to the query model. We demonstrate the effectiveness of our ranking model by a comprehensive user study.", "title": "" }, { "docid": "cbbd8c44de7e060779ed60c6edc31e3c", "text": "This letter presents a compact broadband microstrip-line-fed sleeve monopole antenna for application in the DTV system. The design of meandering the monopole into a compact structure is applied for size reduction. By properly selecting the length and spacing of the sleeve, the broadband operation for the proposed design can be achieved, and the obtained impedance bandwidth covers the whole DTV (470862 MHz) band. 
Most importantly, the matching condition over a wide frequency range can be performed well even when a small ground-plane length is used; meanwhile, a small variation in the impedance bandwidth is observed for the ground-plane length varied in a great range.", "title": "" }, { "docid": "9d3e0a8af748c9addf598a27f414e0b2", "text": "Although insecticide resistance is a widespread problem for most insect pests, frequently the assessment of resistance occurs over a limited geographic range. Herein, we report the first widespread survey of insecticide resistance in the USA ever undertaken for the house fly, Musca domestica, a major pest in animal production facilities. The levels of resistance to six different insecticides were determined (using discriminating concentration bioassays) in 10 collections of house flies from dairies in nine different states. In addition, the frequencies of Vssc and CYP6D1 alleles that confer resistance to pyrethroid insecticides were determined for each fly population. Levels of resistance to the six insecticides varied among states and insecticides. Resistance to permethrin was highest overall and most consistent across the states. Resistance to methomyl was relatively consistent, with 65-91% survival in nine of the ten collections. In contrast, resistance to cyfluthrin and pyrethrins + piperonyl butoxide varied considerably (2.9-76% survival). Resistance to imidacloprid was overall modest and showed no signs of increasing relative to collections made in 2004, despite increasing use of this insecticide. The frequency of Vssc alleles that confer pyrethroid resistance was variable between locations. The highest frequencies of kdr, kdr-his and super-kdr were found in Minnesota, North Carolina and Kansas, respectively. In contrast, the New Mexico population had the highest frequency (0.67) of the susceptible allele. The implications of these results to resistance management and to the understanding of the evolution of insecticide resistance are discussed.", "title": "" }, { "docid": "be79f036d17e26a3df61a6712b169c50", "text": "We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We also develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A detailed qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including PropBank, NomBank, QASRL, and AMR) along with many previously under-resourced ones, including implicit arguments and relations. The QAMR data and annotation code is made publicly available1 to enable future work on how best to model these complex phenomena.", "title": "" }, { "docid": "d46434bbbf73460bf422ebe4bd65b590", "text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. 
A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.", "title": "" }, { "docid": "2c5eb3fb74c6379dfd38c1594ebe85f4", "text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.", "title": "" }, { "docid": "fc9b6653b447cf23ed002c525c792d3f", "text": "Education is enhanced when students are able to be active, rather than passive, learners (McKeachie, 2002). Fortunately, social psychologists have a rich history of creating and publishing classroom demonstrations that allow for such active learning. Unfortunately, these demonstrations have been published in diverse journals, teaching manuals, and edited volumes that are not always readily available. The purpose of this article is to review demonstrations and exercises that have been developed for teaching students about social influence. Using an annotated bibliography format, we review more than five dozen techniques that assist instructors in demonstrating such social influence principles as cognitive dissonance, conformity, obedience, deindividuation, propaganda, framing, persuasion, advertising, social norms, and the selffulfilling prophecy.", "title": "" }, { "docid": "e74a15889f39ea03256fe5c7d9cb9819", "text": "In healthy cells, cytochrome c (Cyt c) is located in the mitochondrial intermembrane/intercristae spaces, where it functions as an electron shuttle in the respiratory chain and interacts with cardiolipin (CL). Several proapoptotic stimuli induce the permeabilization of the outer membrane, facilitate the communication between intermembrane and intercristae spaces and promote the mobilization of Cyt c from CL, allowing for Cyt c release. In the cytosol, Cyt c mediates the allosteric activation of apoptosis-protease activating factor 1, which is required for the proteolytic maturation of caspase-9 and caspase-3. Activated caspases ultimately lead to apoptotic cell dismantling. Nevertheless, cytosolic Cyt c has been associated also to vital cell functions (i.e. differentiation), suggesting that its release not always occurs in an all-or-nothing fashion and that mitochondrial outer membrane permeabilization may not invariably lead to cell death. This review deals with the events involved in Cyt c release from mitochondria, with special attention to its regulation and final consequences.", "title": "" }, { "docid": "060c24145965fba4c4489a4f1cfc34d0", "text": "In many real-world applications, data are represented by matrices or high-order tensors. Despite the promising performance, the existing 2-D discriminant analysis algorithms employ a single projection model to exploit the discriminant information for projection, making the model less flexible. 
In this paper, we propose a novel compound rank-k projection (CRP) algorithm for bilinear analysis. The CRP deals with matrices directly without transforming them into vectors, and it, therefore, preserves the correlations within the matrix and decreases the computation complexity. Different from the existing 2-D discriminant analysis algorithms, objective function values of CRP increase monotonically. In addition, the CRP utilizes multiple rank-k projection models to enable a larger search space in which the optimal solution can be found. In this way, the discriminant ability is enhanced. We have tested our approach on five data sets, including UUIm, CVL, Pointing'04, USPS, and Coil20. Experimental results show that the performance of our proposed CRP performs better than other algorithms in terms of classification accuracy.", "title": "" }, { "docid": "e0e62a76b1e2875f9aee585603da36ce", "text": "Article history: Available online 4 August 2012", "title": "" }, { "docid": "45260b1efb4858e231c8c15879db89d1", "text": "Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of both the attacks and the defense approaches is overwhelming. This paper presents two taxonomies for classifying attacks and defenses, and thus provides researchers with a better understanding of the problem and the current solution space. The attack classification criteria was selected to highlight commonalities and important features of attack strategies, that define challenges and dictate the design of countermeasures. The defense taxonomy classifies the body of existing DDoS defenses based on their design decisions; it then shows how these decisions dictate the advantages and deficiencies of proposed solutions.", "title": "" }, { "docid": "3f2c0a1fb27c4df6ff02bc7d0a885dfd", "text": "Advances in semiconductor manufacturing processes and large scale integration keep pushing demanding applications further away from centralized processing, and closer to the edges of the network (i.e. Edge Computing). It has become possible to perform complex in-network image processing using low-power embedded smart cameras, enabling a multitude of new collaborative image processing applications. This paper introduces OpenMV, a new low-power smart camera that lends itself naturally to wireless sensor networks and machine vision applications. The uniqueness of this platform lies in running an embedded Python3 interpreter, allowing its peripherals and machine vision library to be scripted in Python. In addition, its hardware is extensible via modules that augment the platform with new capabilities, such as thermal imaging and networking modules.", "title": "" }, { "docid": "a157987bf55765c495b9949b38c91ea2", "text": "www.thelancet.com Vol 373 May 16, 2009 1693 Anthony Costello, Mustafa Abbas, Adriana Allen, Sarah Ball, Sarah Bell, Richard Bellamy, Sharon Friel, Nora Groce, Anne Johnson, Maria Kett, Maria Lee, Caren Levy, Mark Maslin, David McCoy, Bill McGuire, Hugh Montgomery, David Napier, Christina Pagel, Jinesh Patel, Jose Antonio Puppim de Oliveira, Nanneke Redclift, Hannah Rees, Daniel Rogger, Joanne Scott, Judith Stephenson, John Twigg, Jonathan Wolff , Craig Patterson*", "title": "" } ]
scidocsrr
6a8634e600608faf08193cb6fda06816
Host Identity Protocol (HIP) Architecture
[ { "docid": "ccebd8a3c44632d760c9d9d4a4adfe01", "text": "Status of This Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract The Domain Name System Security Extensions (DNSSEC) add data origin authentication and data integrity to the Domain Name System. This document introduces these extensions and describes their capabilities and limitations. This document also discusses the services that the DNS security extensions do and do not provide. Last, this document describes the interrelationships between the documents that collectively describe DNSSEC.", "title": "" } ]
[ { "docid": "a8ca07bf7784d7ac1d09f84ac76be339", "text": "AbstructEstimation of 3-D information from 2-D image coordinates is a fundamental problem both in machine vision and computer vision. Circular features are the most common quadratic-curved features that have been addressed for 3-D location estimation. In this paper, a closed-form analytical solution to the problem of 3-D location estimation of circular features is presented. Two different cases are considered: 1) 3-D orientation and 3-D position estimation of a circular feature when its radius is known, and 2) 3-D orientation and 3-D position estimation of a circular feature when its radius is not known. As well, extension of the developed method to 3-D quadratic features is addressed. Specifically, a closed-form analytical solution is derived for 3-D position estimation of spherical features. For experimentation purposes, simulated as well as real setups were employed. Simulated experimental results obtained for all three cases mentioned above verified the analytical method developed in this paper. In the case of real experiments, a set of circles located on a calibration plate, whose locations were known with respect to a reference frame, were used for camera calibration as well as for the application of the developed method. Since various distortion factors had to be compensated in order to obtain accurate estimates of the parameters of the imaged circle-an ellipse-with respect to the camera's image frame, a sequential compensation procedure was applied to the input grey-level image. The experimental results obtained once more showed the validity of the total process involved in the 3-D location estimation of circular features in general and the applicability of the analytical method developed in this paper in particular.", "title": "" }, { "docid": "3f72a668554a2cb69170055a3522c37f", "text": "In ancient times goods and services were exchanged through barter system1 Gold, valuable metals and other tangibles like stones and shells were also exploited as medium of exchange. Now Paper Currency (PC) is country-wide accepted common medium of trade. It has three major flaws. First, the holder of currency is always at risk due to theft and robbery culture in most of the societies of world. Second, counterfeit 2 currency is a challenge for currency issuing authorities. Third, printing and transferring PC causes a heavy cost. Different organizations have introduced and implemented digital currency systems but none of them is governed by any government. In this paper we introduce Official digital currency System (ODCS). Our proposed digital currency is issued and controlled by the state/central bank of a country that is why we name it Official digital currency (ODC). The process of issuing ODC is almost same as that of Conventional Paper Currency (CPC) but controlling system is different. The proposal also explains country-wide process of day to day transactions in trade through ODCS. ODC is more secure, reliable, economical and easy to use. Here we introduce just the idea and compulsory modules of ODC system and not the implementable framework. We will present the implementable framework in a separate forthcoming publication.", "title": "" }, { "docid": "844fa359828628af6006c747a1d5edaa", "text": "We use deep learning to model interactions across two or more sets of objects, such as user–movie ratings, protein–drug bindings, or ternary useritem-tag interactions. 
The canonical representation of such interactions is a matrix (or a higherdimensional tensor) with an exchangeability property: the encoding’s meaning is not changed by permuting rows or columns. We argue that models should hence be Permutation Equivariant (PE): constrained to make the same predictions across such permutations. We present a parameter-sharing scheme and prove that it could not be made any more expressive without violating PE. This scheme yields three benefits. First, we demonstrate state-of-the-art performance on multiple matrix completion benchmarks. Second, our models require a number of parameters independent of the numbers of objects, and thus scale well to large datasets. Third, models can be queried about new objects that were not available at training time, but for which interactions have since been observed. In experiments, our models achieved surprisingly good generalization performance on this matrix extrapolation task, both within domains (e.g., new users and new movies drawn from the same distribution used for training) and even across domains (e.g., predicting music ratings after training on movies).", "title": "" }, { "docid": "fabe9774b1cfdce04c93bdefbdd1f0ad", "text": "We consider the case in which a robot has to navigate in an unknown environment but does not have enough on-board power or payload to carry a traditional depth sensor (e.g., a 3D lidar) and thus can only acquire a few (point-wise) depth measurements. We address the following question: is it possible to reconstruct the geometry of an unknown environment using sparse and incomplete depth measurements? Reconstruction from incomplete data is not possible in general, but when the robot operates in man-made environments, the depth exhibits some regularity (e.g., many planar surfaces with only a few edges); we leverage this regularity to infer depth from a small number of measurements. Our first contribution is a formulation of the depth reconstruction problem that bridges robot perception with the compressive sensing literature in signal processing. The second contribution includes a set of formal results that ascertain the exactness and stability of the depth reconstruction in 2D and 3D problems, and completely characterize the geometry of the profiles that we can reconstruct. Our third contribution is a set of practical algorithms for depth reconstruction: our formulation directly translates into algorithms for depth estimation based on convex programming. In real-world problems, these convex programs are very large and general-purpose solvers are relatively slow. For this reason, we discuss ad-hoc solvers that enable fast depth reconstruction in real problems. The last contribution is an extensive experimental evaluation in 2D and 3D problems, including Monte Carlo runs on simulated instances and testing on multiple real datasets. Empirical results confirm that the proposed approach ensures accurate depth reconstruction, outperforms interpolation-based strategies, and performs well even when the assumption of structured environment is violated. SUPPLEMENTAL MATERIAL • Video demonstrations: https://youtu.be/vE56akCGeJQ", "title": "" }, { "docid": "020fe2e94d306482399b4d1aaa083e5f", "text": "A key analytical task across many domains is model building and exploration for predictive analysis. Data is collected, parsed and analyzed for relationships, and features are selected and mapped to estimate the response of a system under exploration. 
As social media data has grown more abundant, data can be captured that may potentially represent behavioral patterns in society. In turn, this unstructured social media data can be parsed and integrated as a key factor for predictive intelligence. In this paper, we present a framework for the development of predictive models utilizing social media data. We combine feature selection mechanisms, similarity comparisons and model cross-validation through a variety of interactive visualizations to support analysts in model building and prediction. In order to explore how predictions might be performed in such a framework, we present results from a user study focusing on social media data as a predictor for movie box-office success.", "title": "" }, { "docid": "d9bbc712c2f9606eb92be883f0878a30", "text": "The state-of-the-art scheduler of containerized cloud services considers load-balance as the only criterion and neglects many others such as application performance. In the era of Big Data, however, applications have evolved to be highly data-intensive thus perform poorly in existing systems. This particularly holds for Platform-as-a-Service environments that encourage an application model of stateless application instances in containers reading and writing data to services storing states, e.g., key-value stores. To this end, this work strives to improve today's cloud services by incorporating sensitivity to both load-balance and application performance. We built and analyzed theoretical models that respect both dimensions, and unlike prior studies, our model abstracts the dilemma between load-balance and application performance into an optimization problem and employs a statistical method to meet the discrepant requirements. Using heuristic algorithms and approaches we try to solve the abstracted problems. We implemented the proposed approach in Diego (an open-source cloud service scheduler) and demonstrate that it can significantly boost the performance of containerized applications while preserving a relatively high load-balance.", "title": "" }, { "docid": "0b0465490e6263cef6033e5bb1cdf78f", "text": "Lee Cronk’s book That complex whole is about a variety of different kinds of culture wars, some restricted to an academic milieu and others well-known fixtures of public discourse in the United States and beyond. Most directly, it addresses a perennial debate in cultural anthropology: how should anthropologists define human culture, its boundaries and roles in human existence? Beyond that, it looks at the disciplinary split that runs through the different sub-fields of North American anthropology, one that distinguishes researchers who define themselves as scientists from those who take a more humanistic view of anthropological goals and procedures. Finally, and most indirectly, the book offers a perspective on the arguments over cultural practises and values that periodically – or perhaps constantly – ring across Western societies. The book raises a set of important questions about the relations between evolutionary theory and cultural anthropology and is well written and accessible, so that one would expect it to be a useful text for undergraduates and the general public. Unfortunately, its treatment of anthropological theorizing about culture is weak, and creates a distorted view of the history and state of the art of this work. 
Such difficulties might perhaps be expected in a text written by someone outside the discipline (see for example Pinker 1997, 2002), but are less understandable when they come from the pen of an anthropologist. Cronk begins the book with an observation, and a claim. The observation is one instance of an ethnographic commonplace: people say one thing, but actually and systematically do another. The Mukogodo pastoralists in whose Kenyan communities Cronk did his fieldwork express a preference for male children over female children, but treat their daughters somewhat better than they do their sons. Examples of such contradictions can be multiplied, and Cronk cites a number of such examples, from other parts of Africa, from Asia and from the United States. Based on his research, he posits that in the Mukogodo case the favoritism shown toward daughters is an example of an evolved human tendency to favour children with the best prospects, especially in marriage, in later life. The hypothesis is an interesting and useful one. It could be – and probably is being – extended by fieldwork in other societies where similarly gender-differentiated prospects exist.", "title": "" }, { "docid": "d60c3d4a21f5d364c8d323cd08814d6a", "text": "Natural language processing (NLP) is a part of the artificial intelligence domain focused on communication between humans and computers. NLP attempts to address the inherent problem that while human communications are often ambiguous and imprecise, computers require unambiguous and precise messages to enable understanding. The accounting, auditing and finance domains frequently put forth textual documents intended to communicate a wide variety of messages, including, but not limited to, corporate financial performance, management’s assessment of current and future firm performance, analysts’ assessments of firm performance, domain standards and regulations as well as evidence of compliance with relevant standards and regulations. NLP applications have been used to mine these documents to obtain insights, make inferences and to create additional methodologies and artefacts to advance knowledge in accounting, auditing and finance. This paper synthesizes the extant literature in NLP in accounting, auditing and finance to establish the state of current knowledge and to identify paths for future research. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "fc4ea7391c1500851ec0d37beed4cd90", "text": "As a crucial operation, routing plays an important role in various communication networks. In the context of data and sensor networks, routing strategies such as shortest-path, multi-path and potential-based (“all-path”) routing have been developed. Existing results in the literature show that the shortest path and all-path routing can be obtained from L1 and L2 flow optimization, respectively. Based on this connection between routing and flow optimization in a network, in this paper we develop a unifying theoretical framework by considering flow optimization with mixed (weighted) L1/L2-norms. We obtain a surprising result: as we vary the trade-off parameter θ, the routing graphs induced by the optimal flow solutions span from shortest-path to multi-path to all-path routing-this entire sequence of routing graphs is referred to as the routing continuum. We also develop an efficient iterative algorithm for computing the entire routing continuum. 
Several generalizations are also considered, with applications to traffic engineering, wireless sensor networks, and network robustness analysis.", "title": "" }, { "docid": "786f6c09777788c3456e6613729c0292", "text": "An experimental approach to studying the properties of word embeddings is proposed. Controlled experiments, achieved through modifications of the training corpus, permit the demonstration of direct relations between word properties and word vector direction and length. The approach is demonstrated using the word2vec CBOW model with experiments that independently vary word frequency and word co-occurrence noise. The experiments reveal that word vector length depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. The coefficients of linearity depend upon the word. The special point in feature space, defined by the (artificial) word with pure noise in its co-occurrence distribution, is found to be small but non-zero.", "title": "" }, { "docid": "b4d5bfc26bac32e1e1db063c3696540a", "text": "Symmetric positive semidefinite (SPSD) matrix approximation is an important problem with applications in kernel methods. However, existing SPSD matrix approximation methods such as the Nyström method only have weak error bounds. In this paper we conduct in-depth studies of an SPSD matrix approximation model and establish strong relative-error bounds. We call it the prototype model for it has more efficient and effective extensions, and some of its extensions have high scalability. Though the prototype model itself is not suitable for large-scale data, it is still useful to study its properties, on which the analysis of its extensions relies. This paper offers novel theoretical analysis, efficient algorithms, and a highly accurate extension. First, we establish a lower error bound for the prototype model, and we improve the error bound of an existing column selection algorithm to match the lower bound. In this way, we obtain the first optimal column selection algorithm for the prototype model. We also prove that the prototype model is exact under certain conditions. Second, we develop a simple column selection algorithm with a provable error bound. Third, we propose a socalled spectral shifting model to make the approximation more accurate when the spectrum of the matrix decays slowly, and the improvement is theoretically quantified. The spectral shifting method can also be applied to improve other SPSD matrix approximation models.", "title": "" }, { "docid": "f1d67673483176bd6e596e4f078c17b4", "text": "The current web suffers information overloading: it is increasingly difficult and time consuming to obtain information desired. Ontologies, the key concept behind the Semantic Web, will provide the means to overcome such problem by providing meaning to the available data. An ontology provides a shared and common understanding of a domain and information machine-processable semantics. To make the Semantic Web a reality and lift current Web to its full potential, powerful and expressive languages are required. Such web ontology languages must be able to describe and organize knowledge in the Web in a machine understandable way. However, organizing knowledge requires the facilities of a logical formalism which can deal with temporal, spatial, epistemic, and inferential aspects of knowledge. 
Implementations of Web ontology languages must provide these inference services, making them much more than just simple data storage and retrieval systems. This paper presents a state of the art for the most relevant Semantic Web Languages: XML, RDF(s), OIL, DAML+OIL, and OWL, together with a detailed comparison based on modeling primitives and language to language characteristics.", "title": "" }, { "docid": "d16a787399db6309ab4563f4265e91b9", "text": "The real-time information on news sites, blogs and social networking sites changes dynamically and spreads rapidly through the Web. Developing methods for handling such information at a massive scale requires that we think about how information content varies over time, how it is transmitted, and how it mutates as it spreads.\n We describe the News Information Flow Tracking, Yay! (NIFTY) system for large scale real-time tracking of \"memes\" - short textual phrases that travel and mutate through the Web. NIFTY is based on a novel highly-scalable incremental meme-clustering algorithm that efficiently extracts and identifies mutational variants of a single meme. NIFTY runs orders of magnitude faster than our previous Memetracker system, while also maintaining better consistency and quality of extracted memes.\n We demonstrate the effectiveness of our approach by processing a 20 terabyte dataset of 6.1 billion blog posts and news articles that we have been continuously collecting for the last four years. NIFTY extracted 2.9 billion unique textual phrases and identified more than 9 million memes. Our meme-tracking algorithm was able to process the entire dataset in less than five days using a single machine. Furthermore, we also provide a live deployment of the NIFTY system that allows users to explore the dynamics of online news in near real-time.", "title": "" }, { "docid": "252d6a298208337488960568c3d36ec7", "text": "The rapid development of remote sensing technology allows us to get images with high and very high resolution (VHR). VHR imagery scene classification has become an important and challenging problem. In this paper, we introduce a framework for VHR scene understanding. First, the pretrained visual geometry group network (VGG-Net) model is proposed as deep feature extractors to extract informative features from the original VHR images. Second, we select the fully connected layers constructed by VGG-Net in which each layer is regarded as separated feature descriptors. And then we combine between them to construct final representation of the VHR image scenes. Third, discriminant correlation analysis (DCA) is adopted as feature fusion strategy to further refine the original features extracting from VGG-Net, which allows a more efficient fusion approach with small cost than the traditional feature fusion strategies. We apply our approach to three challenging data sets: 1) UC MERCED data set that contains 21 different areal scene categories with submeter resolution; 2) WHU-RS data set that contains 19 challenging scene categories with various resolutions; and 3) the Aerial Image data set that has a number of 10 000 images within 30 challenging scene categories with various resolutions. The experimental results demonstrate that our proposed method outperforms the state-of-the-art approaches. Using feature fusion technique achieves a higher accuracy than solely using the raw deep features. 
Moreover, the proposed method based on DCA fusion produces good informative features to describe the images scene with much lower dimension.", "title": "" }, { "docid": "9d74aa736c43914c16262c6ce838d563", "text": "In this paper, we propose two level control system for a mobile robot. The first level subsystem deals with the control of the linear and angular volocities using a multivariable PI controller described with a full matrix. The position control of the mobile robot represents the second level control, which is nonlinear. The nonlinear control design is implemented by a modified backstepping algorithm whose parameters are adjusted by a genetic algorithm, which is a robust nonlinear optimization method. The performance of the proposed system is investigated using a dynamic model of a nonholonomic mobile robot with friction. We present a new dynamic model in which the angular velocities of wheels are main variables. Simulation results show the good quality of position tracking capabilities a mobile robot with the various viscous friction torques. Copyright © 2005 IFAC.", "title": "" }, { "docid": "3663d877d157c8ba589e4d699afc460f", "text": "Studies of search habits reveal that people engage in many search tasks involving collaboration with others, such as travel planning, organizing social events, or working on a homework assignment. However, current Web search tools are designed for a single user, working alone. We introduce SearchTogether, a prototype that enables groups of remote users to synchronously or asynchronously collaborate when searching the Web. We describe an example usage scenario, and discuss the ways SearchTogether facilitates collaboration by supporting awareness, division of labor, and persistence. We then discuss the findings of our evaluation of SearchTogether, analyzing which aspects of its design enabled successful collaboration among study participants.", "title": "" }, { "docid": "83b64c0cc110b24902c2e8fa68b06a26", "text": "The concept of intelligent toothbrush, capable of monitoring brushing motion, orientation through the grip axis, during toothbrushing was suggested in our previous study. In this study, we describe a tooth brushing pattern classification algorithm using three-axis accelerometer and three-axis magnetic sensor. We have found that inappropriate tooth brushing pattern showed specific moving patterns. In order to trace the position and orientation of toothbrush in a mouth, we need to know absolute coordinate information of toothbrush. By applying tilt-compensated azimuth (heading) calculation algorithm, which is generally used in small telematics devices, we could find the inclination and orientation information of toothbrush. To assess the feasibility of the proposed algorithm, 8 brushing patterns were preformed by 6 individual healthy subjects. The proposed algorithm showed the detection ratio of 98%. This study showed that the proposed monitoring system was conceived to aid dental care personnel in patient education and instruction in oral hygiene regarding brushing style.", "title": "" }, { "docid": "4b8823bffcc77968b7ac087579ab84c9", "text": "Numerous complains have been made by Android users who severely suffer from the sluggish response when interacting with their devices. However, very few studies have been conducted to understand the user-perceived latency or mitigate the UI-lagging problem. 
In this paper, we conduct the first systematic measurement study to quantify the user-perceived latency using typical interaction-intensive Android apps in running with and without background workloads. We reveal the insufficiency of Android system in ensuring the performance of foreground apps and therefore design a new system to address the insufficiency accordingly. We develop a lightweight tracker to accurately identify all delay-critical threads that contribute to the slow response of user interactions. We then build a resource manager that can efficiently schedule various system resources including CPU, I/O, and GPU, for optimizing the performance of these threads. We implement the proposed system on commercial smartphones and conduct comprehensive experiments to evaluate our implementation. Evaluation results show that our system is able to significantly reduce the user-perceived latency of foreground apps in running with aggressive background workloads, up to 10x, while incurring negligible system overhead of less than 3.1 percent CPU and 7 MB memory.", "title": "" }, { "docid": "2191552e347223ce8ed132125bdbc409", "text": "We introduce the task of cross-lingual lexical entailment, which aims to detect whether the meaning of a word in one language can be inferred from the meaning of a word in another language. We construct a gold standard for this task, and propose an unsupervised solution based on distributional word representations. As commonly done in the monolingual setting, we assume a word e entails a word f if the prominent context features of e are a subset of those of f . To address the challenge of comparing contexts across languages, we propose a novel method for inducing sparse bilingual word representations from monolingual and parallel texts. Our approach yields an Fscore of 70%, and significantly outperforms strong baselines based on translation and on existing word representations.", "title": "" }, { "docid": "9f2db5cf1ee0cfd0250e68bdbc78b434", "text": "A novel transverse equivalent network is developed in this letter to efficiently analyze a recently proposed leaky-wave antenna in substrate integrated waveguide (SIW) technology. For this purpose, precise modeling of the SIW posts for any distance between vias is essential to obtain accurate results. A detailed parametric study is performed resulting in leaky-mode dispersion curves as a function of the main geometrical dimensions of the antenna. Finally, design curves that directly provide the requested dimensions to synthesize the desired scanning response and leakage rate are reported and validated with experiments.", "title": "" } ]
scidocsrr
e90b6b6af3940bf928ac4e41851df399
Static and Dynamic 4-Way Handshake Solutions to Avoid Denial of Service Attack in Wi-Fi Protected Access and IEEE 802.11i
[ { "docid": "a9a08787ad4fe579e5b3aceee11f67fe", "text": "802.11i is an IEEE standard designed to provide enhanced MAC security in wireless networks. The authentication process involves three entities: the supplicant (wireless device), the authenticator (access point), and the authentication server (e.g., a backend RADIUS server). A 4-Way Handshake must be executed between the supplicant and the authenticator to derive a fresh pairwise key and/or group key for subsequent data transmissions.We analyze the 4-Way Handshake protocol using a finite-state verification tool and find a Denial-of-Service attack. The attack involves forging initial messages from the authenticator to the supplicant to produce inconsistent keys in peers. Three repairs are proposed; based on various considerations, the third one appears to be the best. The resulting improvement to the standard, adopted by the 802.11 TGi in their final deliberation, involves only a minor change in the algorithm used by the supplicant.", "title": "" }, { "docid": "8dcb99721a06752168075e6d45ee64c7", "text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu­ rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti­ ble to malicious denial-of-service (DoS) attacks tar­ geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef­ ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.", "title": "" } ]
[ { "docid": "66334ca62a62a78cab72c80b9a19072b", "text": "End-to-end neural models have made significant progress in question answering, however recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query then finds a relevant answer, and a fine-grain module which scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and selfattention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new stateof-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.", "title": "" }, { "docid": "fc7705cc3fc4b1114c4f7542ae210947", "text": "Arsenic (As) is one of the most toxic contaminants found in the environment. Development of novel detection methods for As species in water with the potential for field use has been an urgent need in recent years. In past decades, surface-enhanced Raman scattering (SERS) has gained a reputation as one of the most sensitive spectroscopic methods for chemical and biomolecular sensing. The SERS technique has emerged as an extremely promising solution for in-situ detection of arsenic species in the field, particularly when coupled with portable/handheld Raman spectrometers. In this article, the recent advances in SERS analysis of arsenic species in water media are reviewed, and the potential of this technique for fast screening and field testing of arsenic-contaminated environmental water samples is discussed. The problems that remain in the field are also discussed and an outlook for the future is featured at the end of the article.", "title": "" }, { "docid": "d1b41debabbddbbab02ae6c96635b71c", "text": "Demosaicking and denoising are among the most crucial steps of modern digital camera pipelines and their joint treatment is a highly ill-posed inverse problem where at-least two-thirds of the information are missing and the rest are corrupted by noise. This poses a great challenge in obtaining meaningful reconstructions and a special care for the efficient treatment of the problem is required. While there are several machine learning approaches that have been recently introduced to deal with joint image demosaicking-denoising, in this work we propose a novel deep learning architecture which is inspired by powerful classical image regularization methods and large-scale convex optimization techniques. Consequently, our derived network is more transparent and has a clear interpretation compared to alternative competitive deep learning approaches. Our extensive experiments demonstrate that our network outperforms any previous approaches on both noisy and noise-free data. This improvement in reconstruction quality is attributed to the principled way we design our network architecture, which also requires fewer trainable parameters than the current state-of-the-art deep network solution. 
Finally, we show that our network has the ability to generalize well even when it is trained on small datasets, while keeping the overall number of trainable parameters low.", "title": "" }, { "docid": "9cad72ab02778fa410a6bd1feb608283", "text": "Acoustic-based music recommender systems have received increasing interest in recent years. Due to the semantic gap between low level acoustic features and high level music concepts, many researchers have explored collaborative filtering techniques in music recommender systems. Traditional collaborative filtering music recommendation methods only focus on user rating information. However, there are various kinds of social media information, including different types of objects and relations among these objects, in music social communities such as Last.fm and Pandora. This information is valuable for music recommendation. However, there are two challenges to exploit this rich social media information: (a) There are many different types of objects and relations in music social communities, which makes it difficult to develop a unified framework taking into account all objects and relations. (b) In these communities, some relations are much more sophisticated than pairwise relation, and thus cannot be simply modeled by a graph. In this paper, we propose a novel music recommendation algorithm by using both multiple kinds of social media information and music acoustic-based content. Instead of graph, we use hypergraph to model the various objects and relations, and consider music recommendation as a ranking problem on this hypergraph. While an edge of an ordinary graph connects only two objects, a hyperedge represents a set of objects. In this way, hypergraph can be naturally used to model high-order relations. Experiments on a data set collected from the music social community Last.fm have demonstrated the effectiveness of our proposed algorithm.", "title": "" }, { "docid": "cfe31ce3a6a23d9148709de6032bd90b", "text": "I argue that Non-Photorealistic Rendering (NPR) research will play a key role in the scientific understanding of visual art and illustration. NPR can contribute to scientific understanding of two kinds of problems: how do artists create imagery, and how do observers respond to artistic imagery? I sketch out some of the open problems, how NPR can help, and what some possible theories might look like. Additionally, I discuss the thorny problem of how to evaluate NPR research and theories.", "title": "" }, { "docid": "5dad207fe80469fe2b80d1f1e967575e", "text": "As the geolocation capabilities of smartphones continue to improve, developers have continued to create more innovative applications that rely on this location information for their primary function. This can be seen with Niantic’s release of Pokémon GO, which is a massively multiplayer online role playing and augmented reality game. This game became immensely popular within just a few days of its release. However, it also had the propensity to be a distraction to drivers resulting in numerous accidents, and was used to as a tool by armed robbers to lure unsuspecting users into secluded areas. This facilitates a need for forensic investigators to be able to analyze the data within the application in order to determine if it may have been involved in these incidents. Because this application is new, limited research has been conducted regarding the artifacts that can be recovered from the application. 
In this paper, we aim to fill the gaps within the current research by assessing what forensically relevant information may be recovered from the application, and understanding the circumstances behind the creation of this information. Our research focuses primarily on the artifacts generated by the Upsight analytics platform, those contained within the bundles directory, and the Pokémon Go Plus accessory. Moreover, we present our new application specific analysis tool that is capable of extracting forensic artifacts from a backup of the Android application, and presenting them to an investigator in an easily readable format. This analysis tool exceeds the capabilities of UFED Physical Analyzer in processing Pokémon GO application data.", "title": "" }, { "docid": "0b21eda7c840d37a9486d8bfccfe45ba", "text": "Enterprise systems are complex and expensive and create dramatic organizational change. Implementing an enterprise system can be the \"corporate equivalent of a root canal,\" a meaningful analogy given that an ES with its single database replaces myriad special-purpose legacy systems that once operated in isolation. An ES, or enterprise resource planning system, has the Herculean task of seamlessly supporting and integrating a full range of business processes, uniting functional islands and making their data visible across the organization in real time. The authors offer guidelines based on five years of observing ES implementations that can help managers circumvent obstacles and control the tensions during and after the project.", "title": "" }, { "docid": "cb456d94420dcc3811983004a1af7c6b", "text": "A new method for deriving isolated buck-boost (IBB) converter with single-stage power conversion is proposed in this paper and novel IBB converters based on high-frequency bridgeless-interleaved boost rectifiers are presented. The semiconductors, conduction losses, and switching losses are reduced significantly by integrating the interleaved boost converters into the full-bridge diode-rectifier. Various high-frequency bridgeless boost rectifiers are harvested based on different types of interleaved boost converters, including the conventional boost converter and high step-up boost converters with voltage multiplier and coupled inductor. The full-bridge IBB converter with voltage multiplier is analyzed in detail. The voltage multiplier helps to enhance the voltage gain and reduce the voltage stresses of the semiconductors in the rectification circuit. Hence, a transformer with reduced turns ratio and parasitic parameters, and low-voltage rated MOSFETs and diodes with better switching and conduction performances can be applied to improve the efficiency. Moreover, optimized phase-shift modulation strategy is applied to the full-bridge IBB converter to achieve isolated buck and boost conversion. What's more, soft-switching performance of all of the active switches and diodes within the whole operating range is achieved. A 380-V output prototype is fabricated to verify the effectiveness of the proposed IBB converters and its control strategies.", "title": "" }, { "docid": "358faa358eb07b8c724efcdb72334dc7", "text": "We present a novel simple technique for rapidly creating and presenting interactive immersive 3D exploration experiences of 2D pictures and images of natural and artificial landscapes. Various application domains, ranging from virtual exploration of works of art to street navigation systems, can benefit from the approach. 
The method, dubbed PEEP, is motivated by the perceptual characteristics of the human visual system in interpreting perspective cues and detecting relative angles between lines. It applies to the common perspective images with zero or one vanishing points, and does not require the extraction of a precise geometric description of the scene. Taking as input a single image without other information, an automatic analysis technique fits a simple but perceptually consistent parametric 3D representation of the viewed space, which is used to drive an indirect constrained exploration method capable to provide the illusion of 3D exploration with realistic monocular (perspective and motion parallax) and binocular (stereo) depth cues. The effectiveness of the method is demonstrated on a variety of casual pictures and exploration configurations, including mobile devices.", "title": "" }, { "docid": "6a71031c810791b93bc06116b75c2c15", "text": "With the popularity of social media platforms such as Facebook and Twitter, the amount of useful data in these sources is rapidly increasing, making them promising places for information acquisition. This research aims at the customized organization of a social media corpus using focused topic hierarchy. It organizes the contents into different structures to meet with users' different information needs (e.g., \"iPhone 5 problem\" or \"iPhone 5 camera\"). To this end, we introduce a novel function to measure the likelihood of a topic hierarchy, by which the users' information need can be incorporated into the process of topic hierarchy construction. Using the structure information within the generated topic hierarchy, we then develop a probability based model to identify the representative contents for topics to assist users in document retrieval on the hierarchy. Experimental results on real world data illustrate the effectiveness of our method and its superiority over state-of-the-art methods for both information organization and retrieval tasks.", "title": "" }, { "docid": "3ff4feeb6edd9b07316122780931e4a5", "text": "Content shared on social media platforms has been identified to be valuable in gaining insights into people's mental health experiences. Although there has been widespread adoption of photo-sharing platforms such as Instagram in recent years, the role of visual imagery as a mechanism of self-disclosure is less understood. We study the nature of visual attributes manifested in images relating to mental health disclosures on Instagram. Employing computer vision techniques on a corpus of thousands of posts, we extract and examine three visual attributes: visual features (e.g., color), themes, and emotions in images. Our findings indicate the use of imagery for unique self-disclosure needs, quantitatively and qualitatively distinct from those shared via the textual modality: expressions of emotional distress, calls for help, and explicit display of vulnerability. 
We discuss the relationship of our findings to literature in visual sociology, in mental health self disclosure, and implications for the design of health interventions.", "title": "" }, { "docid": "62edabfb877e280dfe69035dc7d0f1cb", "text": "OBJECTIVES\nTo present the importance of Evidence-based Health Informatics (EBHI) and the ethical imperative of this approach; to highlight the work of the IMIA Working Group on Technology Assessment and Quality Improvement and the EFMI Working Group on Assessment of Health Information Systems; and to introduce the further important evaluation and evidence aspects being addressed.\n\n\nMETHODS\nReviews of IMIA, EFMA and other initiatives, together with literature reviews on evaluation methods and on published systematic reviews.\n\n\nRESULTS\nPresentation of the rationale for the health informatics domain to adopt a scientific approach by assessing impact, avoiding harm, and empirically demonstrating benefit and best use; reporting of the origins and rationale of the IMIA- and EQUATOR-endorsed Statement on Reporting of Evaluation Studies in Health Informatics (STARE-HI) and of the IMIA WG's Guideline for Good Evaluation Practice in Health Informatics (GEP-HI); presentation of other initiatives for objective evaluation; and outlining of further work in hand on usability and indicators; together with the case for development of relevant evaluation methods in newer applications such as telemedicine. The focus is on scientific evaluation as a reliable source of evidence, and on structured presentation of results to enable easy retrieval of evidence.\n\n\nCONCLUSIONS\nEBHI is feasible, necessary for efficiency and safety, and ethically essential. Given the significant impact of health informatics on health systems, care delivery and personal health, it is vital that cultures change to insist on evidence-based policies and investment, and that emergent global moves for this are supported.", "title": "" }, { "docid": "54f61711e0ee1d5493b08dde28594367", "text": "With the advent of faster and cheaper computers, optimization based control methodologies have become a viable candidate for control of nonlinear systems. Over the past twenty years, a group of such control schemes have been successfully used in the process control industry where the processes are either intrinsically stable or have very large time constants. The purpose of this thesis is to provide a theoretical framework for synthesis of a class of optimization based control schemes, known as receding horizon control t echniques for nonlinear systems such as unmanned aerial vehicles. It is well known that unconstrained infinite horizon optimal control may be used to construct a stabilizing controller for a nonlinear system. In t his thesis, we show that similar stabilization results may be achieved using unconstrained finite horizon optimal control. The key idea is to approximate the tail of the infinite horizon costto-go using, as terminal cost, an appropriate control Lyapunov function (eLF) . A eLF can be thought of as generalization of the concept of a Lyapunov function to systems with inputs. Roughly speaking, the terminal eLF should provide an (incremental) upper bound on the cost . In this fashion , important stability characteristics may be retained without the use of t erminal constraints such as those employed by a number of other researchers. The absence of constraints allows a significant speedup in computation. 
Furthermore, it is shown that in order to guarantee stability, it suffices to satisfy an improvement property, thereby relaxing the requirement that truly optimal trajectories be found. We provide a complete analysis of the stability and region of attraction/operation properties of receding horizon control strategies that utilize finite horizon approximations in the proposed class. It is shown that the guaranteed region of operation contains that of the e LF controller and may be made as large as desired by increasing", "title": "" }, { "docid": "e9e37212a793588b0e86075961ed8b9f", "text": "This paper presents a method to use View based approach in Bangla Optical Character Recognition (OCR) system providing reduced data set to the ANN classification engine rather than the traditional OCR methods. It describes how Bangla characters are processed, trained and then recognized with the use of a Backpropagation Artificial neural network. This is the first published account of using a segmentation-free optical character recognition system for Bangla using a view based approach. The methodology presented here assumes that the OCR pre-processor has presented the input images to the classification engine described here. The size and the font face used to render the characters are also significant in both training and classification. The images are first converted into greyscale and then to binary images; these images are then scaled to a fit a pre-determined area with a fixed but significant number of pixels. The feature vectors are then formed extracting the characteristics points, which in this case is simply a series of 0s and 1s of fixed length. Finally, a Artificial neural network is chosen for the training and classification process. Although the steps are simple, and the simplest network is chosen for the training and recognition process.", "title": "" }, { "docid": "2b240f414cfa35cd6b8206fbffb264a4", "text": "Purpose – Many terms commonly used in the field of knowledge management (KM) have multiple uses and sometimes conflicting definitions because they are adapted from other research streams. Discussions of the various hierarchies of data, information, knowledge, and other related terms, although of value, are limited in providing support for KM. The purpose of this this paper is to define a new set of terminology and develop a five-tier knowledge management hierarchy (5TKMH) that can provide guidance to managers involved in KM efforts. Design/methodology/approach – The 5TKMH is developed by extending the knowledge hierarchy to include an individual and an innovation tier. Findings – The 5TKMH includes all of the types of KM identified in the literature, provides a tool for evaluating the KM effort in a firm, identifies the relationships between knowledge sources, and provides an evolutionary path for KM efforts within the firm. Research limitations/implications – The 5TKMH has not been formally tested. Practical implications – The 5TKMH supports a KM life-cycle that provides guidance to the chief knowledge officer and can be employed to inventory knowledge assets, evaluate KM strategy, and plan and manage the evolution of knowledge assets in the firm. 
Originality/value – In this paper, a new set of terminology is defined and a 5TKMH is developed that can provide guidance to managers involved in KM efforts and determining the future path of KM in the firm.", "title": "" }, { "docid": "43fa16b19c373e2d339f45c71a0a2c22", "text": "McKusick-Kaufman syndrome is a human developmental anomaly syndrome comprising mesoaxial or postaxial polydactyly, congenital heart disease and hydrometrocolpos. This syndrome is diagnosed most frequently in the Old Order Amish population and is inherited in an autosomal recessive pattern with reduced penetrance and variable expressivity. Homozygosity mapping and linkage analyses were conducted using two pedigrees derived from a larger pedigree published in 1978. The PedHunter software query system was used on the Amish Genealogy Database to correct the previous pedigree, derive a minimal pedigree connecting those affected sibships that are in the database and determine the most recent common ancestors of the affected persons. Whole genome short tandem repeat polymorphism (STRP) screening showed homozygosity in 20p12, between D20S162 and D20S894 , an area that includes the Alagille syndrome critical region. The peak two-point LOD score was 3.33, and the peak three-point LOD score was 5.21. The physical map of this region has been defined, and additional polymorphic markers have been isolated. The region includes several genes and expressed sequence tags (ESTs), including the jagged1 gene that recently has been shown to be haploinsufficient in the Alagille syndrome. Sequencing of jagged1 in two unrelated individuals affected with McKusick-Kaufman syndrome has not revealed any disease-causing mutations.", "title": "" }, { "docid": "00ed53e43725d782b38c185faa2c8fd2", "text": "In this paper we evaluate tensegrity probes on the basis of the EDL phase performance of the probe in the context of a mission to Titan. Tensegrity probes are structurally designed around tension networks and are composed of tensile and compression elements. Such probes have unique physical force distribution properties and can be both landing and mobility platforms, allowing for dramatically simpler mission profile and reduced costs. Our concept is to develop a tensegrity probe in which the tensile network can be actively controlled to enable compact stowage for launch followed by deployment in preparation for landing. Due to their natural compliance and structural force distribution properties, tensegrity probes can safely absorb significant impact forces, enabling high speed Entry, Descent, and Landing (EDL) scenarios where the probe itself acts much like an airbag. However, unlike an airbag which must be discarded after a single use, the tensegrity probe can actively control its shape to provide compliant rolling mobility while still maintaining its ability to safely absorb impact shocks that might occur during exploration. (See Figure 1) This combination of functions from a single structure enables compact and light-weight planetary exploration missions with the capabilities of traditional wheeled rovers, but with the mass and cost similar or less than a stationary probe. In this paper we cover this new mission concept and tensegrity probe technologies for compact storage, EDL, and surface mobility, with an focus on analyzing the landing phase performance and ability to protect and deliver scientific payloads. 
The analysis is then supported with results from physical prototype drop-tests.", "title": "" }, { "docid": "d29cca7c16b0e5b43c85e1a8701d735f", "text": "The sparse matrix solver by LU factorization is a serious bottleneck in Simulation Program with Integrated Circuit Emphasis (SPICE)-based circuit simulators. The state-of-the-art Graphics Processing Units (GPU) have numerous cores sharing the same memory, provide attractive memory bandwidth and compute capability, and support massive thread-level parallelism, so GPUs can potentially accelerate the sparse solver in circuit simulators. In this paper, an efficient GPU-based sparse solver for circuit problems is proposed. We develop a hybrid parallel LU factorization approach combining task-level and data-level parallelism on GPUs. Work partitioning, number of active thread groups, and memory access patterns are optimized based on the GPU architecture. Experiments show that the proposed LU factorization approach on NVIDIA GTX580 attains an average speedup of 7.02× (geometric mean) compared with sequential PARDISO, and 1.55× compared with 16-threaded PARDISO. We also investigate bottlenecks of the proposed approach by a parametric performance model. The performance of the sparse LU factorization on GPUs is constrained by the global memory bandwidth, so the performance can be further improved by future GPUs with larger memory bandwidth.", "title": "" }, { "docid": "4faa5fd523361d472fc0bea8508c58f8", "text": "This paper reviews the current state of laser scanning from airborne and terrestrial platforms for geometric reconstruction of object shape and size. The current performance figures of sensor systems are presented in an overview. Next, their calibration and the orientation of the acquired point clouds is discussed. For airborne deployment this is usually one step, whereas in the terrestrial case laboratory calibration and registration of point clouds are (still) two distinct, independent steps. As laser scanning is an active measurement technology, the interaction of the emitted energy with the object surface has influences on the range measurement. This has to be considered in order to explain geometric phenomena in the data. While the problems, e.g. multiple scattering, are understood well, there is currently a lack of remedies. Then, in analogy to the processing chain, segmentation approaches for laser scanning data are reviewed. Segmentation is a task relevant for almost all applications. Likewise, DTM (digital terrain model) reconstruction is relevant for many applications of airborne laser scanning, and is therefore discussed, too. This paper reviews the main processing steps necessary for many applications of laser scanning.", "title": "" }, { "docid": "2b9e29da5ee9abd3f0f7e18cea54ae4e", "text": "This paper addresses video summarization, or the problem of distilling a raw video into a shorter form while still capturing the original story. We show that visual representations supervised by freeform language make a good fit for this application by extending a recent submodular summarization approach [9] with representativeness and interestingness objectives computed on features from a joint vision-language embedding space. We perform an evaluation on two diverse datasets, UT Egocentric [18] and TV Episodes [45], and show that our new objectives give improved summarization ability compared to standard visual features alone. 
Our experiments also show that the vision-language embedding need not be trained on domain-specific data, but can be learned from standard still image vision-language datasets and transferred to video. A further benefit of our model is the ability to guide a summary using freeform text input at test time, allowing user customization.", "title": "" } ]
scidocsrr
1dd2e2ac1d44cdebfd066486320bb93a
Thematic Analysis and Visualization of Textual Corpus
[ { "docid": "b77ab33226f6d643aee49d63d3485d46", "text": "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising.", "title": "" } ]
[ { "docid": "9c47d1896892c663987caa24d4a70037", "text": "Multi-pitch estimation of sources in music is an ongoing research area that has a wealth of applications in music information retrieval systems. This paper presents the systematic evaluations of over a dozen competing methods and algorithms for extracting the fundamental frequencies of pitched sound sources in polyphonic music. The evaluations were carried out as part of the Music Information Retrieval Evaluation eXchange (MIREX) over the course of two years, from 2007 to 2008. The generation of the dataset and its corresponding ground-truth, the methods by which systems can be evaluated, and the evaluation results of the different systems are presented and discussed.", "title": "" }, { "docid": "ea30c3baad2f7f74661e85c7155e6fab", "text": "Electrical stimulation of the spinal cord at C7D1 evoked triphasic descending spinal cord evoked potentials (DSCEP) from an oesophago-vertebral recording at D8D8 or D1OD1O. Ascending SCEPs (ASCEP) larger and similar in shape were also observed when the orientation of the stimulating and recording dipoles was reversed. Both SCEPs are in part generated by descending and ascending synchronous excitation of neuronal volume-conducted spinal cord dipoles.", "title": "" }, { "docid": "06ae56bc104dbcaa6c82c5b3d021d7fe", "text": "Open Innovation is a phenomenon that has become increasingly important for both practice and theory over the last few years. The reasons are to be found in shorter innovation cycles, industrial research and development’s escalating costs as well as in the dearth of resources. Subsequently, the open source phenomenon has attracted innovation researchers and practitioners. The recent era of open innovation started when practitioners realised that companies that wished to commercialise both their own ideas as well as other firms’ innovation should seek new ways to bring their in-house ideas to market. They need to deploy pathways outside their current businesses and should realise that the locus where knowledge is created does not necessarily always equal the locus of innovation they need not both be found within the company. Experience has furthermore shown that neither the locus of innovation nor exploitation need lie within companies’ own boundaries. However, emulation of the open innovation approach transforms a company’s solid boundaries into a semi-permeable membrane that enables innovation to move more easily between the external environment and the company’s internal innovation process. How far the open innovation approach is implemented in practice and whether there are identifiable patterns were the questions we investigated with our empirical study. Based on our own empirical database of 124 companies, we identified three core open innovation processes: (1) The outside-in process: Enriching a company’s own knowledge base through the integration of suppliers, customers, and external knowledge sourcing can increase a company’s innovativeness. (2) The inside-out process: The external exploitation of ideas in different markets, selling IP and multiplying technology by channelling ideas to the external environment. (3) The coupled process: Linking outside-in and inside-out by working in alliances with complementary companies during which give and take are crucial for success. 
Consequent thinking along the whole value chain and new business models enable this core process.", "title": "" }, { "docid": "d3fc62a9858ddef692626b1766898c9f", "text": "In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.", "title": "" }, { "docid": "b0988b5d33bf97ac4eba7365bce055bd", "text": "This research investigates audience experience of empathy with a performer during a digitally mediated performance. Theatrical performance necessitates social interaction between performers and audience. We present a performance-based study that explores audience awareness of performer's kinaesthetic activity in 2 ways: by isolating the audience's senses (visual, auditory, and kinaesthetic) and by focusing audience perception through defamiliarization. By positioning the performer behind the audience: in their 'backspace', we focus the audience's attention to the performer in an unfamiliar way. We describe two research contributions to the study of audience empathic experience during performance. The first is the development of a phenomenological interview method designed for extracting empirical evaluations of experience of audience members in a performance scenario. The second is a descriptive model for a poetics of reception. Our model is based on an empathetic audience-performer relationship that includes 3 components of audience awareness: contextual, interpersonal, and sense-based. Our research contributions are of particular benefit to performances involving digital media, and can provide insight into audience experience of empathy.", "title": "" }, { "docid": "27ed4433fad92baec6bbbfa003b591b6", "text": "The new generation of high-performance decimal floating-point units (DFUs) is demanding efficient implementations of parallel decimal multipliers. In this paper, we describe the architectures of two parallel decimal multipliers. The parallel generation of partial products is performed using signed-digit radix-10 or radix-5 recodings of the multiplier and a simplified set of multiplicand multiples. The reduction of partial products is implemented in a tree structure based on a decimal multioperand carry-save addition algorithm that uses unconventional (non BCD) decimal-coded number systems. We further detail these techniques and present the new improvements to reduce the latency of the previous designs, which include: optimized digit recoders for the generation of 2n-tuples (and 5-tuples), decimal carry-save adders (CSAs) combining different decimal-coded operands, and carry-free adders implemented by special designed bit counters. 
Moreover, we detail a design methodology that combines all these techniques to obtain efficient reduction trees with different area and delay trade-offs for any number of partial products generated. Evaluation results for 16-digit operands show that the proposed architectures have interesting area-delay figures compared to conventional Booth radix-4 and radix--8 parallel binary multipliers and outperform the figures of previous alternatives for decimal multiplication.", "title": "" }, { "docid": "1d8e2c9bd9cfa2ce283e01cbbcd6ca83", "text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify. In the image domain, these perturbations are often virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but fail to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.", "title": "" }, { "docid": "ff0d818dfd07033fb5eef453ba933914", "text": "Hyperplastic placentas have been reported in several experimental mouse models, including animals produced by somatic cell nuclear transfer, by inter(sub)species hybridization, and by somatic cytoplasm introduction to oocytes followed by intracytoplasmic sperm injection. Of great interest are the gross and histological features common to these placental phenotypes--despite their quite different etiologies--such as the enlargement of the spongiotrophoblast layers. To find morphological clues to the pathways leading to these similar placental phenotypes, we analyzed the ultrastructure of the three different types of hyperplastic placenta. Most cells affected were of trophoblast origin and their subcellular ultrastructural lesions were common to the three groups, e.g., a heavy accumulation of cytoplasmic vacuoles in the trophoblastic cells composing the labyrinthine wall and an increased volume of spongiotrophoblastic cells with extraordinarily dilatated rough endoplasmic reticulum. Although the numbers of trophoblastic glycogen cells were greatly increased, they maintained their normal ultrastructural morphology, including a heavy glycogen deposition throughout the cytoplasm. The fetal endothelium and small vessels were nearly intact. 
Our ultrastructural study suggests that these three types of placental hyperplasias, with different etiologies, may have common pathological pathways, which probably exclusively affect the development of certain cell types of the trophoblastic lineage during mouse placentation.", "title": "" }, { "docid": "b26d12edbd76ab6e1c5343d75ce74590", "text": "Multilanguage information retrieval promotes users to browse documents in the form of their mother language, and more and more peoples interested in retrieves short answers rather than a full document. In this paper, we present a cross-language video QA system i.e. CLVQ, which could process the English questions, and find answers in Chinese videos. The main contribution of this research are: (1) the application of QA technology into different media; and (2) adopt a new answer finding approach without human-made rules; (3) the combination of several techniques of passage retrieval algorithms. The experimental result shows 56% of answer finding. The testing collection was consists of six discovery movies, and questions are from the School of Discovery Web site.", "title": "" }, { "docid": "8218ce22ac1cccd73b942a184c819d8c", "text": "The extended SMAS facelift techniques gave plastic surgeons the ability to correct the nasolabial fold and medial cheek. Retensioning the SMAS transmits the benefit through the multilinked fibrous support system of the facial soft tissues. The effect is to provide a recontouring of the ptotic soft tissues, which fills out the cheeks as it reduces nasolabial fullness. Indirectly, dermal tightening occurs to a lesser but more natural degree than with traditional facelift surgery. Although details of current techniques may be superseded, the emerging surgical principles are becoming more clearly defined. This article presents these principles and describes the author's current surgical technique.", "title": "" }, { "docid": "429ac6709131b648bb44a6ccaebe6a19", "text": "We highlight a practical yet rarely discussed problem in dialogue state tracking (DST), namely handling unknown slot values. Previous approaches generally assume predefined candidate lists and thus are not designed to output unknown values, especially when the spoken language understanding (SLU) module is absent as in many end-to-end (E2E) systems. We describe in this paper an E2E architecture based on the pointer network (PtrNet) that can effectively extract unknown slot values while still obtains state-of-the-art accuracy on the standard DSTC2 benchmark. We also provide extensive empirical evidence to show that tracking unknown values can be challenging and our approach can bring significant improvement with the help of an effective feature dropout technique.", "title": "" }, { "docid": "ae151d8ed9b8f99cfe22e593f381dd3b", "text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. We found that certain attentional states lead people to be more susceptible to particular types of interaction. 
Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-to-face interaction, and the more total screen switches, the less productive people feel at the day's end. We present the notion of emotional homeostasis along with new directions for multitasking research.", "title": "" }, { "docid": "5378e05d2d231969877131a011b3606a", "text": "Environmental, health, and safety (EHS) concerns are receiving considerable attention in nanoscience and nanotechnology (nano) research and development (R&D). Policymakers and others have urged that research on nano's EHS implications be developed alongside scientific research in the nano domain rather than subsequent to applications. This concurrent perspective suggests the importance of early understanding and measurement of the diffusion of nano EHS research. The paper examines the diffusion of nano EHS publications, defined through a set of search terms, into the broader nano domain using a global nanotechnology R&D database developed at Georgia Tech. The results indicate that nano EHS research is growing rapidly although it is orders of magnitude smaller than the broader nano S&T domain. Nano EHS work is moderately multidisciplinary, but gaps in biomedical nano EHS's connections with environmental nano EHS are apparent. The paper discusses the implications of these results for the continued monitoring and development of the cross-disciplinary utilization of nano EHS research.", "title": "" }, { "docid": "43831e29e62c574a93b6029409690bfe", "text": "We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R^3. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.", "title": "" }, { "docid": "4b9c5c1851909ae31c4510f47cb61a60", "text": "Fraud has been very common in our society, and it affects private enterprises as well as public entities. However, in recent years, the development of new technologies has also provided criminals with more sophisticated ways to commit fraud, and it therefore requires more advanced techniques to detect and prevent such events. The types of fraud in the Telecommunication industry include: Subscription Fraud, Clip on Fraud, Call Forwarding, Cloning Fraud, Roaming Fraud, and Calling Card. Thus, detection and prevention of these frauds is one of the main objectives of the telecommunication industry. In this research, we developed a model that detects fraud in the Telecommunication sector, in which a random rough subspace based neural network ensemble method was employed in the development of the model to detect subscription fraud in mobile telecoms. This study therefore presents the development of patterns that illustrate the customers’ subscription behaviour, focusing on the identification of non-payment events. 
This information, interrelated with other features, produces the rules that lead to the predictions as early as possible to prevent revenue loss for the company by deployment of the appropriate actions.", "title": "" }, { "docid": "42c6eaae2cbdb850f634d987ab7d1cdb", "text": "The main aim of this paper is to solve a path planning problem for an autonomous mobile robot in static and dynamic environments by determining the collision-free path that satisfies the chosen criteria for shortest distance and path smoothness. The algorithm mimics the real world by adding the actual size of the mobile robot to that of the obstacles and formulating the problem as a moving point in the free-space. The proposed path planning algorithm consists of three modules: in the first module, the path planning algorithm forms an optimised path by conducting a hybridized Particle Swarm Optimization-Modified Frequency Bat (PSO-MFB) algorithm that minimises distance and follows path smoothness criteria; in the second module, any infeasible points generated by the proposed PSO-MFB Algorithm are detected by a novel Local Search (LS) algorithm and integrated with the PSO-MFB algorithm to be converted into feasible solutions; the third module features obstacle detection and avoidance (ODA), which is triggered when the mobile robot detects obstacles within its sensing region, allowing it to avoid collision with obstacles. Simulations have been carried out that indicated that this method generates a feasible path even in complex dynamic environments and thus overcomes the shortcomings of conventional approaches such as grid methods. Comparisons with previous examples in the literature are also included in the results.", "title": "" }, { "docid": "0846274e111ccd0867466bbda93f06e6", "text": "Encrypting Internet communications has been the subject of renewed focus in recent years. In order to add end-to-end encryption to legacy applications without losing the convenience of full-text search, ShadowCrypt and Mimesis Aegis use a new cryptographic technique called \"efficiently deployable efficiently searchable encryption\" (EDESE) that allows a standard full-text search system to perform searches on encrypted data. Compared to other recent techniques for searching on encrypted data, EDESE schemes leak a great deal of statistical information about the encrypted messages and the keywords they contain. Until now, the practical impact of this leakage has been difficult to quantify.\n In this paper, we show that the adversary's task of matching plaintext keywords to the opaque cryptographic identifiers used in EDESE can be reduced to the well-known combinatorial optimization problem of weighted graph matching (WGM). Using real email and chat data, we show how off-the-shelf WGM solvers can be used to accurately and efficiently recover hundreds of the most common plaintext keywords from a set of EDESE-encrypted messages. We show how to recover the tags from Bloom filters so that the WGM solver can be used with the set of encrypted messages that utilizes a Bloom filter to encode its search tags. We also show that the attack can be mitigated by carefully configuring Bloom filter parameters.", "title": "" }, { "docid": "cccb38dab9ead68b5c3bd88f03d75cb0", "text": "and multiple episodes of bleeding from esophageal and gastric varices, underwent a TIPS procedure to control the refractory gastroesophageal hemorrhage and as a bridge to liver transplantation. 
On admission, he was clinically stable, with an end-stage liver disease score of 13 and an initial total serum bilirubin of 3.7 mg/dl. The TIPS procedure was performed through the right internal jugular vein, using the standard technique9. The selected and available stent was a 10 x 68 mm self-expanding metal Wallstent (Boston Scientific Corporation, MA, USA), which was properly implanted in the liver, creating a shunt between the right hepatic vein and one of the left branches of the portal vein. The post-stent tract was dilated with a 10 mm balloon, and the control portal venogram demonstrated shunt patency with no significant opacification of the venous collateral circulation. There was a reduction in portal venous pressure from 26 to 16 mm Hg, and in the portosystemic pressure gradient from 19 to 9 mmHg. The procedure was uneventful and the patient remained in the hospital for observation. Three days later he presented with sudden jaundice without any signs of liver failure (encephalopathy) or sepsis (fever or hypotension). At this point, laboratory tests showed a total bilirubin level of 41.6 mg/dl (direct bilirubin of 28.1 mg/dl), an international normalized ratio of 1/2, alkaline phosphatase of 151 IU/l, alanine aminotransferase of 60 IU/l, aspartate aminotransferase of 104 IU/l, creatinine of 1.0 mg/dl and a total leukocyte count of 6,800/ml. Liver Doppler showed an adequate stent, with patency and anterograde flow, and no evidence of biliary tract dilatation. Abdominal computed tomography and angiography were performed and did not provide any additional information. One week later, the patient was clinically unchanged, except for worsening jaundice. There was no evidence of infection, encephalopathy or hemobilia. Although the laboratory tests were not", "title": "" }, { "docid": "170873ad959b33eea76e9f542c5dbff6", "text": "This paper reports on a development framework, two prototypes, and a comparative study in the area of multi-tag Near-Field Communication (NFC) interaction. By combining NFC with static and dynamic displays, such as posters and projections, services are made more visible and allow users to interact with them easily by interacting directly with the display with their phone. In this paper, we explore such interactions, in particular, the combination of the phone display and large NFC displays. We also compare static displays and dynamic displays, and present a list of deciding factors for a particular deployment situation. We discuss one prototype for each display type and developed a corresponding framework which can be used to accelerate the development of such prototypes whilst supporting a high level of versatility. The findings of a controlled comparative study indicate, among other things, that all participants preferred the dynamic display, although the static display has advantages, e.g. with respect to privacy and portability.", "title": "" } ]
scidocsrr
3f03fda5c31399cd902b2177c216fb62
Achieving Fast and Lightweight SDN Updates with Segment Routing
[ { "docid": "15727b1d059064d118269d0217c0c014", "text": "Segment Routing is a proposed IETF protocol to improve traffic engineering and online route selection in IP networks. The key idea in segment routing is to break up the routing path into segments in order to enable better network utilization. Segment routing also enables finer control of the routing paths and can be used to route traffic through middle boxes. This paper considers the problem of determining the optimal parameters for segment routing in the offline and online cases. We develop a traffic matrix oblivious algorithm for robust segment routing in the offline case and a competitive algorithm for online segment routing. We also show that both these algorithms work well in practice.", "title": "" } ]
[ { "docid": "226fdcdd185b2686e11732998dca31a2", "text": "Blockchain has received much attention in recent years. This immense popularity has raised a number of concerns, scalability of blockchain systems being a common one. In this paper, we seek to understand how Ethereum, a well-established blockchain system, would respond to sharding. Sharding is a prevalent technique to increase the scalability of distributed systems. To understand how sharding would affect Ethereum, we model Ethereum blockchain as a graph and evaluate five methods to partition the graph. We assess methods using three metrics: the balance among shards, the number of transactions that would involve multiple shards, and the amount of data that would be relocated across shards upon repartitioning of the graph.", "title": "" }, { "docid": "8d469e95232a8c4c8dce9aa8aee2f357", "text": "In this paper, a wearable hand exoskeleton with force-controllable and compact actuator modules is proposed. In order to apply force feedback accurately while allowing natural finger motions, the exoskeleton linkage structure with three degrees of freedom (DOFs) was designed, which was inspired by the muscular skeletal structure of the finger. As an actuating system, a series elastic actuator (SEA) mechanism, which consisted of a small linear motor, a manually designed motor driver, a spring and potentiometers, was applied. The friction of the motor was identified and compensated for obtaining a linearized model of the actuating system. Using a LQ (linear quadratic) tuned PD (proportional and derivative) controller and a disturbance observer (DOB), the proposed actuator module could generate the desired force accurately with actual finger movements. By integrating together the proposed exoskeleton structure, actuator modules and control algorithms, a wearable hand exoskeleton with force-controllable and compact actuator modules was developed to deliver accurate force to the fingertips for flexion/extension motions.", "title": "" }, { "docid": "0a66ced2f77134e7252d63843f59bfed", "text": "We study the extent to which online social networks can be connected to knowledge bases. The problem is referred to as learning social knowledge graphs. We propose a multi-modal Bayesian embedding model, GenVector, to learn latent topics that generate word embeddings and network embeddings simultaneously. GenVector leverages large-scale unlabeled data with embeddings and represents data of two modalities—i.e., social network users and knowledge concepts—in a shared latent topic space. Experiments on three datasets show that the proposed method clearly outperforms state-of-the-art methods. We then deploy the method on AMiner, an online academic search system to connect with a network of 38,049,189 researchers with a knowledge base with 35,415,011 concepts. Our method significantly decreases the error rate of learning social knowledge graphs in an online A/B test with live users.", "title": "" }, { "docid": "6331c1d288e8689ecc8b183294676b10", "text": "histories. However, many potential combinations of life-history traits do not actually occur in nature [1,2]. Indeed, the few major axes of life-history variation stand in stark contrast to the variety of selective pressures on life histories: physical conditions, seasonality and unpredictability of the environment, food availability, predators and disease organisms, and relationships within social and family groups. 
Most life-history thinking has been concerned with constrained evolutionary responses to the environment. Differences among the life histories of species are viewed commonly as having a genetic basis and reflecting the optimization of phenotypes with respect to their environments. The optimal balance between parental investment and adult self-maintenance is also influenced by the life table of the population, particularly the relative value of present and future reproduction [1,3–5]. Constraints on adaptive responses are established by the allocation of limited time, energy and nutrients among competing functions [6,7]. Relatively less attention has been paid to nongenetic responses to the environment, such as adjustment of parental investment in response to perceived risk, except for the study of phenotypic flexibility (the reaction norm) as a life-history character itself [4,8,9]. Here we argue that physiological function, including endocrine control mechanisms, mediates the relationship of the organism to its environment and therefore is essential to our understanding of the diversification of life histories. Much of the variation in life histories, particularly variation in parental investment and self-maintenance, reflects phenotypic responses of individuals to environmental stresses and perceived risks. As a result, the organization of behavioral and physiological control mechanisms might constrain individual (and evolutionary) responses and limit life-history variation among species.", "title": "" }, { "docid": "5c056ba2e29e8e33c725c2c9dd12afa8", "text": "The large amount of text data which are continuously produced over time in a variety of large scale applications such as social networks results in massive streams of data. Typically massive text streams are created by very large scale interactions of individuals, or by structured creations of particular kinds of content by dedicated organizations. An example in the latter category would be the massive text streams created by news-wire services. Such text streams provide unprecedented challenges to data mining algorithms from an efficiency perspective. In this paper, we review text stream mining algorithms for a wide variety of problems in data mining such as clustering, classification and topic modeling. A recent challenge arises in the context of social streams, which are generated by large social networks such as Twitter. We also discuss a number of future challenges in this area of research.", "title": "" }, { "docid": "aa2f0ba26e197b0c06e6ac73bac0e890", "text": "Emotion is central to the quality and range of everyday human experience. The neurobiological substrates of human emotion are now attracting increasing interest within the neurosciences motivated, to a considerable extent, by advances in functional neuroimaging techniques. An emerging theme is the question of how emotion interacts with and influences other domains of cognition, in particular attention, memory, and reasoning. The psychological consequences and mechanisms underlying the emotional modulation of cognition provide the focus of this article.", "title": "" }, { "docid": "c07a0053f43d9e1f98bb15d4af92a659", "text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. 
We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.", "title": "" }, { "docid": "aeb79f1242608bc536ff7f79319c73dd", "text": "Analysis of fossil pigments deposited in the bottom sediments of Lake Beskie, was used to assess changes in the primary productivity during the past years. Three characteristic periods of lake development were distinguished. These periods correspond with a transformation in the lake's catchment area induced by the development of agriculture. A first period was characterized by intensive inflow of allochthonous matter into the lake, due to agriculture in the catchment area, favouring soil erosion. This erosion and the subsequent increase in mineral fertilization resulted in decrease of sorption ability of the soil. This in turn led to increased leaching of nutrients into the lake which resulted in increased primary production and hypolimnetic anoxia. These high oxygen deficits were characterized by a development of photosynthetic bacteria of the genus Chlorobium, and an intensification of the lake's enrichment, mainly with phosphorus. In a final period organic fertilizers (manure) were used in the catchment area. A noticeable improvement of sorption ability of the soil occurred, migration of nutrients to the lake was inhibited, and primary productivity decreased.", "title": "" }, { "docid": "e45ba62d473dd5926e6efa2778e567ca", "text": "This contribution introduces a novel approach to cross-calibrate automotive vision and ranging sensors. The resulting sensor alignment allows the incorporation of multiple sensor data into a detection and tracking framework. Exemplarily, we show how a realtime vehicle detection system, intended for emergency breaking or ACC applications, benefits from the low level fusion of multibeam lidar and vision sensor measurements in discrimination performance and computational complexity", "title": "" }, { "docid": "0a0e4219aa1e20886e69cb1421719c4e", "text": "A wearable two-antenna system to be integrated on a life jacket and connected to Personal Locator Beacons (PLBs) of the Cospas-Sarsat system is presented. Each radiating element is a folded meandered dipole resonating at 406 MHz and includes a planar reflector realized by a metallic foil. The folded dipole and the metallic foil are attached on the opposite sides of the floating elements of the life jacket itself, so resulting in a mechanically stable antenna. The metallic foil improves antenna radiation properties even when the latter is close to the sea surface, shields the human body from EM radiation and makes the radiating system less sensitive to the human body movements. Prototypes have been realized and a measurement campaign has been carried out. The antennas show satisfactory performance also when the life jacket is worn by a user. The proposed radiating elements are intended for the use in a two-antenna scheme in which the transmitter can switch between them in order to meet Cospas-Sarsat system specifications. 
Indeed, the two antennas provide complementary radiation patterns so that Cospas-Sarsat requirements (satellite constellation coverage and EIRP profile) are fully satisfied.", "title": "" }, { "docid": "7e6bbd25c49b91fd5dc4248f3af918a7", "text": "Model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.", "title": "" }, { "docid": "7aef73379f97b69e6559d9db3955637d", "text": "The emergence and proliferation of electronic health record (EHR) systems has incrementally resulted in large volumes of clinical free text documents available across healthcare networks. The huge amount of data inspires research and development focused on novel clinical natural language processing (NLP) solutions to optimize clinical care and improve patient outcomes. In recent years, deep learning techniques have demonstrated superior performance over traditional machine learning (ML) techniques for various general-domain NLP tasks e.g. language modeling, parts-of-speech (POS) tagging, named entity recognition, paraphrase identification, sentiment analysis etc. Clinical documents pose unique challenges compared to general-domain text due to widespread use of acronyms and non-standard clinical jargons by healthcare providers, inconsistent document structure and organization, and requirement for rigorous de-identification and anonymization to ensure patient data privacy. This tutorial chapter will present an overview of how deep learning techniques can be applied to solve NLP tasks in general, followed by a literature survey of existing deep learning algorithms applied to clinical NLP problems. Finally, we include a description of various deep learning-driven clinical NLP applications developed at the Artificial Intelligence (AI) lab in Philips Research in recent years such as diagnostic inferencing from unstructured clinical narratives, relevant biomedical article retrieval based on clinical case scenarios, clinical paraphrase generation, adverse drug event (ADE) detection from social media, and medical image caption generation. Sadid A. Hasan Artificial Intelligence Lab, Philips Research North America, Cambridge, MA, USA. e-mail: [email protected] Oladimeji Farri Artificial Intelligence Lab, Philips Research North America, Cambridge, MA, USA. e-mail: [email protected]", "title": "" }, { "docid": "96a04e4fd170642fc0973808eb217ec0", "text": "Feeding provides substrate for energy metabolism, which is vital to the survival of every living animal and therefore is subject to intense regulation by brain homeostatic and hedonic systems. Over the last decade, our understanding of the circuits and molecules involved in this process has changed dramatically, in large part due to the availability of animal models with genetic lesions. In this review, we examine the role played in homeostatic regulation of feeding by systemic mediators such as leptin and ghrelin, which act on brain systems utilizing neuropeptide Y, agouti-related peptide, melanocortins, orexins, and melanin concentrating hormone, among other mediators. 
We also examine the mechanisms for taste and reward systems that provide food with its intrinsically reinforcing properties and explore the links between the homeostatic and hedonic systems that ensure intake of adequate nutrition.", "title": "" }, { "docid": "f5ac489e8e387321abd9d3839d7d8ba2", "text": "Online social networks like Slashdot bring valuable information to millions of users - but their accuracy is based on the integrity of their user base. Unfortunately, there are many “trolls” on Slashdot who post misinformation and compromise system integrity. In this paper, we develop a general algorithm called TIA (short for Troll Identification Algorithm) to classify users of an online “signed” social network as malicious (e.g. trolls on Slashdot) or benign (i.e. normal honest users). Though applicable to many signed social networks, TIA has been tested on troll detection on Slashdot Zoo under a wide variety of parameter settings. Its running time is faster than many past algorithms and it is significantly more accurate than existing methods.", "title": "" }, { "docid": "390b0dbd01e88fec7f7a4b59cb753978", "text": "In this paper, we propose a segmentation method based on normalized cut and superpixels. The method relies on color and texture cues for fast computation and efficient use of memory. The method is used for food image segmentation as part of a mobile food record system we have developed for dietary assessment and management. The accurate estimate of nutrients relies on correctly labelled food items and sufficiently well-segmented regions. Our method achieves competitive results using the Berkeley Segmentation Dataset and outperforms some of the most popular techniques in a food image dataset.", "title": "" }, { "docid": "3aab215667c71c07a2937f1b3840175f", "text": "Advances in the creation of computational materials are transforming our thinking about relations between the physical and digital. In this paper we characterize this transformation as a \"material turn\" within the field of interaction design. Central to theorizing tangibility, we advocate supporting this turn by developing a vocabulary capable of articulating strategies for computational material design. By exploring the term texture, a material property signifying relations between surfaces, structures, and forms, we demonstrate how concepts spanning the physical and digital benefit interaction design. We ground texture in case study of the Icehotel, a spectacular frozen edifice. The site demonstrates how a mundane material can be re-imagined as precious and novel. By focusing on the texture of ice, designers craft its extension into the realm of computational materiality. Tracing this process of aligning the physical and digital via the material and social construction of textures speaks back to the broader field of interaction design. It demonstrates how the process of crafting alliances between new and old materials requires both taking seriously the materialities of both, and then organizing their relation in terms of commonalities rather than differences. The result is a way of speaking about computational materials through a more textured lens.", "title": "" }, { "docid": "d9f2abb9735b449b622f94e5af346364", "text": "Abstract—The goal of this paper is to present an addressing scheme that allows for assigning a unique IPv6 address to each node in the Internet of Things (IoT) network. This scheme guarantees uniqueness by extracting the clock skew of each communication device and converting it into an IPv6 address. 
Simulation analysis confirms that the presented scheme provides reductions in terms of energy consumption, communication overhead and response time as compared to four studied addressing schemes Strong DAD, LEADS, SIPA and CLOSA.", "title": "" }, { "docid": "8385e6934af518d84416b6f84706f681", "text": "Chronic low back pain (CLBP) is a chronic pain syndrome in the lower back region, lasting for at least 3 months. CLBP represents the second leading cause of disability worldwide being a major welfare and economic problem. The prevalence of CLBP in adults has increased more than 100% in the last decade and continues to increase dramatically in the aging population, affecting both men and women in all ethnic groups, with a significant impact on functional capacity and occupational activities. It can also be influenced by psychological factors, such as stress, depression and/or anxiety. Given this complexity, the diagnostic evaluation of patients with CLBP can be very challenging and requires complex clinical decision-making. Answering the question \"what is the pain generator\" among the several structures potentially involved in CLBP is a key factor in the management of these patients, since a mis-diagnosis can generate therapeutical mistakes. Traditionally, the notion that the etiology of 80% to 90% of LBP cases is unknown has been mistaken perpetuated across decades. In most cases, low back pain can be attributed to specific pain generator, with its own characteristics and with different therapeutical opportunity. Here we discuss about radicular pain, facet Joint pain, sacro-iliac pain, pain related to lumbar stenosis, discogenic pain. Our article aims to offer to the clinicians a simple guidance to identify pain generators in a safer and faster way, relying a correct diagnosis and further therapeutical approach.", "title": "" }, { "docid": "8b95603164671fbd0d7a0c22783e1a80", "text": "This paper discusses the addition of so-called time offsets to task sets dispatched according to fixed priorities. The motivation for this work is two-fold: firstly, direct expression of time offsets is a useful structuring approach for designing complex hard real-time systems. Secondly, analysis directly addressing time offsets can be very much less pessimistic than extant analysis. In this report we extend our current fixed priority schedulability analysis, and then present two major worked examples, illustrating the approach.", "title": "" }, { "docid": "15cc3f99f267005e9c0a5ab03f2d8475", "text": "With the outburst of ecommerce sentiment-rich resources such as online review sites and blogs, people actively use this information to understand what others think about a particular subject. This area of study helps to derive the opinion, sentiment or the outlook of a speaker mainly used when conducting market research. This paper does an evaluation of the systems such as LSA and PMI that set up semantic association between aspects and opinions found in customer reviews. PMI-IR is predicted to give better results as observed in a user study.", "title": "" } ]
scidocsrr
22adfecaea567799fbe63d0fbfedec79
An under-actuated robotic arm with adjustable stiffness shape memory polymer joints
[ { "docid": "ca768eb654b323354b7d78969162cb81", "text": "Hyper-redundant manipulators can be fragile, expensive, and limited in their flexibility due to the distributed and bulky actuators that are typically used to achieve the precision and degrees of freedom (DOFs) required. Here, a manipulator is proposed that is robust, high-force, low-cost, and highly articulated without employing traditional actuators mounted at the manipulator joints. Rather, local tunable stiffness is coupled with off-board spooler motors and tension cables to achieve complex manipulator configurations. Tunable stiffness is achieved by reversible jamming of granular media, which-by applying a vacuum to enclosed grains-causes the grains to transition between solid-like states and liquid-like ones. Experimental studies were conducted to identify grains with high strength-to-weight performance. A prototype of the manipulator is presented with performance analysis, with emphasis on speed, strength, and articulation. This novel design for a manipulator-and use of jamming for robotic applications in general-could greatly benefit applications such as human-safe robotics and systems in which robots need to exhibit high flexibility to conform to their environments.", "title": "" } ]
[ { "docid": "538f1b131a9803db07ab20f202ecc96e", "text": "In this paper, we propose a direction-of-arrival (DOA) estimation method by combining multiple signal classification (MUSIC) of two decomposed linear arrays for the corresponding coprime array signal processing. The title “DECOM” means that, first, the nonlinear coprime array needs to be DECOMposed into two linear arrays, and second, Doa Estimation is obtained by COmbining the MUSIC results of the linear arrays, where the existence and uniqueness of the solution are proved. To reduce the computational complexity of DECOM, we design a two-phase adaptive spectrum search scheme, which includes a coarse spectrum search phase and then a fine spectrum search phase. Extensive simulations have been conducted and the results show that the DECOM can achieve accurate DOA estimation under different SNR conditions.", "title": "" }, { "docid": "0ef35c21af05db3f70e835dfd6564ec3", "text": "This paper details the implementation of the deep Faster R-CNN algorithm (Faster R-CNN model) for multi-object detection on Nvidia Jetson TX1 Embedded System. The Jetson TX1 device is a development board proposed by NVIDIA for applications requiring high computational performance in a low-power envelope.", "title": "" }, { "docid": "e022bcb002e2c851e697972a49c3e417", "text": "A polymer membrane-coated palladium (Pd) nanoparticle (NP)/single-layer graphene (SLG) hybrid sensor was fabricated for highly sensitive hydrogen gas (H2) sensing with gas selectivity. Pd NPs were deposited on SLG via the galvanic displacement reaction between graphene-buffered copper (Cu) and Pd ion. During the galvanic displacement reaction, graphene was used as a buffer layer, which transports electrons from Cu for Pd to nucleate on the SLG surface. The deposited Pd NPs on the SLG surface were well-distributed with high uniformity and low defects. The Pd NP/SLG hybrid was then coated with polymer membrane layer for the selective filtration of H2. Because of the selective H2 filtration effect of the polymer membrane layer, the sensor had no responses to methane, carbon monoxide, or nitrogen dioxide gas. On the contrary, the PMMA/Pd NP/SLG hybrid sensor exhibited a good response to exposure to 2% H2: on average, 66.37% response within 1.81 min and recovery within 5.52 min. In addition, reliable and repeatable sensing behaviors were obtained when the sensor was exposed to different H2 concentrations ranging from 0.025 to 2%.", "title": "" }, { "docid": "043b4305f9f3c239b0f2061b8afa0648", "text": "Proliferation of information is a major confront faced by e-commerce industry. To ease the customers from this information proliferation, Recommender Systems (RS) were introduced. To improve the computational time of a RS for large scale data, the process of recommendation can be implemented on a scalable, fault tolerant and a distributed processing framework. This paper proposes a Content-Based RS implemented on scalable, fault tolerant and distributed framework of Hadoop Map Reduce. To generate recommendations with improved computational time, the proposed technique of Map Reduce Content-Based Recommendation (MRCBR) is implemented using Hadoop Map Reduce which follows the traditional process of content-based recommendation. MRCBR technique comprises of user profiling and document feature extraction which uses the vector space model followed by computing similarity to generate recommendation for the target user. Recommendations generated for the target user is a set of Top N documents. 
The proposed technique of recommendation is executed on a cluster of Hadoop and is tested for News dataset. News items are collected using RSS feeds and are stored in MongoDB. Computational time of MRCBR is evaluated with a Speedup factor and performance is evaluated with the standard evaluation metric of Precision, Recall and F-Measure.", "title": "" }, { "docid": "a0840cf58ca21b738543924f6ed1a2f3", "text": "Emojis have been widely used in textual communications as a new way to convey nonverbal cues. An interesting observation is the various emoji usage patterns among different users. In this paper, we investigate the correlation between user personality traits and their emoji usage patterns, particularly on overall amounts and specific preferences. To achieve this goal, we build a large Twitter dataset which includes 352,245 users and over 1.13 billion tweets associated with calculated personality traits and emoji usage patterns. Our correlation and emoji prediction results provide insights into the power of diverse personalities that lead to varies emoji usage patterns as well as its potential in emoji recommendation", "title": "" }, { "docid": "73be48e8d9d50c04e6b3652953bc47de", "text": "Student video-watching behavior and quiz performance are studied in two Massive Open Online Courses (MOOCs). In doing so, two frameworks are presented by which video-watching clickstreams can be represented: one based on the sequence of events created, and another on the sequence of positions visited. With the event-based framework, recurring subsequences of student behavior are extracted, which contain fundamental characteristics such as reflecting (i.e., repeatedly playing and pausing) and revising (i.e., plays and skip backs). It is found that some of these behaviors are significantly correlated with changes in the likelihood that a student will be Correct on First Attempt (CFA) or not in answering quiz questions, and in ways that are not necessarily intuitive. Then, with the position-based framework, models of quiz performance are devised based on positions visited in a video. In evaluating these models through CFA prediction, it is found that three of them can substantially improve prediction quality, which underlines the ability to relate this type of behavior to quiz scores. Since this prediction considers videos individually, these benefits also suggest that these models are useful in situations where there is limited training data, e.g., for early detection or in short courses.", "title": "" }, { "docid": "ec6d6d6f8dc3db0bdae42ee0173b1639", "text": "AIMS\nWe investigated the population-level relationship between exposure to brand-specific advertising and brand-specific alcohol use among US youth.\n\n\nMETHODS\nWe conducted an internet survey of a national sample of 1031 youth, ages 13-20, who had consumed alcohol in the past 30 days. We ascertained all of the alcohol brands respondents consumed in the past 30 days, as well as which of 20 popular television shows they had viewed during that time period. 
Using a negative binomial regression model, we examined the relationship between aggregated brand-specific exposure to alcohol advertising on the 20 television shows [ad stock, measured in gross rating points (GRPs)] and youth brand-consumption prevalence, while controlling for the average price and overall market share of each brand.\n\n\nRESULTS\nBrands with advertising exposure on the 20 television shows had a consumption prevalence about four times higher than brands not advertising on those shows. Brand-level advertising elasticity of demand varied by exposure level, with higher elasticity in the lower exposure range. The estimated advertising elasticity of 0.63 in the lower exposure range indicates that for each 1% increase in advertising exposure, a brand's youth consumption prevalence increases by 0.63%.\n\n\nCONCLUSIONS\nAt the population level, underage youths' exposure to brand-specific advertising was a significant predictor of the consumption prevalence of that brand, independent of each brand's price and overall market share. The non-linearity of the observed relationship suggests that youth advertising exposure may need to be lowered substantially in order to decrease consumption of the most heavily advertised brands.", "title": "" }, { "docid": "4f57590f8bbf00d35b86aaa1ff476fc0", "text": "Pedestrian detection has been used in applications such as car safety, video surveillance, and intelligent vehicles. In this paper, we present a pedestrian detection scheme using HOG, LUV and optical flow features with AdaBoost Decision Stump classifier. Our experiments on Caltech-USA pedestrian dataset show that the proposed scheme achieves promising results of about 16.7% log-average miss rate.", "title": "" }, { "docid": "11d1978a3405f63829e02ccb73dcd75f", "text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.", "title": "" }, { "docid": "d9783fc67b167c8f839f840074ac8dc1", "text": "Feature selection enables the identification of important features in data sets, contributing to an eventual increase in the quality of the knowledge extracted from them. A kind of data of growing interest is the multi-labeled one, which has more than one label for each data instance. However, there is a lack of reviews about publications of feature selection to support multi-label learning. To this end, the systematic review process can be useful to identify related publications in a wide, rigorous and replicable way. This work uses the systematic review process to answer the following research question: what are the publications of feature selection in multi-labeled data? 
The systematic review process carried out in this report enabled us to select 49 relevant publications and to find some gaps in the current literature, which can inspire future research in this subject.", "title": "" }, { "docid": "438093b14f983499ada7ce392ba27664", "text": "The spline under tension was introduced by Schweikert in an attempt to imitate cubic splines but avoid the spurious critical points they induce. The defining equations are presented here, together with an efficient method for determining the necessary parameters and computing the resultant spline. The standard scalar-valued curve fitting problem is discussed, as well as the fitting of open and closed curves in the plane. The use of these curves and the importance of the tension in the fitting of contour lines are mentioned as application.", "title": "" }, { "docid": "31abdea5ff0fc543ddfd382249602cda", "text": "Named Entity Recognition (NER), an information extraction task, is typically applied to spoken documents by cascading a large vocabulary continuous speech recognizer (LVCSR) and a named entity tagger. Recognizing named entities in automatically decoded speech is difficult since LVCSR errors can confuse the tagger. This is especially true of out-of-vocabulary (OOV) words, which are often named entities and always produce transcription errors. In this work, we improve speech NER by including features indicative of OOVs based on a OOV detector, allowing for the identification of regions of speech containing named entities, even if they are incorrectly transcribed. We construct a new speech NER data set and demonstrate significant improvements for this task.", "title": "" }, { "docid": "3bee9a2d5f9e328bb07c3c76c80612fa", "text": "In this paper, we construct a complexity-based morphospace wherein one can study systems-level properties of conscious and intelligent systems based on information-theoretic measures. The axes of this space labels three distinct complexity types, necessary to classify conscious machines, namely, autonomous, cognitive and social complexity. In particular, we use this morphospace to compare biologically conscious agents ranging from bacteria, bees, C. elegans, primates and humans with artificially intelligence systems such as deep networks, multi-agent systems, social robots, AI applications such as Siri and computational systems as Watson. Given recent proposals to synthesize consciousness, a generic complexitybased conceptualization provides a useful framework for identifying defining features of distinct classes of conscious and synthetic systems. Based on current clinical scales of consciousness that measure cognitive awareness and wakefulness, this article takes a perspective on how contemporary artificially intelligent machines and synthetically engineered life forms would measure on these scales. It turns out that awareness and wakefulness can be associated to computational and autonomous complexity respectively. Subsequently, building on insights from cognitive robotics, we examine the function that consciousness serves, and argue the role of consciousness as an evolutionary game-theoretic strategy. This makes the case for a third type of complexity necessary for describing consciousness, namely, social complexity. Having identified these complexity types, allows for a representation of both, biological and synthetic systems in a common morphospace. A consequence of this classification is a taxonomy of possible conscious machines. 
In particular, we identify four types of consciousness, based on embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii) group consciousness (resulting from group interactions), and (iv) simulated consciousness (embodied by virtual agents within a simulated reality). This taxonomy helps in the investigation of comparative signatures of consciousness across domains, in order to highlight design principles necessary to engineer conscious machines. This is particularly relevant in the light of recent developments at the crossroads of cognitive neuroscience, biomedical engineering, artificial intelligence and biomimetics.", "title": "" }, { "docid": "940d18329290c990a692e866567e0025", "text": "Eye Movement Desensitization and Reprocessing (EMDR) therapy has been widely recognized as an efficacious treatment for post-traumatic stress disorder (PTSD). In the last years more insight has been gained regarding the efficacy of EMDR therapy in a broad field of mental disorders beyond PTSD. The cornerstone of EMDR therapy is its unique model of pathogenesis and change: the adaptive information processing (AIP) model. The AIP model developed by F. Shapiro has found support and differentiation in recent studies on the importance of memories in the pathogenesis of a range of mental disorders beside PTSD. However, theoretical publications or research on the application of the AIP model are still rare. The increasing acceptance of ideas that relate the origin of many mental disorders to the formation and consolidation of implicit dysfunctional memory lead to formation of the theory of pathogenic memories. Within the theory of pathogenic memories these implicit dysfunctional memories are considered to form basis of a variety of mental disorders. The theory of pathogenic memories seems compatible to the AIP model of EMDR therapy, which offers strategies to effectively access and transmute these memories leading to amelioration or resolution of symptoms. Merging the AIP model with the theory of pathogenic memories may initiate research. In consequence, patients suffering from such memory-based disorders may be earlier diagnosed and treated more effectively.", "title": "" }, { "docid": "c658e818d5f13ff939211d67bde4fc18", "text": "High-throughput studies of biological systems are rapidly accumulating a wealth of 'omics'-scale data. Visualization is a key aspect of both the analysis and understanding of these data, and users now have many visualization methods and tools to choose from. The challenge is to create clear, meaningful and integrated visualizations that give biological insight, without being overwhelmed by the intrinsic complexity of the data. In this review, we discuss how visualization tools are being used to help interpret protein interaction, gene expression and metabolic profile data, and we highlight emerging new directions.", "title": "" }, { "docid": "3e80fb154cb594dc15f5318b774cf0c3", "text": "Progressive multifocal leukoencephalopathy (PML) is a rare, subacute, demyelinating disease of the central nervous system caused by JC virus. Studies of PML from HIV Clade C prevalent countries are scarce. We sought to study the clinical, neuroimaging, and pathological features of PML in HIV Clade C patients from India. This is a prospective cum retrospective study, conducted in a tertiary care Neurological referral center in India from Jan 2001 to May 2012. 
Diagnosis was considered “definite” (confirmed by histopathology or JCV PCR in CSF) or “probable” (confirmed by MRI brain). Fifty-five patients of PML were diagnosed between January 2001 and May 2012. Complete data was available in 38 patients [mean age 39 ± 8.9 years; duration of illness 82.1 ± 74.7 days]. PML was prevalent in 2.8 % of the HIV cohort seen in our Institute. Hemiparesis was the commonest symptom (44.7 %), followed by ataxia (36.8 %). Definitive diagnosis was possible in 20 cases. Eighteen remained “probable” wherein MRI revealed multifocal, symmetric lesions, hypointense on T1, and hyperintense on T2/FLAIR. Stereotactic biopsy (n = 11) revealed demyelination, enlarged oligodendrocytes with intranuclear inclusions and astrocytosis. Immunohistochemistry revealed the presence of JC viral antigen within oligodendroglial nuclei and astrocytic cytoplasm. No differences in clinical, radiological, or pathological features were evident from PML associated with HIV Clade B. Clinical suspicion of PML was entertained in only half of the patients. Hence, a high index of suspicion is essential for diagnosis. There are no significant differences in the clinical, radiological, and pathological picture of PML between Indian and Western countries.", "title": "" }, { "docid": "8e521a935f4cc2008146e4153a2bc3b5", "text": "The research work on supply-chain management has primarily focused on the study of materials flow and very little work has been done on the study of upstream flow of money. In this paper we study the flow of money in a supply chain from the viewpoint of a supply chain partner who receives money from the downstream partners and makes payments to the upstream partners. The objective is to schedule all payments within the constraints of the receipt of the money. A penalty is to be paid if payments are not made within the specified time. Any unused money in a given period can be invested to earn an interest. The problem is computationally complex and non-intuitive because of its dynamic nature. The incoming and outgoing monetary flows never stop and are sometimes unpredictable. For tractability purposes we first develop an integer programming model to represent the static problem, where monetary in-flows and out-flows are known beforehand. We demonstrate that even the static problem is NP-Complete. First we develop a heuristic to solve this static problem. Next, the insights derived from the static problem analysis are used to develop two heuristics to solve the various levels of dynamism of the problem. The performances of all these heuristics are measured and presented. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6a2e3c783b468474ca0f67d7c5af456c", "text": "We evaluated the cytotoxic effects of four prostaglandin analogs (PGAs) used to treat glaucoma. First we established primary cultures of conjunctival stromal cells from healthy donors. Then cell cultures were incubated with different concentrations (0, 0.1, 1, 5, 25, 50 and 100%) of commercial formulations of bimatoprost, tafluprost, travoprost and latanoprost for increasing periods (5 and 30 min, 1 h, 6 h and 24 h) and cell survival was assessed with three different methods: WST-1, MTT and calcein/AM-ethidium homodimer-1 assays. Our results showed that all PGAs were associated with a certain level of cell damage, which correlated significantly with the concentration of PGA used, and to a lesser extent with culture time. 
Tafluprost tended to be less toxic than bimatoprost, travoprost and latanoprost after all culture periods. The results for WST-1, MTT and calcein/AM-ethidium homodimer-1 correlated closely. When the average lethal dose 50 was calculated, we found that the most cytotoxic drug was latanoprost, whereas tafluprost was the most sparing of the ocular surface in vitro. These results indicate the need to design novel PGAs with high effectiveness but free from the cytotoxic effects that we found, or at least to obtain drugs that are functional at low dosages. The fact that the commercial formulation of tafluprost used in this work was preservative-free may support the current tendency to eliminate preservatives from eye drops for clinical use.", "title": "" }, { "docid": "19339fa01942ad3bf33270aa1f6ceae2", "text": "This study investigated query formulations by users with Cognitive Search Intents (CSIs), which are users' needs for the cognitive characteristics of documents to be retrieved, e.g. comprehensibility, subjectivity, and concreteness. Our four main contributions are summarized as follows: (i) we proposed an example-based method of specifying search intents to observe query formulations by users without biasing them by presenting a verbalized task description; (ii) we conducted a questionnaire-based user study and found that about half our subjects did not input any keywords representing CSIs, even though they were conscious of CSIs; (iii) our user study also revealed that over 50% of subjects occasionally had experiences with searches with CSIs while our evaluations demonstrated that the performance of a current Web search engine was much lower when we not only considered users' topical search intents but also CSIs; and (iv) we demonstrated that a machine-learning-based query expansion could improve the performances for some types of CSIs. Our findings suggest users over-adapt to current Web search engines, and create opportunities to estimate CSIs with non-verbal user input.", "title": "" }, { "docid": "786d1ba82d326370684395eba5ef7cd3", "text": "A miniaturized dual-band Wilkinson power divider with a parallel LC circuit at the midpoints of two coupled-line sections is proposed in this paper. General design equations for parallel inductor L and capacitor C are derived from even- and odd-mode analysis. Generally speaking, characteristic impedances between even and odd modes are different in two coupled-line sections, and their electrical lengths are also different in inhomogeneous medium. This paper proved that a parallel LC circuit compensates for the characteristic impedance differences and the electrical length differences for dual-band operation. In other words, the proposed model provides self-compensation structure, and no extra compensation circuits are needed. Moreover, the upper limit of the frequency ratio range can be adjusted by two coupling strengths, where loose coupling for the first coupled-line section and tight coupling for the second coupled-line section are preferred for a wider frequency ratio range. Finally, an experimental circuit shows good agreement with the theoretical simulation.", "title": "" } ]
scidocsrr
c825879f8deedc2b4b35419c575119b2
Extracting sentiment as a function of discourse structure and topicality
[ { "docid": "7a180e503a0b159d545047443524a05a", "text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.", "title": "" } ]
[ { "docid": "b53273d872a0b32cef8db26a5c3c8f96", "text": "Brain region segmentation or skull stripping is an essential step in neuroimaging application such as surgical, surface reconstruction, image registration etc. The accuracy of all existing methods depends on the registration and image geometry. When this fails, the probability of success is very less. In order to avoid this, Convolutional Neural Network (CNN) is used. For brain extraction which is free from geometry and registration. CNN learned the connectedness and shape of the brain. OASIS database is used which is publicly available benchmark dataset. In this method, training phase uses 30 images and 10 images are used for testing phase. The performance of CNN results is closer to the ground truth results given by experts.", "title": "" }, { "docid": "6fbf6d6357705d8d48d94ca47ca61fa9", "text": "Driven by the rapid development of Internet and digital technologies, we have witnessed the explosive growth of Web images in recent years. Seeing that labels can reflect the semantic contents of the images, automatic image annotation, which can further facilitate the procedure of image semantic indexing, retrieval, and other image management tasks, has become one of the most crucial research directions in multimedia. Most of the existing annotation methods, heavily rely on well-labeled training data (expensive to collect) and/or single view of visual features (insufficient representative power). In this paper, inspired by the promising advance of feature engineering (e.g., CNN feature and scale-invariant feature transform feature) and inexhaustible image data (associated with noisy and incomplete labels) on the Web, we propose an effective and robust scheme, termed robust multi-view semi-supervised learning (RMSL), for facilitating image annotation task. Specifically, we exploit both labeled images and unlabeled images to uncover the intrinsic data structural information. Meanwhile, to comprehensively describe an individual datum, we take advantage of the correlated and complemental information derived from multiple facets of image data (i.e., multiple views or features). We devise a robust pairwise constraint on outcomes of different views to achieve annotation consistency. Furthermore, we integrate a robust classifier learning component via $\\ell _{2,p}$ loss, which can provide effective noise identification power during the learning process. Finally, we devise an efficient iterative algorithm to solve the optimization problem in RMSL. We conduct comprehensive experiments on three different data sets, and the results illustrate that our proposed approach is promising for automatic image annotation.", "title": "" }, { "docid": "c0d365ac6bcd199643b74a5fdceb174b", "text": "Object pose estimation is one of the crucial parts in vision-based object manipulation system using standard industrial robot manipulator, particularly in pose estimation of the end effector of the robot arm to grasp the object targeted. This paper presents the utilization of stereo vision system to estimate the 3D (3 dimensional) object position and orientation to pick up and place the object targeted in an arbitrary location within the workspace. In order to accomplish this task, a calibrated stereo camera in the eye to hand configuration is used to capture the images of the object on the left and right camera. Then, the specific object feature is extracted and the 3D position and orientation of the object are calculated using image processing algorithm. 
Finally, the end effector of robot arm equipped with gripper will pick up the object targeted according to the object pose estimation output, and then place it to the desired location. The experimental results using 6 DOF robot arm are demonstrated and show the effectiveness of the proposed approach with good performance.", "title": "" }, { "docid": "2c38b6af96d8393660c4c700b9322f7a", "text": "According to what we call the Principle of Procreative Beneficence (PB),couples who decide to have a child have a significant moral reason to select the child who, given his or her genetic endowment, can be expected to enjoy the most well-being. In the first part of this paper, we introduce PB,explain its content, grounds, and implications, and defend it against various objections. In the second part, we argue that PB is superior to competing principles of procreative selection such as that of procreative autonomy.In the third part of the paper, we consider the relation between PB and disability. We develop a revisionary account of disability, in which disability is a species of instrumental badness that is context- and person-relative.Although PB instructs us to aim to reduce disability in future children whenever possible, it does not privilege the normal. What matters is not whether future children meet certain biological or statistical norms, but what level of well-being they can be expected to have.", "title": "" }, { "docid": "5171afa49c3990e88bd5aa877966e8c2", "text": "There is a growing interest among scientists and the lay public alike in using the South American psychedelic brew, ayahuasca, to treat psychiatric disorders like depression and anxiety. Such a practice is controversial due to a style of reasoning within conventional psychiatry that sees psychedelic-induced modified states of consciousness as pathological. This article analyzes the academic literature on ayahuasca’s psychological effects to determine how this style of reasoning is shaping formal scientific discourse on ayahuasca’s therapeutic potential as a treatment for depression and anxiety. Findings from these publications suggest that different kinds of experiments are differentially affected by this style of reasoning but can nonetheless indicate some potential therapeutic utility of the ayahuasca-induced modified state of consciousness. The article concludes by suggesting ways in which conventional psychiatry’s dominant style of reasoning about psychedelic modified states of consciousness could be reconsidered. k e yword s : ayahuasca, psychedelic, hallucinogen, psychiatry, depression", "title": "" }, { "docid": "4653e7adee2817c93bf726566427b62d", "text": "The extraction of meaningful features from videos is important as they can be used in various applications. Despite its importance, video representation learning has not been studied much, because it is challenging to deal with both content and motion information. We present a Mutual Suppression network (MSnet) to learn disentangled motion and content features in videos. The MSnet is trained in such way that content features do not contain motion information and motion features do not contain content information; this is done by suppressing each other with adversarial training. We utilize the disentangled features from the MSnet for several tasks, such as frame reproduction, pixel-level video frame prediction, and dense optical flow estimation, to demonstrate the strength of MSnet. 
The proposed model outperforms the state-of-the-art methods in pixel-level video frame prediction. The source code will be publicly available.", "title": "" }, { "docid": "5487ee527ef2a9f3afe7f689156e7e4d", "text": "Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general “compare-aggregate” framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than standard neural network and neural tensor network.", "title": "" }, { "docid": "63db9ca29b2c7cd5932e0d820070ef19", "text": "In this work we present a general framework for robust error estimation in face recognition. The proposed formulation allows the simultaneous use of various loss functions for modeling the residual in face images, which usually follows non-standard distributions, depending on the image capturing conditions. Our method extends the current vast literature offering flexibility in the selection of the residual modeling characteristics but, at the same time, considering many existing algorithms as special cases. As such, it proves robust for a range of error inducing factors, such as, varying illumination, occlusion, pixel corruption, disguise or their combinations. Extensive simulations document the superiority of selecting multiple models for representing the noise term in face recognition problems, allowing the algorithm to achieve near-optimal performance in most of the tested face databases. Finally, the multi-model residual representation offers useful insights into understanding how different noise types affect face recognition rates.", "title": "" }, { "docid": "6fdee3d247a36bc7d298a7512a11118a", "text": "Fully automatic driving is emerging as the approach to dramatically improve efficiency (throughput per unit of space) while at the same time leading to the goal of zero accidents. This approach, based on fully automated vehicles, might improve the efficiency of road travel in terms of space and energy used, and in terms of service provided as well. For such automated operation, trajectory planning methods that produce smooth trajectories, with low level associated accelerations and jerk for providing human comfort, are required. This paper addresses this problem proposing a new approach that consists of introducing a velocity planning stage in the trajectory planner. Moreover, this paper presents the design and simulation evaluation of trajectory-tracking and path-following controllers for autonomous vehicles based on sliding mode control. A new design of sliding surface is proposed, such that lateral and angular errors are internally coupled with each other (in cartesian space) in a sliding surface leading to convergence of both variables.", "title": "" }, { "docid": "17bd8497b30045267f77572c9bddb64f", "text": "0007-6813/$ see front matter D 200 doi:10.1016/j.bushor.2004.11.006 * Corresponding author. E-mail addresses: [email protected] [email protected] (J. 
Mair).", "title": "" }, { "docid": "834c8c425ce231a50c307df056fe7b7f", "text": "We introduce a new model for building conditional generative models in a semisupervised setting to conditionally generate data given attributes by adapting the GAN framework. The proposed semi-supervised GAN (SS-GAN) model uses a pair of stacked discriminators to learn the marginal distribution of the data, and the conditional distribution of the attributes given the data respectively. In the semi-supervised setting, the marginal distribution (which is often harder to learn) is learned from the labeled + unlabeled data, and the conditional distribution is learned purely from the labeled data. Our experimental results demonstrate that this model performs significantly better compared to existing semi-supervised conditional GAN models.", "title": "" }, { "docid": "ec4dae5e2aa5a5ef67944d82a6324c9d", "text": "Parallel collection processing based on second-order functions such as map and reduce has been widely adopted for scalable data analysis. Initially popularized by Google, over the past decade this programming paradigm has found its way in the core APIs of parallel dataflow engines such as Hadoop's MapReduce, Spark's RDDs, and Flink's DataSets. We review programming patterns typical of these APIs and discuss how they relate to the underlying parallel execution model. We argue that fixing the abstraction leaks exposed by these patterns will reduce the cost of data analysis due to improved programmer productivity. To achieve that, we first revisit the algebraic foundations of parallel collection processing. Based on that, we propose a simplified API that (i) provides proper support for nested collection processing and (ii) alleviates the need of certain second-order primitives through comprehensions -- a declarative syntax akin to SQL. Finally, we present a metaprogramming pipeline that performs algebraic rewrites and physical optimizations which allow us to target parallel dataflow engines like Spark and Flink with competitive performance.", "title": "" }, { "docid": "73a8c38d820e204c6993974fb352d33f", "text": "Many continuous control tasks have bounded action spaces. When policy gradient methods are applied to such tasks, out-of-bound actions need to be clipped before execution, while policies are usually optimized as if the actions are not clipped. We propose a policy gradient estimator that exploits the knowledge of actions being clipped to reduce the variance in estimation. We prove that our estimator, named clipped action policy gradient (CAPG), is unbiased and achieves lower variance than the conventional estimator that ignores action bounds. Experimental results demonstrate that CAPG generally outperforms the conventional estimator, indicating that it is a better policy gradient estimator for continuous control tasks. The source code is available at https: //github.com/pfnet-research/capg.", "title": "" }, { "docid": "2273ae77207d1deb2a9fe9cb778d8613", "text": "EXECUTIVE SUMMARY Entrepreneurship research has identified a number of personal characteristics believed to be instrumental in motivating entrepreneurial behavior. Two frequently cited personal traits associated with entrepreneurial potential are internal locus of control and innovativeness. Internal locus of control has been one of the most studied psychological traits in entrepreneurship research, while innovative activity is explicit in Schumpeter’s description of the entrepreneur. 
Entrepreneurial traits have been studied extensively in the United States. However, crosscultural studies and studies in non-U.S. contexts are rare and in most cases limited to comparisons between one or two countries or cultures. Thus the question is raised: do entrepreneurial traits vary systematically across cultures and if so, why? Culture, as the underlying system of values peculiar to a specific group or society, shapes the development of certain personality traits and motivates individuals in a society to engage in behaviors that may not be evident in other societies. Hofstede’s (1980) extensive culture study, leading to the development of four culture dimensions, provide a clear articulation of differences between countries in values, beliefs,", "title": "" }, { "docid": "757cf49ed451205b6f710953e835dfc6", "text": "We consider the problem of event-related desynchronization (ERD) estimation. In existing approaches, model parameters are usually found manually through experimentation, a tedious task that often leads to suboptimal estimates. We propose an expectation-maximization (EM) algorithm for model parameter estimation that is fully automatic and gives optimal estimates. Further, we apply a Kalman smoother to obtain ERD estimates. Results show that the EM algorithm significantly improves the performance of the Kalman smoother. Application of the proposed approach to the motor-imagery EEG data shows that useful ERD patterns can be obtained even without careful selection of frequency bands.", "title": "" }, { "docid": "e86c2af47c55a574aecf474f95fb34d3", "text": "This paper presents a novel way to address the extrinsic calibration problem for a system composed of a 3D LIDAR and a camera. The relative transformation between the two sensors is calibrated via a nonlinear least squares (NLS) problem, which is formulated in terms of the geometric constraints associated with a trihedral object. Precise initial estimates of NLS are obtained by dividing it into two sub-problems that are solved individually. With the precise initializations, the calibration parameters are further refined by iteratively optimizing the NLS problem. The algorithm is validated on both simulated and real data, as well as a 3D reconstruction application. Moreover, since the trihedral target used for calibration can be either orthogonal or not, it is very often present in structured environments, making the calibration convenient.", "title": "" }, { "docid": "24dce115334261ff4561ffd3b40c4fa9", "text": "Facial expressions play a major role in psychiatric diagnosis, monitoring and treatment adjustment. We recorded 34 schizophrenia patients and matched controls during a clinical interview, and extracted the activity level of 23 facial Action Units (AUs), using 3D structured light cameras and dedicated software. By defining dynamic and intensity AUs activation characteristic features, we found evidence for blunted affect and reduced positive emotional expressions in patients. Further, we designed learning algorithms which achieved up to 85% correct schizophrenia classification rate, and significant correlation with negative symptoms severity. Our results emphasize the clinical importance of facial dynamics, and illustrate the possible advantages of employing affective computing tools in clinical settings.", "title": "" }, { "docid": "fa42192f3ffd08332e35b98019e622ff", "text": "Human immunodeficiency virus 1 (HIV-1) and other retroviruses synthesize a DNA copy of their genome after entry into the host cell. 
Integration of this DNA into the host cell's genome is an essential step in the viral replication cycle. The viral DNA is synthesized in the cytoplasm and is associated with viral and cellular proteins in a large nucleoprotein complex. Before integration into the host genome can occur, this complex must be transported to the nucleus and must cross the nuclear envelope. This Review summarizes our current knowledge of how this journey is accomplished.", "title": "" }, { "docid": "2d8f92f752bd1b4756e991a1f7e70926", "text": "We present a new method to auto-adjust camera exposure for outdoor robotics. In outdoor environments, scene dynamic range may be wider than the dynamic range of the cameras due to sunlight and skylight. This can results in failures of vision-based algorithms because important image features are missing due to under-/over-saturation. To solve the problem, we adjust camera exposure to maximize image features in the gradient domain. By exploiting the gradient domain, our method naturally determines the proper exposure needed to capture important image features in a manner that is robust against illumination conditions. The proposed method is implemented using an off-the-shelf machine vision camera and is evaluated using outdoor robotics applications. Experimental results demonstrate the effectiveness of our method, which improves the performance of robot vision algorithms.", "title": "" }, { "docid": "87ea9ac29f561c26e4e6e411f5bb538c", "text": "Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-toend deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors in space, models patient health state trajectories through explicit memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregular timed events by moderating the forgetting and consolidation of memory cells. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden – diabetes and mental health – the results show improved modeling and risk prediction accuracy.", "title": "" } ]
scidocsrr
964ae2f311e885c421c54d3ca6f8bcdf
THE SIMILARITY OF XML-BASED DOCUMENTS IN FINDING THE LEGAL INFORMATION
[ { "docid": "220acd23ebb9c69cfb9ee00b063468c6", "text": "This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of any of approximation algorithms, while also being slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.", "title": "" } ]
[ { "docid": "f845508acabb985dd80c31774776e86b", "text": "In this paper, we introduce two input devices for wearable computers, called GestureWrist and GesturePad. Both devices allow users to interact with wearable or nearby computers by using gesture-based commands. Both are designed to be as unobtrusive as possible, so they can be used under various social contexts. The first device, called GestureWrist, is a wristband-type input device that recognizes hand gestures and forearm movements. Unlike DataGloves or other hand gesture-input devices, all sensing elements are embedded in a normal wristband. The second device, called GesturePad, is a sensing module that can be attached on the inside of clothes, and users can interact with this module from the outside. It transforms conventional clothes into an interactive device without changing their appearance.", "title": "" }, { "docid": "d02aa6e16a8d9d4fd0592b9c4c7fbad5", "text": "This paper proposes a novel neural network (NN) training method that employs the hybrid exponential smoothing method and the Levenberg-Marquardt (LM) algorithm, which aims to improve the generalization capabilities of previously used methods for training NNs for short-term traffic flow forecasting. The approach uses exponential smoothing to preprocess traffic flow data by removing the lumpiness from collected traffic flow data, before employing a variant of the LM algorithm to train the NN weights of an NN model. This approach aids NN training, as the preprocessed traffic flow data are more smooth and continuous than the original unprocessed traffic flow data. The proposed method was evaluated by forecasting short-term traffic flow conditions on the Mitchell freeway in Western Australia. With regard to the generalization capabilities for short-term traffic flow forecasting, the NN models developed using the proposed approach outperform those that are developed based on the alternative tested algorithms, which are particularly designed either for short-term traffic flow forecasting or for enhancing generalization capabilities of NNs.", "title": "" }, { "docid": "2b2398bf61847843e18d1f9150a1bccc", "text": "We present a robust method for capturing articulated hand motions in realtime using a single depth camera. Our system is based on a realtime registration process that accurately reconstructs hand poses by fitting a 3D articulated hand model to depth images. We register the hand model using depth, silhouette, and temporal information. To effectively map low-quality depth maps to realistic hand poses, we regularize the registration with kinematic and temporal priors, as well as a data-driven prior built from a database of realistic hand poses. We present a principled way of integrating such priors into our registration optimization to enable robust tracking without severely restricting the freedom of motion. A core technical contribution is a new method for computing tracking correspondences that directly models occlusions typical of single-camera setups. To ensure reproducibility of our results and facilitate future research, we fully disclose the source code of our implementation.", "title": "" }, { "docid": "9acb22396046a27e5318ab4ae08f6030", "text": "Interest in graphene centres on its excellent mechanical, electrical, thermal and optical properties, its very high specific surface area, and our ability to influence these properties through chemical functionalization. 
There are a number of methods for generating graphene and chemically modified graphene from graphite and derivatives of graphite, each with different advantages and disadvantages. Here we review the use of colloidal suspensions to produce new materials composed of graphene and chemically modified graphene. This approach is both versatile and scalable, and is adaptable to a wide variety of applications.", "title": "" }, { "docid": "2e3319cf6daead166c94345c52a8389a", "text": "Due to their high energy density and low material cost, lithium-sulfur batteries represent a promising energy storage system for a multitude of emerging applications, ranging from stationary grid storage to mobile electric vehicles. This review aims to summarize major developments in the field of lithium-sulfur batteries, starting from an overview of their electrochemistry, technical challenges and potential solutions, along with some theoretical calculation results to advance our understanding of the material interactions involved. Next, we examine the most extensively-used design strategy: encapsulation of sulfur cathodes in carbon host materials. Other emerging host materials, such as polymeric and inorganic materials, are discussed as well. This is followed by a survey of novel battery configurations, including the use of lithium sulfide cathodes and lithium polysulfide catholytes, as well as recent burgeoning efforts in the modification of separators and protection of lithium metal anodes. Finally, we conclude with an outlook section to offer some insight on the future directions and prospects of lithium-sulfur batteries.", "title": "" }, { "docid": "3a47a157127d32094a20a895d4c2d8e2", "text": "In this paper we present an optimisation model for airport taxi scheduling. We introduce a mixed-integer programming formulation to represent the movement of aircraft on the surface of the airport. In the optimal schedule delays due to taxi conflicts are minimised. We discuss implementation issues for solving this optimisation problem. Numerical results with real data of Amsterdam Airport Schiphol demonstrate that the algorithms lead to significant improvements of the efficiency with reasonable computational effort.", "title": "" }, { "docid": "884ea5137f9eefa78030608097938772", "text": "In this paper, we propose a new concept - the \"Reciprocal Velocity Obstacle\"- for real-time multi-agent navigation. We consider the case in which each agent navigates independently without explicit communication with other agents. Our formulation is an extension of the Velocity Obstacle concept [3], which was introduced for navigation among (passively) moving obstacles. Our approach takes into account the reactive behavior of the other agents by implicitly assuming that the other agents make a similar collision-avoidance reasoning. We show that this method guarantees safe and oscillation- free motions for each of the agents. We apply our concept to navigation of hundreds of agents in densely populated environments containing both static and moving obstacles, and we show that real-time and scalable performance is achieved in such challenging scenarios.", "title": "" }, { "docid": "a0e536ed04d802ee5b9a6afb171995a2", "text": "This paper presents a novel method for Speaker Identification based on Vector Quantization. The Speaker Identification system consists of two phases: training phase and testing phase. Vector Quantization (VQ) is used for feature extraction in both the training and testing phases. Two variations have been used. 
In method A, codebooks are generated from the speech samples, which are converted into 16 dimensional vectors by taking a overlap of 4. In method B, codebooks are generated from the speech samples, which are converted into 16 dimensional vectors without any overlap. For speaker identification, the codebook of the test sample is similarly generated and compared with the codebooks of the reference samples stored in the database. The results obtained for both the schemes have been compared. The results show that method 2 gives slightly better results than method 1.", "title": "" }, { "docid": "81385958cac7df4cc51b35762e6c2806", "text": "DDoS attacks remain a serious threat not only to the edge of the Internet but also to the core peering links at Internet Exchange Points (IXPs). Currently, the main mitigation technique is to blackhole traffic to a specific IP prefix at upstream providers. Blackholing is an operational technique that allows a peer to announce a prefix via BGP to another peer, which then discards traffic destined for this prefix. However, as far as we know there is only anecdotal evidence of the success of blackholing. Largely unnoticed by research communities, IXPs have deployed blackholing as a service for their members. In this first-of-its-kind study, we shed light on the extent to which blackholing is used by the IXP members and what effect it has on traffic. Within a 12 week period we found that traffic to more than 7, 864 distinct IP prefixes was blackholed by 75 ASes. The daily patterns emphasize that there are not only a highly variable number of new announcements every day but, surprisingly, there are a consistently high number of announcements (> 1000). Moreover, we highlight situations in which blackholing succeeds in reducing the DDoS attack traffic.", "title": "" }, { "docid": "8dfd91ceadfcceea352975f9b5958aaf", "text": "The bag-of-words representation commonly used in text analysis can be analyzed very efficiently and retains a great deal of useful information, but it is also troublesome because the same thought can be expressed using many different terms or one term can have very different meanings. Dimension reduction can collapse together terms that have the same semantics, to identify and disambiguate terms with multiple meanings and to provide a lower-dimensional representation of documents that reflects concepts instead of raw terms. In this chapter, we survey two influential forms of dimension reduction. Latent semantic indexing uses spectral decomposition to identify a lower-dimensional representation that maintains semantic properties of the documents. Topic modeling, including probabilistic latent semantic indexing and latent Dirichlet allocation, is a form of dimension reduction that uses a probabilistic model to find the co-occurrence patterns of terms that correspond to semantic topics in a collection of documents. We describe the basic technologies in detail and expose the underlying mechanism. We also discuss recent advances that have made it possible to apply these techniques to very large and evolving text collections and to incorporate network structure or other contextual information.", "title": "" }, { "docid": "457f2508c59daaae9af818f8a6a963d1", "text": "Robotic systems hold great promise to assist with household, educational, and research tasks, but the difficulties of designing and building such robots often are an inhibitive barrier preventing their development. 
This paper presents a framework in which simple robots can be easily designed and then rapidly fabricated and tested, paving the way for greater proliferation of robot designs. The Python package presented in this work allows for the scripted generation of mechanical elements, using the principles of hierarchical structure and modular reuse to simplify the design process. These structures are then manufactured using an origami-inspired method in which precision cut sheets of plastic film are folded to achieve desired geometries. Using these processes, lightweight, low cost, rapidly built quadrotors were designed and fabricated. Flight tests compared the resulting robots against similar micro air vehicles (MAVs) generated using other processes. Despite lower tolerance and precision, robots generated using the process presented in this work took significantly less time and cost to design and build, and yielded lighter, lower power MAVs.", "title": "" }, { "docid": "a98d158c4621ee83c537dd7449db4251", "text": "This paper presents a design of simultaneous localization and mapping (SLAM) for an omni-directional mobile robot using an omni-directional camera. A method is proposed to realize visual SLAM of the omni-directional mobile robot based on extended Kalman filter (EKF). Taking advantage of the 360° view of omni-directional images, visual reference scan approach is adopted in the SLAM design. Features of previously visited places can be used repeatedly to reduce the complexity of EKF calculation. Practical experiments of the proposed self-localization and control algorithms have been carried out by using a self-constructed omni-directional mobile robot. The localization error between the start point and target point is less than 0.15m and 1° after traveling more than 40 meters in an indoor environment.", "title": "" }, { "docid": "35b668eeecb71fc1931e139a90f2fd1f", "text": "In this article we present novel learning methods for estimating the quality of results returned by a search engine in response to a query. Estimation is based on the agreement between the top results of the full query and the top results of its sub-queries. We demonstrate the usefulness of quality estimation for several applications, among them improvement of retrieval, detecting queries for which no relevant content exists in the document collection, and distributed information retrieval. Experiments on TREC data demonstrate the robustness and the effectiveness of our learning algorithms.", "title": "" }, { "docid": "e5155f7df0bc1025dcd2864b2ed53a8e", "text": "Unlike standard object classification, where the image to be classified contains one or multiple instances of the same object, indoor scene classification is quite different since the image consists of multiple distinct objects. Furthermore, these objects can be of varying sizes and are present across numerous spatial locations in different layouts. For automatic indoor scene categorization, large-scale spatial layout deformations and scale variations are therefore two major challenges and the design of rich feature descriptors which are robust to these challenges is still an open problem. This paper introduces a new learnable feature descriptor called “spatial layout and scale invariant convolutional activations” to deal with these challenges. For this purpose, a new convolutional neural network architecture is designed which incorporates a novel “spatially unstructured” layer to introduce robustness against spatial layout deformations. 
To achieve scale invariance, we present a pyramidal image representation. For feasible training of the proposed network for images of indoor scenes, this paper proposes a methodology, which efficiently adapts a trained network model (on a large-scale data) for our task with only a limited amount of available training data. The efficacy of the proposed approach is demonstrated through extensive experiments on a number of data sets, including MIT-67, Scene-15, Sports-8, Graz-02, and NYU data sets.", "title": "" }, { "docid": "ff83e090897ed7b79537392801078ffb", "text": "Component-based software engineering has had great impact in the desktop and server domain and is spreading to other domains as well, such as embedded systems. Agile software development is another approach which has gained much attention in recent years, mainly for smaller-scale production of less critical systems. Both of them promise to increase system quality, development speed and flexibility, but so far little has been published on the combination of the two approaches. This paper presents a comprehensive analysis of the applicability of the agile approach in the development processes of 1) COTS components and 2) COTS-based systems. The study method is a systematic theoretical examination and comparison of the fundamental concepts and characteristics of these approaches. The contributions are: first, an enumeration of identified contradictions between the approaches, and suggestions how to bridge these incompatibilities to some extent. Second, the paper provides some more general comments, considerations, and application guidelines concerning the introduction of agile principles into the development of COTS components or COTS-based systems. This study thus forms a framework which will guide further empirical studies.", "title": "" }, { "docid": "3d14fb9884827c13207c135a297bc147", "text": "Clavibacter michiganensis subsp. michiganensis is a Gram-positive bacterial pathogen causing bacterial wilt and canker of tomato (Solanum lycopersicum), producing economic losses worldwide. In this study, gene expression analysis was conducted using several resistant tomato-related wild species, including Solanum peruvianum LA2157, S. peruvianum LA2172, and Solanum habrochaites LA2128, and a tomato susceptible species, to identify genes involved in disease response. Using cDNA-amplified fragment length polymorphism (AFLP), 403 differentially expressed transcripts were identified. Among those, several genes showed contrasting expression patterns among resistant and susceptible species, including genes involved in the ubiquitin-mediated protein degradation pathway and secretory peroxidase. These genes were up-regulated in resistant species, but down-regulated in susceptible species, suggesting their likely involvement in early plant defense responses following C. michiganensis subsp. michiganensis infection. These identified genes would serve as new candidate bacterial wilt disease resistance genes and should be subjected to further functional analyses to determine the molecular basis of incompatibility between wild species of tomato and C. michiganensis subsp. michiganensis. This would then contribute to the development of more effective and sustainable C. michiganensis subsp. michiganensis control methods.", "title": "" }, { "docid": "185eef07170ace88d3d66593d3c5bd1b", "text": "A compact triple-band H-shaped slot antenna fed by microstrip coupling is proposed. 
Four resonant modes are excited, including a monopole mode, a slot mode, and their higher-order modes, to cover GPS (1.575 GHz) and Wi-Fi (2.4-2.485 GHz and 5.15-5.85 GHz), respectively. A sensitivity study of the slot geometry upon the resonant modes has been conducted. The measured gains at these four resonant frequencies are 0.2 dBi, 3.5 dBi, 2.37 dBi, and 3.7 dBi, respectively, and the total efficiencies are -2.5 dB, -1.07 dB, -3.06 dB, and -2.7 dB, respectively. The size of this slot antenna is only 0.24λ0×0.034λ0, where λ0 is the free-space wavelength at 1.575 GHz, hence it is suitable for installation on notebook PCs and handheld devices.", "title": "" }, { "docid": "91617f4ed1fbd5d37368caa326a91154", "text": "Different evaluation measures assess different characteristics of machine learning algorithms. The empirical evaluation of algorithms and classifiers is a matter of on-going debate among researchers. Most measures in use today focus on a classifier's ability to identify classes correctly. We note other useful properties, such as failure avoidance or class discrimination, and we suggest measures to evaluate such properties. These measures – Youden's index, likelihood, Discriminant power – are used in medical diagnosis. We show that they are interrelated, and we apply them to a case study from the field of electronic negotiations. We also list other learning problems which may benefit from the application of these measures.", "title": "" }, { "docid": "e67b9b48507dcabae92debdb9df9cb08", "text": "This paper presents an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events) and for the attitude of the writer toward their agents and objects. Work on opinion and sentiment tends to focus on explicit expressions of opinions. However, many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. We describe an annotation scheme and give the results of an inter-annotator agreement study. The annotated corpus is available online.", "title": "" }, { "docid": "ec5bdd52fa05364923cb12b3ff25a49f", "text": "A system to prevent subscription fraud in fixed telecommunications with high impact on long-distance carriers is proposed. The system consists of a classification module and a prediction module. The classification module classifies subscribers according to their previous historical behavior into four different categories: subscription fraudulent, otherwise fraudulent, insolvent and normal. The prediction module allows us to identify potential fraudulent customers at the time of subscription. The classification module was implemented using fuzzy rules. It was applied to a database containing information of over 10,000 real subscribers of a major telecom company in Chile. In this database, a subscription fraud prevalence of 2.2% was found. The prediction module was implemented as a multilayer perceptron neural network. It was able to identify 56.2% of the true fraudsters, screening only 3.5% of all the subscribers in the test set. This study shows the feasibility of significantly preventing subscription fraud in telecommunications by analyzing the application information and the customer antecedents at the time of application.", "title": "" } ]
scidocsrr
cd5e56b8253cdd3e2ebbb86a3fdb0a99
Aff2Vec: Affect-Enriched Distributional Word Representations
[ { "docid": "4ef27b194f8446065e6d336f649c0e40", "text": "Vector space representations of words capture many aspects of word similarity, but such methods tend to produce vector spaces in which antonyms (as well as synonyms) are close to each other. For spectral clustering using such word embeddings, words are points in a vector space where synonyms are linked with positive weights, while antonyms are linked with negative weights. We present a new signed spectral normalized graph cut algorithm, signed clustering, that overlays existing thesauri upon distributionally derived vector representations of words, so that antonym relationships between word pairs are represented by negative weights. Our signed clustering algorithm produces clusters of words that simultaneously capture distributional and synonym relations. By using randomized spectral decomposition (Halko et al., 2011) and sparse matrices, our method is both fast and scalable. We validate our clusters using datasets containing human judgments of word pair similarities and show the benefit of using our word clusters for sentiment prediction.", "title": "" }, { "docid": "5b3e9895359948d2190f5d8223a47045", "text": "Inferring the emotional content of words is important for text-based sentiment analysis, dialogue systems and psycholinguistics, but word ratings are expensive to collect at scale and across languages or domains. We develop a method that automatically extends word-level ratings to unrated words using signed clustering of vector space word representations along with affect ratings. We use our method to determine a word’s valence and arousal, which determine its position on the circumplex model of affect, the most popular dimensional model of emotion. Our method achieves superior out-of-sample word rating prediction on both affective dimensions across three different languages when compared to state-of-theart word similarity based methods. Our method can assist building word ratings for new languages and improve downstream tasks such as sentiment analysis and emotion detection.", "title": "" } ]
[ { "docid": "9d33565dbd5148730094a165bb2e968f", "text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.", "title": "" }, { "docid": "307e6aba44394f9a31d75464d1facc20", "text": "Combinatorial features are essential for the success of many commercial models. Manually crafting these features usually comes with high cost due to the variety, volume and velocity of raw data in web-scale systems. Factorization based models, which measure interactions in terms of vector product, can learn patterns of combinatorial features automatically and generalize to unseen features as well. With the great success of deep neural networks (DNNs) in various fields, recently researchers have proposed several DNN-based factorization model to learn both low- and high-order feature interactions. Despite the powerful ability of learning an arbitrary function from data, plain DNNs generate feature interactions implicitly and at the bit-wise level. In this paper, we propose a novel Compressed Interaction Network (CIN), which aims to generate feature interactions in an explicit fashion and at the vector-wise level. We show that the CIN share some functionalities with convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We further combine a CIN and a classical DNN into one unified model, and named this new model eXtreme Deep Factorization Machine (xDeepFM). On one hand, the xDeepFM is able to learn certain bounded-degree feature interactions explicitly; on the other hand, it can learn arbitrary low- and high-order feature interactions implicitly. We conduct comprehensive experiments on three real-world datasets. Our results demonstrate that xDeepFM outperforms state-of-the-art models. We have released the source code of xDeepFM at https://github.com/Leavingseason/xDeepFM.", "title": "" }, { "docid": "285a1c073ec4712ac735ab84cbcd1fac", "text": "During a survey of black yeasts of marine origin, some isolates of Hortaea werneckii were recovered from scuba diving equipment, such as silicone masks and snorkel mouthpieces, which had been kept under poor storage conditions. These yeasts were unambiguously identified by phenotypic and genotypic methods. 
Phylogenetic analysis of both the D1/D2 regions of 26S rRNA gene and ITS-5.8S rRNA gene sequences showed three distinct genetic types. This species is the agent of tinea nigra which is a rarely diagnosed superficial mycosis in Europe. In fact this mycosis is considered an imported fungal infection being much more prevalent in warm, humid parts of the world such as the Central and South Americas, Africa, and Asia. Although H. werneckii has been found in hypersaline environments in Europe, this is the first instance of the isolation of this halotolerant species from scuba diving equipment made with silicone rubber which is used in close contact with human skin and mucous membranes. The occurrence of this fungus in Spain is also an unexpected finding because cases of tinea nigra in this country are practically not seen.", "title": "" }, { "docid": "bffe2e95ed170506de2b18e206b8e404", "text": "Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.", "title": "" }, { "docid": "5d46da41ac5eedf68de261654dc63fd9", "text": "Convolutional Neural Networks (CNNs) need large amounts of data with ground truth annotation, which is a challenging problem that has limited the development and fast deployment of CNNs for many computer vision tasks. We propose a novel framework for depth estimation from monocular images with corresponding confidence in a selfsupervised manner. A fully differential patch-based cost function is proposed by using the Zero-Mean Normalized Cross Correlation (ZNCC) that takes multi-scale patches as a matching strategy. This approach greatly increases the accuracy and robustness of the depth learning. 
In addition, the proposed patch-based cost function can provide a 0 to 1 confidence, which is then used to supervise the training of a parallel network for confidence map learning and estimation. Evaluation on KITTI dataset shows that our method outperforms the state-of-the-art results.", "title": "" }, { "docid": "0872240a9df85e190bddc4d3f037381f", "text": "This study presents a unique synthesized set of data for community college students entering the university with the intention of earning a degree in engineering. Several cohorts of longitudinal data were combined with transcript-level data from both the community college and the university to measure graduation rates in engineering. The emphasis of the study is to determine academic variables that had significant correlations with graduation in engineering, and levels of these academic variables. The article also examines the utility of data mining methods for understanding the academic variables related to achievement in science, technology, engineering, and mathematics. The practical purpose of each model is to develop a useful strategy for policy, based on success variables, that relates to the preparation and achievement of this important group of students as they move through the community college pathway.", "title": "" }, { "docid": "fc8ab3792af939fd982fbc3a95ecb364", "text": "A critical step in treating or eradicating weed infestations amongst vegetable crops is the ability to accurately and reliably discriminate weeds from crops. In recent times, high spatial resolution hyperspectral imaging data from ground based platforms have shown particular promise in this application. Using spectral vegetation signatures to discriminate between crop and weed species has been demonstrated on several occasions in the literature over the past 15 years. A number of authors demonstrated successful per-pixel classification with accuracies of over 80%. However, the vast majority of the related literature uses supervised methods, where training datasets have been manually compiled. In practice, static training data can be particularly susceptible to temporal variability due to physiological or environmental change. A self-supervised training method that leverages prior knowledge about seeding patterns in vegetable fields has recently been introduced in the context of RGB imaging, allowing the classifier to continually update weed appearance models as conditions change. This paper combines and extends these methods to provide a self-supervised framework for hyperspectral crop/weed discrimination with prior knowledge of seeding patterns using an autonomous mobile ground vehicle. Experimental results in corn crop rows demonstrate the system's performance and limitations.", "title": "" }, { "docid": "dd05084594640b9ab87c702059f7a366", "text": "Researchers and theorists have proposed that feelings of attachment to subgroups within a larger online community or site can increase users' loyalty to the site. They have identified two types of attachment, with distinct causes and consequences. With bond-based attachment, people feel connections to other group members, while with identity-based attachment they feel connections to the group as a whole. In two experiments we show that these feelings of attachment to subgroups increase loyalty to the larger community. Communication with other people in a subgroup but not simple awareness of them increases attachment to the larger community. 
By varying how the communication is structured, between dyads or with all group members simultaneously, the experiments show that bond- and identity-based attachment have different causes. But the experiments show no evidence that bond and identity attachment have different consequences. We consider both theoretical and methodological reasons why the consequences of bond-based and identity-based attachment are so similar.", "title": "" }, { "docid": "59597ab549189c744aae774259f84745", "text": "This paper addresses the problem of multi-view people occupancy map estimation. Existing solutions either operate per-view, or rely on a background subtraction preprocessing. Both approaches lessen the detection performance as scenes become more crowded. The former does not exploit joint information, whereas the latter deals with ambiguous input due to the foreground blobs becoming more and more interconnected as the number of targets increases. Although deep learning algorithms have proven to excel on remarkably numerous computer vision tasks, such a method has not been applied yet to this problem. In large part this is due to the lack of large-scale multi-camera data-set. The core of our method is an architecture which makes use of monocular pedestrian data-set, available at larger scale than the multi-view ones, applies parallel processing to the multiple video streams, and jointly utilises it. Our end-to-end deep learning method outperforms existing methods by large margins on the commonly used PETS 2009 data-set. Furthermore, we make publicly available a new three-camera HD data-set.", "title": "" }, { "docid": "5e1d615dde71c4ca09578152e39e6741", "text": "Cognitive radio is a promising technology aiming to solve the spectrum scarcity problem by allocating the spectrum dynamically to unlicensed users. It uses the free spectrum bands which are not being used by the licensed users without causing interference to the incumbent transmission. So, spectrum sensing is the essential mechanism on which the entire communication depends. If the spectrum sensing result is violated, the entire networks activities will be disrupted. Primary User Emulation Attack (PUEA) is one of the major threats to the spectrum sensing, which decreases the spectrum access probability. In this paper, our objectives are to give the various security issues in cognitive radio networks and then to discuss the PUEA with the existing techniques to mitigate it. Keywords-cognitive radio; spectrum sensing; PUEA", "title": "" }, { "docid": "1cbd70bddd09be198f6695209786438d", "text": "In this research work a neural network based technique to be applied on condition monitoring and diagnosis of rotating machines equipped with hydrostatic self levitating bearing system is presented. Based on fluid measured data, such pressures and temperature, vibration analysis based diagnosis is being carried out by determining the vibration characteristics of the rotating machines on the basis of signal processing tasks. Required signals are achieved by conversion of measured data (fluid temperature and pressures) into virtual data (vibration magnitudes) by means of neural network functional approximation techniques.", "title": "" }, { "docid": "f414db165723f75a4991035d4dd2055d", "text": "In data centers, caches work both to provide low IO latencies and to reduce the load on the back-end network and storage. But they are not designed for multi-tenancy; system-level caches today cannot be configured to match tenant or provider objectives. 
Exacerbating the problem is the increasing number of un-coordinated caches on the IO data plane. The lack of global visibility on the control plane to coordinate this distributed set of caches leads to inefficiencies, increasing cloud provider cost.\n We present Moirai, a tenant- and workload-aware system that allows data center providers to control their distributed caching infrastructure. Moirai can help ease the management of the cache infrastructure and achieve various objectives, such as improving overall resource utilization or providing tenant isolation and QoS guarantees, as we show through several use cases. A key benefit of Moirai is that it is transparent to applications or VMs deployed in data centers. Our prototype runs unmodified OSes and databases, providing immediate benefit to existing applications.", "title": "" }, { "docid": "b756b71200a3d6be92526de18007aa2e", "text": "This paper describes the result of a thorough analysis and evaluation of the so-called FIWARE platform from a smart application development point of view. FIWARE is the result of a series of wellfunded EU projects that is currently intensively promoted throughout public agencies in Europe and world-wide. The goal was to figure out how services provided by FIWARE facilitate the development of smart applications. It was conducted first by an analysis of the central components that make up the service stack, followed by the implementation of a pilot project that aimed on using as many of these services as possible.", "title": "" }, { "docid": "20d96905880332d6ef5a33b4dd0d8827", "text": "In spite of the fact that equal opportunities for men and women have been a priority in many countries, enormous gender differences prevail in most competitive high-ranking positions. We conduct a series of controlled experiments to investigate whether women might react differently than men to competitive incentive schemes commonly used in job evaluation and promotion. We observe no significant gender difference in mean performance when participants are paid proportional to their performance. But in the competitive environment with mixed gender groups we observe a significant gender difference: the mean performance of men has a large and significant, that of women is unchanged. This gap is not due to gender differences in risk aversion. We then run the same test with homogeneous groups, to investigate whether women under-perform only when competing against men. Women do indeed increase their performance and gender differences in mean performance are now insignificant. These results may be due to lower skill of women, or more likely to the fact that women dislike competition, or alternatively that they feel less competent than their male competitors, which depresses their performance in mixed tournaments. Our last experiment provides support for this hypothesis.", "title": "" }, { "docid": "c633668d5933118db60ea1c9b79333ea", "text": "A robot exoskeleton which is inspired by the human musculoskeletal system has been developed for lower limb rehabilitation. The device was manufactured using a novel technique employing 3D printing and fiber reinforcement to make one-of-a-kind form fitting human-robot connections. Actuation of the exoskeleton is achieved using PMAs (pneumatic air muscles) and cable actuation to give the system inherent compliance while maintaining a very low mass. The entire system was modeled including a new hybrid model for PMAs. 
Simulation and experimental results for a force and impedance based trajectory tracking controller demonstrate the feasibility for using the HuREx system for gait and rehabilitation training.", "title": "" }, { "docid": "09436ce5064f5e828a0d1f1656608de3", "text": "Psychometric modeling using digital data traces is a growing field of research with a breadth of potential applications in marketing, personalization and psychological assessment. We present a novel form of digital traces for user modeling: temporal patterns of smartphone and personal computer activity. We show that some temporal activity metrics are highly correlated with certain Big Five personality metrics. We then present a machine learning method for binary classification of each Big Five personality trait using these temporal activity patterns of both computer and smartphones as model features. Our initial findings suggest that Extroversion, Openness, Agreeableness, and Neuroticism can be classified using temporal patterns of digital traces at a similar accuracy to previous research that classified personality traits using different types of digital traces.", "title": "" }, { "docid": "00fa68c8e80e565c6fc4e0fdf053bac8", "text": "This work partially reports the results of a study aiming at the design and analysis of the performance of a multi-cab metropolitan transportation system. In our model we investigate a particular multi-vehicle many-to-many dynamic request dial-a-ride problem. We present a heuristic algorithm for this problem and some preliminary results. The algorithm is based on iteratively solving a singlevehicle subproblem at optimality: a pretty efficient dynamic programming routine has been devised for this purpose. This work has been carried out by researchers from both University of Rome “Tor Vergata” and Italian Energy Research Center ENEA as a line of a reasearch program, regarding urban mobility optimization, funded by ENEA and the Italian Ministry of Environment.", "title": "" }, { "docid": "05ca0864a0d3fd56d6758d0680cd6b39", "text": "Chance constrained programming is an effective and convenient approach to control risk in decision making under uncertainty. However, due to unknown probability distributions of random parameters, the solution obtained from a chance constrained optimization problem can be biased. In addition, instead of knowing the true distributions of random parameters, in practice, only a series of historical data, which can be considered as samples taken from the true (while ambiguous) distribution, can be observed and stored. In this paper, we derive stochastic programs with data-driven chance constraints (DCCs) to tackle these problems and develop equivalent reformulations. For a given historical data set, we construct two types of confidence sets for the ambiguous distribution through nonparametric statistical estimation of its moments and density functions, depending on the amount of available data. We then formulate DCCs from the perspective of robust feasibility, by allowing the ambiguous distribution to run adversely within its confidence set. After deriving equivalent reformulations, we provide exact and approximate solution approaches for stochastic programs with DCCs under both momentbased and density-based confidence sets. 
In addition, we derive the relationship between the conservatism of DCCs and the sample size of historical data, which shows quantitatively what we call the value of data.", "title": "" }, { "docid": "2a4201c5789a546edf8944acbcf99546", "text": "Relation extraction models based on deep learning have been attracting a lot of attention recently. Little research is carried out to reduce their need of labeled training data. In this work, we propose an unsupervised pre-training method based on the sequence-to-sequence model for deep relation extraction models. The pre-trained models need only half or even less training data to achieve equivalent performance as the same models without pre-training.", "title": "" }, { "docid": "5c0d74be236f8836017dc2c1f6de16df", "text": "Person re-identification is the problem of recognizing people across images or videos from non-overlapping views. Although there has been much progress in person re-identification for the last decade, it still remains a challenging task because of severe appearance changes of a person due to diverse camera viewpoints and person poses. In this paper, we propose a novel framework for person reidentification by analyzing camera viewpoints and person poses, so-called Pose-aware Multi-shot Matching (PaMM), which robustly estimates target poses and efficiently conducts multi-shot matching based on the target pose information. Experimental results using public person reidentification datasets show that the proposed methods are promising for person re-identification under diverse viewpoints and pose variances.", "title": "" } ]
scidocsrr
a299be643ae9462a5fd6754a1d1e961d
Design of a high frequency low voltage CMOS operational amplifier
[ { "docid": "6d55978aa80f177f6a859a55380ffed8", "text": "This paper investigates the effect of lowering the supply and threshold voltages on the energy efficiency of CMOS circuits. Using a first-order model of the energy and delay of a CMOS circuit, we show that lowering the supply and threshold voltage is generally advantageous, especially when the transistors are velocity saturated and the nodes have a high activity factor. In fact, for modern submicron technologies, this simple analysis suggests optimal energy efficiency at supply voltages under 0.5 V. Other process and circuit parameters have almost no effect on this optimal operating point. If there is some uncertainty in the value of the threshold or supply voltage, however, the power advantage of this very low voltage operation diminishes. Therefore, unless active feedback is used to control the uncertainty, in the future the supply and threshold voltage will not decrease drastically, but rather will continue to scale down to maintain constant electric fields.", "title": "" } ]
[ { "docid": "d297360f609e4b03c9d70fda7cc04123", "text": "This paper describes an FPGA implementation of a single-precision floating-point multiply-accumulator (FPMAC) that supports single-cycle accumulation while maintaining high clock frequencies. A non-traditional internal representation reduces the cost of mantissa alignment within the accumulator. The FPMAC is evaluated on an Altera Stratix III FPGA.", "title": "" }, { "docid": "20cb30a452bf20c9283314decfb7eb6e", "text": "In this paper, we apply bidirectional training to a long short term memory (LSTM) network for the first time. We also present a modified, full gradient version of the LSTM learning algorithm. We discuss the significance of framewise phoneme classification to continuous speech recognition, and the validity of using bidirectional networks for online causal tasks. On the TIMIT speech database, we measure the framewise phoneme classification scores of bidirectional and unidirectional variants of both LSTM and conventional recurrent neural networks (RNNs). We find that bidirectional LSTM outperforms both RNNs and unidirectional LSTM.", "title": "" }, { "docid": "c998a930d1c1eb4c3bfd53dfc752539b", "text": "We propose a new method for semantic instance segmentation, by first computing how likely two pixels are to belong to the same object, and then by grouping similar pixels together. Our similarity metric is based on a deep, fully convolutional embedding model. Our grouping method is based on selecting all points that are sufficiently similar to a set of “seed points’, chosen from a deep, fully convolutional scoring model. We show competitive results on the Pascal VOC instance segmentation benchmark.", "title": "" }, { "docid": "4d69fbb950ffe534ace5fdbcc2951f0c", "text": "In this paper we introduce a novel single-document summarization method based on a hidden semi-Markov model. This model can naturally model single-document summarization as the optimization problem of selecting the best sequence from among the sentences in the input document under the given objective function and knapsack constraint. This advantage makes it possible for sentence selection to take the coherence of the summary into account. In addition our model can also incorporate sentence compression into the summarization process. To demonstrate the effectiveness of our method, we conduct an experimental evaluation with a large-scale corpus consisting of 12,748 pairs of a document and its reference. The results show that our method significantly outperforms the competitive baselines in terms of ROUGE evaluation, and the linguistic quality of summaries is also improved. Our method successfully mimicked the reference summaries, about 20 percent of the summaries generated by our method were completely identical to their references. Moreover, we show that large-scale training samples are quite effective for training a summarizer.", "title": "" }, { "docid": "9180fe4fc7020bee9a52aa13de3adf54", "text": "A new Depth Image Layers Separation (DILS) algorithm for synthesizing inter-view images based on disparity depth map layers representation is presented. The approach is to separate the depth map into several layers identified through histogram-based clustering. Each layer is extracted using inter-view interpolation to create objects based on location and depth. 
DILS is a new paradigm in selecting interesting image locations based on depth, but also in producing new image representations that allow objects or parts of an image to be described without the need of segmentation and identification. The image view synthesis can reduce the configuration complexity of multi-camera arrays in 3D imagery and free-viewpoint applications. The simulation results show that depth layer separation is able to create inter-view images that may be integrated with other techniques such as occlusion handling processes. The DILS algorithm can be implemented using both simple as well as sophisticated stereo matching methods to synthesize inter-view images.", "title": "" }, { "docid": "b321f3b5e814f809221bc618b99b95bb", "text": "Abstract: Polymer processes often contain state variables whose distributions are multimodal; in addition, the models for these processes are often complex and nonlinear with uncertain parameters. This presents a challenge for Kalman-based state estimators such as the ensemble Kalman filter. We develop an estimator based on a Gaussian mixture model (GMM) coupled with the ensemble Kalman filter (EnKF) specifically for estimation with multimodal state distributions. The expectation maximization algorithm is used for clustering in the Gaussian mixture model. The performance of the GMM-based EnKF is compared to that of the EnKF and the particle filter (PF) through simulations of a polymethyl methacrylate process, and it is seen that it clearly outperforms the other estimators both in state and parameter estimation. While the PF is also able to handle nonlinearity and multimodality, its lack of robustness to model-plant mismatch affects its performance significantly.", "title": "" }, { "docid": "9e451fe70d74511d2cc5a58b667da526", "text": "Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research in applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean IOU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.", "title": "" }, { "docid": "210fc6e14c63c6945682cd03c984eaff", "text": "The effect of L regularization can be analyzed by doing a quadratic approximation of the objective function around the optimum (see, e.g. Goodfellow et al., 2017, Section 7.1.1). This analysis shows that L regularization rescales the parameters along the directions defined by the eigenvectors of the Hessian matrix. This scaling is equal to λi λi+α for the i-th eigenvector of eigenvalue λi. A similar analysis can be used for the L-SP regularization.", "title": "" }, { "docid": "aa234355d0b0493e1d8c7a04e7020781", "text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. 
The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.", "title": "" }, { "docid": "7e2bbd260e58d84a4be8b721cdf51244", "text": "Obesity is characterised by altered gut microbiota, low-grade inflammation and increased endocannabinoid (eCB) system tone; however, a clear connection between gut microbiota and eCB signalling has yet to be confirmed. Here, we report that gut microbiota modulate the intestinal eCB system tone, which in turn regulates gut permeability and plasma lipopolysaccharide (LPS) levels. The impact of the increased plasma LPS levels and eCB system tone found in obesity on adipose tissue metabolism (e.g. differentiation and lipogenesis) remains unknown. By interfering with the eCB system using CB(1) agonist and antagonist in lean and obese mouse models, we found that the eCB system controls gut permeability and adipogenesis. We also show that LPS acts as a master switch to control adipose tissue metabolism both in vivo and ex vivo by blocking cannabinoid-driven adipogenesis. These data indicate that gut microbiota determine adipose tissue physiology through LPS-eCB system regulatory loops and may have critical functions in adipose tissue plasticity during obesity.", "title": "" }, { "docid": "4cb475f264a8773dc502c9bfdd7b260c", "text": "Thinking about intelligent robots involves consideration of how such systems can be enabled to perceive, interpret and act in arbitrary and dynamic environments. While sensor perception and model interpretation focus on the robot's internal representation of the world rather passively, robot grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. These capabilities should also include the generation of stable grasps to safely handle even objects unknown to the robot. We believe that the key to this ability is not to select a good grasp depending on the identification of an object (e.g. as a cup), but on its shape (e.g. as a composition of shape primitives). In this paper, we envelop given 3D data points into primitive box shapes by a fit-and-split algorithm that is based on an efficient Minimum Volume Bounding Box implementation. Though box shapes are not able to approximate arbitrary data in a precise manner, they give efficient clues for planning grasps on arbitrary objects. 
We present the algorithm and experiments using the 3D grasping simulator Grasplt!.", "title": "" }, { "docid": "fa9abc74d3126e0822e7e815e135e845", "text": "Semantic interaction offers an intuitive communication mechanism between human users and complex statistical models. By shielding the users from manipulating model parameters, they focus instead on directly manipulating the spatialization, thus remaining in their cognitive zone. However, this technique is not inherently scalable past hundreds of text documents. To remedy this, we present the concept of multi-model semantic interaction, where semantic interactions can be used to steer multiple models at multiple levels of data scale, enabling users to tackle larger data problems. We also present an updated visualization pipeline model for generalized multi-model semantic interaction. To demonstrate multi-model semantic interaction, we introduce StarSPIRE, a visual text analytics prototype that transforms user interactions on documents into both small-scale display layout updates as well as large-scale relevancy-based document selection.", "title": "" }, { "docid": "46d239e66c1de735f80312d8458b131d", "text": "Cloud computing is a dynamic, scalable and payper-use distributed computing model empowering designers to convey applications amid job designation and storage distribution. Cloud computing encourages to impart a pool of virtualized computer resource empowering designers to convey applications amid job designation and storage distribution. The cloud computing mainly aims to give proficient access to remote and geographically distributed resources. As cloud technology is evolving day by day and confronts numerous challenges, one of them being uncovered is scheduling. Scheduling is basically a set of constructs constructed to have a controlling hand over the order of work to be performed by a computer system. Algorithms are vital to schedule the jobs for execution. Job scheduling algorithms is one of the most challenging hypothetical problems in the cloud computing domain area. Numerous deep investigations have been carried out in the domain of job scheduling of cloud computing. This paper intends to present the performance comparison analysis of various pre-existing job scheduling algorithms considering various parameters. This paper discusses about cloud computing and its constructs in section (i). In section (ii) job scheduling concept in cloud computing has been elaborated. In section (iii) existing algorithms for job scheduling are discussed, and are compared in a tabulated form with respect to various parameters and lastly section (iv) concludes the paper giving brief summary of the work.", "title": "" }, { "docid": "3655e688c58a719076f3605d5a9c9893", "text": "The performance of a generic pedestrian detector may drop significantly when it is applied to a specific scene due to mismatch between the source dataset used to train the detector and samples in the target scene. In this paper, we investigate how to automatically train a scene-specific pedestrian detector starting with a generic detector in video surveillance without further manually labeling any samples under a novel transfer learning framework. It tackles the problem from three aspects. (1) With a graphical representation and through exploring the indegrees from target samples to source samples, the source samples are properly re-weighted. The indegrees detect the boundary between the distributions of the source dataset and the target dataset. 
The re-weighted source dataset better matches the target scene. (2) It takes the context information from motions, scene structures and scene geometry as the confidence scores of samples from the target scene to guide transfer learning. (3) The confidence scores propagate among samples on a graph according to the underlying visual structures of samples. All these considerations are formulated under a single objective function called Confidence-Encoded SVM. At the test stage, only the appearance-based detector is used without the context cues. The effectiveness of the proposed framework is demonstrated through experiments on two video surveillance datasets. Compared with a generic pedestrian detector, it significantly improves the detection rate by 48% and 36% at one false positive per image on the two datasets respectively.", "title": "" }, { "docid": "f3275d9a400307f67101957ad00cce84", "text": "Stem cell biology has come of age. Unequivocal proof that stem cells exist in the haematopoietic system has given way to the prospective isolation of several tissue-specific stem and progenitor cells, the initial delineation of their properties and expressed genetic programmes, and the beginnings of their utility in regenerative medicine. Perhaps the most important and useful property of stem cells is that of self-renewal. Through this property, striking parallels can be found between stem cells and cancer cells: tumours may often originate from the transformation of normal stem cells, similar signalling pathways may regulate self-renewal in stem cells and cancer cells, and cancer cells may include 'cancer stem cells' — rare cells with indefinite potential for self-renewal that drive tumorigenesis.", "title": "" }, { "docid": "b4fa57fec99131cdf0cb6fc4795fce43", "text": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.", "title": "" }, { "docid": "fd5b9187c6720c3408b5c2324b03905d", "text": "Recent anchor-based deep face detectors have achieved promising performance, but they are still struggling to detect hard faces, such as small, blurred and partially occluded faces. A reason is that they treat all images and faces equally, without putting more effort on hard ones; however, many training images only contain easy faces, which are less helpful to achieve better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are (1) hard images are the images which contain at least one hard face, thus they facilitate training robust face detectors; (2) most hard faces are small faces and other types of hard faces can be easily converted to small faces by shrinking. 
We build an anchor-based deep face detector, which only output a single feature map with small anchors, to specifically learn small faces and train it by a novel hard image mining strategy. Extensive experiments have been conducted on WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on easy, medium and hard WIDER FACE val dataset respectively, which surpass the previous state-of-the-arts, especially on the hard subset. Code and model are available at https://github.com/bairdzhang/smallhardface.", "title": "" }, { "docid": "0db1e1304ec2b5d40790677c9ce07394", "text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. However, due to the limit of input length, most of previous works can only utilize lead sentences as the input to generate the abstractive summarization, which ignores crucial information of the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims at taking full advantage of the document information as much as possible. Furthermore, we present both of streamline strategy and system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on CNN/Daily Mail dataset demonstrate both our proposed strategies can significantly improve the performance of neural sentence summarization.", "title": "" }, { "docid": "dade322206eeab84bfdae7d45fe043ca", "text": "Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of nodules from cropped CT images of lung nodules. We evaluate the effectiveness of very deep convolutional neural networks at the task of expert-level lung nodule malignancy classification. Using the state-of-the-art ResNet architecture as our basis, we explore the effect of curriculum learning, transfer learning, and varying network depth on the accuracy of malignancy classification. Due to a lack of public datasets with standardized problem definitions and train/test splits, studies in this area tend to not compare directly against other existing work. This makes it hard to know the relative improvement in the new solution. In contrast, we directly compare our system against two state-of-the-art deep learning systems for nodule classification on the LIDC/IDRI dataset using the same experimental setup and data set. The results show that our system achieves the highest performance in terms of all metrics measured including sensitivity, specificity, precision, AUROC, and accuracy. The proposed method of combining deep residual learning, curriculum learning, and transfer learning translates to high nodule classification accuracy. This reveals a promising new direction for effective pulmonary nodule CAD systems that mirrors the success of recent deep learning advances in other image-based application domains.", "title": "" }, { "docid": "274186e87674920bfe98044aa0208320", "text": "Message routing in mobile delay tolerant networks inherently relies on the cooperation between nodes. In most existing routing protocols, the participation of nodes in the routing process is taken as granted. However, in reality, nodes can be unwilling to participate. 
We first show in this paper, through a set of experiments, the impact of the unwillingness of nodes to participate in existing routing protocols. Results show that in the presence of even a small proportion of nodes that do not forward messages, performance is heavily degraded. We then analyze two major reasons for the unwillingness of nodes to participate, i.e., their rational behavior (also called selfishness) and their wariness of disclosing private mobility information. Our main contribution in this paper is to survey the existing related research works that overcome these two issues. We provide a classification of the existing approaches for protocols that deal with selfish behavior. We then conduct experiments to compare the performance of these strategies for preventing different types of selfish behavior. For protocols that preserve the privacy of users, we classify the existing approaches and provide an analytical comparison of their security guarantees.", "title": "" } ]
scidocsrr
3d7e14291d4b3780f7ab429129058124
Sparse Phase Retrieval via Truncated Amplitude Flow
[ { "docid": "51d0ebd5fb727524810646c23487bbb1", "text": "We consider the problem of phase retrieval, namely, recovery of a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of Fourier phase information, this problem is ill-posed. Therefore, prior information on the signal is needed in order to enable its recovery. In this work we consider the case in which the signal is known to be sparse, i.e., it consists of a small number of nonzero elements in an appropriate basis. We propose a fast local search method for recovering a sparse signal from measurements of its Fourier transform (or other linear transform) magnitude which we refer to as GESPAR: GrEedy Sparse PhAse Retrieval. Our algorithm does not require matrix lifting, unlike previous approaches, and therefore is potentially suitable for large scale problems such as images. Simulation results indicate that GESPAR is fast and more accurate than existing techniques in a variety of settings.", "title": "" }, { "docid": "5d527ad4493860a8d96283a5c58c3979", "text": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem-finding a vector x from y, A, where y = |ATx| and |z| denotes a vector of element-wise magnitudes of z-under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.", "title": "" } ]
[ { "docid": "8698c9a18ed9173b132d122237294963", "text": "We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation. As the name suggests, DFI relies only on simple linear interpolation of deep convolutional features from pre-trained convnets. We show that despite its simplicity, DFI can perform high-level semantic transformations like make older/younger, make bespectacled, add smile, among others, surprisingly well&#x2013;sometimes even matching or outperforming the state-of-the-art. This is particularly unexpected as DFI requires no specialized network architecture or even any deep network to be trained for these tasks. DFI therefore can be used as a new baseline to evaluate more complex algorithms and provides a practical answer to the question of which image transformation tasks are still challenging after the advent of deep learning.", "title": "" }, { "docid": "d390b0e5b1892297af37659fb92c03b5", "text": "Encouraged by recent waves of successful applications of deep learning, some researchers have demonstrated the effectiveness of applying convolutional neural networks (CNN) to time series classification problems. However, CNN and other traditional methods require the input data to be of the same dimension which prevents its direct application on data of various lengths and multi-channel time series with different sampling rates across channels. Long short-term memory (LSTM), another tool in the deep learning arsenal and with its design nature, is more appropriate for problems involving time series such as speech recognition and language translation. In this paper, we propose a novel model incorporating a sequence-to-sequence model that consists two LSTMs, one encoder and one decoder. The encoder LSTM accepts input time series of arbitrary lengths, extracts information from the raw data and based on which the decoder LSTM constructs fixed length sequences that can be regarded as discriminatory features. For better utilization of the raw data, we also introduce the attention mechanism into our model so that the feature generation process can peek at the raw data and focus its attention on the part of the raw data that is most relevant to the feature under construction. We call our model S2SwA, as the short for Sequence-to-Sequence with Attention. We test S2SwA on both uni-channel and multi-channel time series datasets and show that our model is competitive with the state-of-the-art in real world tasks such as human activity recognition.", "title": "" }, { "docid": "7e6fafe512ccb0a9760fab1b14aa374f", "text": "Studying execution of concurrent real-time online systems, to identify far-reaching and hard to reproduce latency and performance problems, requires a mechanism able to cope with voluminous information extracted from execution traces. Furthermore, the workload must not be disturbed by tracing, thereby causing the problematic behavior to become unreproducible.\n In order to satisfy this low-disturbance constraint, we created the LTTng kernel tracer. 
It is designed to enable safe and race-free attachment of probes virtually anywhere in the operating system, including sites executed in non-maskable interrupt context.\n In addition to being reentrant with respect to all kernel execution contexts, LTTng offers good performance and scalability, mainly due to its use of per-CPU data structures, local atomic operations as main buffer synchronization primitive, and RCU (Read-Copy Update) mechanism to control tracing.\n Given that kernel infrastructure used by the tracer could lead to infinite recursion if traced, and typically requires non-atomic synchronization, this paper proposes an asynchronous mechanism to inform the kernel that a buffer is ready to read. This ensures that tracing sites do not require any kernel primitive, and therefore protects from infinite recursion.\n This paper presents the core of LTTng's buffering algorithms and measures its performance.", "title": "" }, { "docid": "ac07682e0fa700a8f0c9df025feb2c53", "text": "Today's web applications run inside a complex browser environment that is buggy, ill-specified, and implemented in different ways by different browsers. Thus, web applications that desire robustness must use a variety of conditional code paths and ugly hacks to deal with the vagaries of their runtime. Our new exokernel browser, called Atlantis, solves this problem by providing pages with an extensible execution environment. Atlantis defines a narrow API for basic services like collecting user input, exchanging network data, and rendering images. By composing these primitives, web pages can define custom, high-level execution environments. Thus, an application which does not want a dependence on Atlantis'predefined web stack can selectively redefine components of that stack, or define markup formats and scripting languages that look nothing like the current browser runtime. Unlike prior microkernel browsers like OP, and unlike compile-to-JavaScript frameworks like GWT, Atlantis is the first browsing system to truly minimize a web page's dependence on black box browser code. This makes it much easier to develop robust, secure web applications.", "title": "" }, { "docid": "652912f2cc5b2e93525cb25aec8d7c8d", "text": "This paper presents a slotted-rectangular patch antenna with proximity-coupled feed operated at dual band of millimeter-wave (mmV) frequencies, 28GHz and 38GHz. The antenna was built in multilayer substrate construct by 10-layers Low temperature Co-fiber Ceramic (LTCC) with 5 mils thickness each. The slotted-patch and thick substrate are configured to enhance the bandwidth and obtain a good result in gain as well. The bandwidth using are 21.3% at 28GHz and 13.0% at 38GHz with the direction gains are 8.63dBi and 8.62dBi at 28GHz and 38GHz respectively.", "title": "" }, { "docid": "65a8c1faa262cd428045854ffcae3fae", "text": "Extracting named entities in text and linking extracted names to a given knowledge base are fundamental tasks in applications for text understanding. Existing systems typically run a named entity recognition (NER) model to extract entity names first, then run an entity linking model to link extracted names to a knowledge base. NER and linking models are usually trained separately, and the mutual dependency between the two tasks is ignored. We propose JERL, Joint Entity Recognition and Linking, to jointly model NER and linking tasks and capture the mutual dependency between them. It allows the information from each task to improve the performance of the other. 
To the best of our knowledge, JERL is the first model to jointly optimize NER and linking tasks together completely. In experiments on the CoNLL’03/AIDA data set, JERL outperforms state-of-art NER and linking systems, and we find improvements of 0.4% absolute F1 for NER on CoNLL’03, and 0.36% absolute precision@1 for linking on AIDA.", "title": "" }, { "docid": "554d234697cd98bf790444fe630c179b", "text": "This paper presents a novel approach for search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantics similarities among documents and applies activation spreading technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together rather than clustering documents based on similar terms. A prototype is implemented and several experiments are conducted to test the prospered solution. The result of the experiment confirmed that the proposed solution achieves remarkable results in terms of precision.", "title": "" }, { "docid": "233cb91d9d3b6aefbeb065f6ad6d8e80", "text": "This thesis addresses the problem of verifying the geographic locations of Internet clients. First, we demonstrate how current state-of-the-art delay-based geolocation techniques are susceptible to evasion through delay manipulations, which involve both increasing and decreasing the Internet delays that are observed between a client and a remote measuring party. We find that delay-based techniques generally lack appropriate mechanisms to measure delays in an integrity-preserving manner. We then discuss different strategies enabling an adversary to benefit from being able to manipulate the delays. Upon analyzing the effect of these strategies on three representative delay-based techniques, we found that the strategies combined with the ability of full delay manipulation can allow an adversary to (fraudulently) control the location returned by those geolocation techniques accurately. We then propose Client Presence Verification (CPV) as a delay-based technique to verify an assertion about a client’s physical presence in a prescribed geographic region. Three verifiers geographically encapsulating a client’s asserted location are used to corroborate that assertion by measuring the delays between themselves and the client. CPV infers geographic distances from these delays and thus, using the smaller of the forward and reverse one-way delay between each verifier and the client is expected to result in a more accurate distance inference than using the conventional round-trip times. Accordingly, we devise a novel protocol for accurate one-way delay measurements between the client and the three verifiers to be used by CPV, taking into account that the client could manipulate the measurements to defeat the verification process. We evaluate CPV through extensive real-world experiments with legitimate clients (those truly present at where they asserted to be) modeled to use both wired and wireless access networks. Wired evaluation is done using the PlanetLab testbed, during which we examine various factors affecting CPV’s efficacy, such as the client’s geographical nearness to the verifiers. For wireless evaluation, we leverage the Internet delay information collected for wired clients from PlanetLab, and model additional delays representing the last-mile wireless link. 
The additional delays were generated following wireless delay distribution models studied in the literature. Again, we examine various factors that affect CPV’s efficacy, including the number of devices actively competing for the wireless media in the vicinity of a wireless legitimate CPV client. Finally, we reinforce CPV against a (hypothetical) middlebox that an adversary specifically customizes to defeat CPV (i.e., assuming an adversary that is aware of how CPV operates). We postulate that public middlebox service providers (e.g., in the form of Virtual Private Networks) would be motivated to defeat CPV if it is to be widely adopted in practice. To that end, we propose to use a Proof-ofWork mechanism that allows CPV to impose constraints, which effectively limit the number of clients (now adversaries) simultaneously colluding with that middlebox; beyond that number, CPV detects the middlebox.", "title": "" }, { "docid": "7bde5b5c0980eb2be0827cd29803e542", "text": "Image authentication verifies the originality of an image by detecting malicious manipulations. This goal is different from that of image watermarking which embeds into the image a signature surviving most manipulations. Most existing methods for image authentication treat all types of manipulation equally (i.e., as unacceptable). However, some applications demand techniques that can distinguish acceptable manipulations (e.g., compression) from malicious ones. In this paper, we describe an effective technique for image authentication, which can prevent malicious manipulations but allow JPEG lossy compression. The authentication signature is based on the invariance of the relationship between the DCT coefficients at the same position in separate blocks of an image. This relationship will be preserved when these coefficients are quantized in a JPEG compression process. Our proposed method can distinguish malicious manipulations from JPEG lossy compression regardless of how high the compression ratio is. We also show that, in different practical cases, the design of the authenticator depends on the number of recompression times, and whether the image is decoded into integral values in the pixel domain during the recompression process. Theoretical and experimental results indicate that this technique is effective for image authentication.", "title": "" }, { "docid": "a817c58408d1623cd82e243147c498ca", "text": "Very few attempts, if any, have been made to use visible light in corneal reflection approaches to the problem of gaze tracking. The reasons usually given to justify the limited application of this type of illumination are that the required image features are less accurately depicted, and that visible light may disturb the user. The aim of this paper is to show that it is possible to overcome these difficulties and build an accurate and robust gaze tracker under these circumstances. For this purpose, visible light is used to obtain the corneal reflection or glint in a way analogous to the well-known pupil center corneal reflection technique. Due to the lack of contrast, the center of the iris is tracked instead of the center of the pupil. The experiments performed in our laboratory have shown very satisfactory results, allowing free-head movement and no need of recalibration.", "title": "" }, { "docid": "26f393df2f3e7c16db2ee10d189efb37", "text": "Recently a few systems for automatically solving math word problems have reported promising results. 
However, the datasets used for evaluation have limitations in both scale and diversity. In this paper, we build a large-scale dataset which is more than 9 times the size of previous ones, and contains many more problem types. Problems in the dataset are semiautomatically obtained from community question-answering (CQA) web pages. A ranking SVM model is trained to automatically extract problem answers from the answer text provided by CQA users, which significantly reduces human annotation cost. Experiments conducted on the new dataset lead to interesting and surprising results.", "title": "" }, { "docid": "1436e4fddc73d33a6cf83abfa5c9eb02", "text": "The aim of our study was to provide a contribution to the research field of the critical success factors (CSFs) of ERP projects, with specific focus on smaller enterprises (SMEs). Therefore, we conducted a systematic literature review in order to update the existing reviews of CSFs. On the basis of that review, we led several interviews with ERP consultants experienced with ERP implementations in SMEs. As a result, we showed that all factors found in the literature also affected the success of ERP projects in SMEs. However, within those projects, technological factors gained much more importance compared to the factors that most influence the success of larger ERP projects. For SMEs, factors like the Organizational fit of the ERP system as well as ERP system tests were even more important than Top management support or Project management, which were the most important factors for large-scale companies.", "title": "" }, { "docid": "56c5ec77f7b39692d8b0d5da0e14f82a", "text": "Using tweets extracted from Twitter during the Australian 2010-2011 floods, social network analysis techniques were used to generate and analyse the online networks that emerged at that time. The aim was to develop an understanding of the online communities for the Queensland, New South Wales and Victorian floods in order to identify active players and their effectiveness in disseminating critical information. A secondary goal was to identify important online resources disseminated by these communities. Important and effective players during the Queensland floods were found to be: local authorities (mainly the Queensland Police Services), political personalities (Queensland Premier, Prime Minister, Opposition Leader, Member of Parliament), social media volunteers, traditional media reporters, and people from not-for-profit, humanitarian, and community associations. A range of important resources were identified during the Queensland flood; however, they appeared to be of a more general information nature rather than vital information and updates on the disaster. Unlike Queensland, there was no evidence of Twitter activity from the part of local authorities and the government in the New South Wales and Victorian floods. Furthermore, the level of Twitter activity during the NSW floods was almost nil. Most of the active players during the NSW and Victorian floods were volunteers who were active during the Queensland floods. 
Given the positive results obtained by the active involvement of the local authorities and government officials in Queensland, and the increasing adoption of Twitter in other parts of the world for emergency situations, it seems reasonable to push for greater adoption of Twitter from local and federal authorities Australia-wide during periods of mass emergencies.", "title": "" }, { "docid": "5b786dee43f6b2b15a53bb4f633aefb6", "text": "Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners--for example, medical institutions that may want to apply deep learning methods to clinical records--are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning.\n In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.", "title": "" }, { "docid": "bab246f8b15931501049862066fde77f", "text": "The upcoming Internet of Things will introduce large sensor networks including devices with very different propagation characteristics and power consumption demands. 5G aims to fulfill these requirements by demanding a battery lifetime of at least 10 years. To integrate smart devices that are located in challenging propagation conditions, IoT communication technologies furthermore have to support very deep coverage. NB-IoT and eMTC are designed to meet these requirements and thus paving the way to 5G. With the power saving options extended Discontinuous Reception and Power Saving Mode as well as the usage of large numbers of repetitions, NB-IoT and eMTC introduce new techniques to meet the 5G IoT requirements. In this paper, the performance of NB-IoT and eMTC is evaluated. Therefore, data rate, power consumption, latency and spectral efficiency are examined in different coverage conditions. 
Although both technologies use the same power saving techniques as well as repetitions to extend the communication range, the analysis reveals a different performance in the context of data size, rate and coupling loss. While eMTC comes with a 4% better battery lifetime than NB-IoT when considering 144 dB coupling loss, NB-IoT battery lifetime raises to 18% better performance in 164 dB coupling loss scenarios. The overall analysis shows that in coverage areas with a coupling loss of 155 dB or less, eMTC performs better, but requires much more bandwidth. Taking the spectral efficiency into account, NB-IoT is in all evaluated scenarios the better choice and more suitable for future networks with massive numbers of devices.", "title": "" }, { "docid": "08faae46f98a8eab45049c9d3d7aa48e", "text": "One of the assumptions of attachment theory is that individual differences in adult attachment styles emerge from individuals' developmental histories. To examine this assumption empirically, the authors report data from an age 18 follow-up (Booth-LaForce & Roisman, 2012) of the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, a longitudinal investigation that tracked a cohort of children and their parents from birth to age 15. Analyses indicate that individual differences in adult attachment can be traced to variations in the quality of individuals' caregiving environments, their emerging social competence, and the quality of their best friendship. Analyses also indicate that assessments of temperament and most of the specific genetic polymorphisms thus far examined in the literature on genetic correlates of attachment styles are essentially uncorrelated with adult attachment, with the exception of a polymorphism in the serotonin receptor gene (HTR2A rs6313), which modestly predicted higher attachment anxiety and which revealed a Gene × Environment interaction such that changes in maternal sensitivity across time predicted attachment-related avoidance. The implications of these data for contemporary perspectives and debates concerning adult attachment theory are discussed.", "title": "" }, { "docid": "242cc9922b120057fe9f9066f257fb44", "text": "ion Yes No Partly Availability / Mobility No No No Fault tolerance Partly No Partly Flexibility / Event based Yes Partly Partly Uncertainty of information No No No", "title": "" }, { "docid": "36e6bf8dc6d693ca7297e20033ca6af5", "text": "The type III secretion system (TTSS) of gram-negative bacteria is responsible for delivering bacterial proteins, termed effectors, from the bacterial cytosol directly into the interior of host cells. The TTSS is expressed predominantly by pathogenic bacteria and is usually used to introduce deleterious effectors into host cells. While biochemical activities of effectors vary widely, the TTSS apparatus used to deliver these effectors is conserved and shows functional complementarity for secretion and translocation. This review focuses on proteins that constitute the TTSS apparatus and on mechanisms that guide effectors to the TTSS apparatus for transport. The TTSS apparatus includes predicted integral inner membrane proteins that are conserved widely across TTSSs and in the basal body of the bacterial flagellum. It also includes proteins that are specific to the TTSS and contribute to ring-like structures in the inner membrane and includes secretin family members that form ring-like structures in the outer membrane. 
Most prominently situated on these coaxial, membrane-embedded rings is a needle-like or pilus-like structure that is implicated as a conduit for effector translocation into host cells. A short region of mRNA sequence or protein sequence in effectors acts as a signal sequence, directing proteins for transport through the TTSS. Additionally, a number of effectors require the action of specific TTSS chaperones for efficient and physiologically meaningful translocation into host cells. Numerous models explaining how effectors are transported into host cells have been proposed, but understanding of this process is incomplete and this topic remains an active area of inquiry.", "title": "" }, { "docid": "012ac031a519d6e96d479b25a41afcdb", "text": "is one of the most comprehensively studied ingredients in the food supply. Yet, despite our considerable knowledge of caffeine and centuries of safe consumption in foods and beverages, questions and misperceptions about the potential health effects associated with caffeine persist. This Review provides up-to-date information on caffeine, examines its safety and summarizes the most recent key research conducted on caffeine and health. EXECUTIVE SUMMARY Caffeine is added to soft drinks as a flavoring agent; it imparts a bitterness that modifies the flavors of other components, both sour and sweet. Although there has been controversy as to its effectiveness in this role, a review of the literature suggests that caffeine does, in fact, contribute to the sensory appeal of soft drinks. [Drewnowski, 2001] Moderate intake of 300 mg/day (about three cups of coffee per day) of caffeine does not cause adverse health effects in healthy adults, although some groups, including those with hypertension and the elderly, may be more vulnerable. Also, regular consumers of coffee and other caffeinated beverages may experience some undesirable, but mild, short-lived symptoms if they stop consuming caffeine , particularly if the cessation is abrupt. However, there is little evidence of health risks of caffeine consumption. In fact, some evidence of health benefits exists for adults who consume moderate amounts of caffeine. Caffeine consumption may help reduce the risk of several chronic diseases, including diabetes, Parkinson's disease, liver disease, and colorectal cancer, as well as improve immune function. Large prospective cohort studies in the Netherlands, Finland, Sweden, and the United States have found caffeine consumption is associated with reduced risk of developing type 2 diabetes, although the mechanisms are unclear. Several other cohort studies have found that caffeine consumption from coffee and other beverages decreases the risk of Parkinson's Disease in men, as well as in women who have never used post-menopausal hormone replacement therapy. Epidemiological studies also suggest that coffee consumption may decrease the risk of liver injury, cirrhosis and hepatocellular carcinoma (liver cancer), although the reasons for these results have not been determined. In addition, coffee consumption appears to reduce the risk of colorectal cancer, but this has not generally been confirmed in prospective cohort studies. An anti-inflammatory effect has also been observed in a number of studies on caffeine's impact on the immune system. Most studies have found that caffeine consumption does not significantly increase the risk of coronary heart disease (CHD) or stroke. …", "title": "" } ]
scidocsrr
3eaab5353c939c20ace70ac2b83bcfed
Feature location in source code: a taxonomy and survey
[ { "docid": "7e788eb9ff8fd10582aa94a89edb10a2", "text": "This paper recasts the problem of feature location in source code as a decision-making problem in the presence of uncertainty. The solution to the problem is formulated as a combination of the opinions of different experts. The experts in this work are two existing techniques for feature location: a scenario-based probabilistic ranking of events and an information-retrieval-based technique that uses latent semantic indexing. The combination of these two experts is empirically evaluated through several case studies, which use the source code of the Mozilla Web browser and the Eclipse integrated development environment. The results show that the combination of experts significantly improves the effectiveness of feature location as compared to each of the experts used independently", "title": "" } ]
[ { "docid": "2a487ff4b9218900e9a0e480c23e4c25", "text": "5.1 CONVENTIONAL ACTUATORS, SHAPE MEMORY ALLOYS, AND ELECTRORHEOLOGICAL FLUIDS ............................................................................................................................................................. 1 5.1.", "title": "" }, { "docid": "2bf9e347e163d97c023007f4cc88ab02", "text": "State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels.", "title": "" }, { "docid": "e1adfaf4af1e4fb5d0101a157039ccfe", "text": "Platelet-rich fibrin (PRF) belongs to a new generation of platelet concentrates, with simplified processing and without biochemical blood handling. In this second article, we investigate the platelet-associated features of this biomaterial. During PRF processing by centrifugation, platelets are activated and their massive degranulation implies a very significant cytokine release. Concentrated platelet-rich plasma platelet cytokines have already been quantified in many technologic configurations. To carry out a comparative study, we therefore undertook to quantify PDGF-BB, TGFbeta-1, and IGF-I within PPP (platelet-poor plasma) supernatant and PRF clot exudate serum. These initial analyses revealed that slow fibrin polymerization during PRF processing leads to the intrinsic incorporation of platelet cytokines and glycanic chains in the fibrin meshes. This result would imply that PRF, unlike the other platelet concentrates, would be able to progressively release cytokines during fibrin matrix remodeling; such a mechanism might explain the clinically observed healing properties of PRF.", "title": "" }, { "docid": "a33d1e37fc8c9ceccf67b65902a6366a", "text": "Invariant representations in object recognition systems are generally obtained by pooling feature vectors over spatially local neighborhoods. But pooling is not local in the feature vector space, so that widely dissimilar features may be pooled together if they are in nearby locations. Recent approaches rely on sophisticated encoding methods and more specialized codebooks (or dictionaries), e.g., learned on subsets of descriptors which are close in feature space, to circumvent this problem. In this work, we argue that a common trait found in much recent work in image recognition or retrieval is that it leverages locality in feature space on top of purely spatial locality. We propose to apply this idea in its simplest form to an object recognition system based on the spatial pyramid framework, to increase the performance of small dictionaries with very little added engineering. State-of-the-art results on several object recognition benchmarks show the promise of this approach.", "title": "" }, { "docid": "825888e4befcbf6b492143a13928a34e", "text": "Sentiment analysis is one of the prominent fields of data mining that deals with the identification and analysis of sentimental contents generally available at social media. Twitter is one of such social medias used by many users about some topics in the form of tweets. 
These tweets can be analyzed to find the viewpoints and sentiments of the users by using clustering-based methods. However, due to the subjective nature of the Twitter datasets, metaheuristic-based clustering methods outperforms the traditional methods for sentiment analysis. Therefore, this paper proposes a novel metaheuristic method (CSK) which is based on K-means and cuckoo search. The proposed method has been used to find the optimum cluster-heads from the sentimental contents of Twitter dataset. The efficacy of proposed method has been tested on different Twitter datasets and compared with particle swarm optimization, differential evolution, cuckoo search, improved cuckoo search, gauss-based cuckoo search, and two n-grams methods. Experimental results and statistical analysis validate that the proposed method outperforms the existing methods. The proposed method has theoretical implications for the future research to analyze the data generated through social networks/medias. This method has also very generalized practical implications for designing a system that can provide conclusive reviews on any social issues. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "934bdd758626ec37241cffba8e2cbeb9", "text": "The combination of GPS/INS provides an ideal navigation system of full capability of continuously outputting position, velocity, and attitude of the host platform. However, the accuracy of INS degrades with time when GPS signals are blocked in environments such as tunnels, dense urban canyons and indoors. To dampen down the error growth, the INS sensor errors should be properly estimated and compensated before the inertial data are involved in the navigation computation. Therefore appropriate modelling of the INS sensor errors is a necessity. Allan Variance (AV) is a simple and efficient method for verifying and modelling these errors by representing the root mean square (RMS) random drift error as a function of averaging time. The AV can be used to determine the characteristics of different random processes. This paper applies the AV to analyse and model different types of random errors residing in the measurements of MEMS inertial sensors. The derived error model will be further applied to a low-cost GPS/MEMS-INS system once the correctness of the model is verified. The paper gives the detail of the AV analysis as well as presents the test results.", "title": "" }, { "docid": "902ca8c9a7cd8384143654ee302eca82", "text": "The Paper presents the outlines of the Field Programmable Gate Array (FPGA) implementation of Real Time speech enhancement by Spectral Subtraction of acoustic noise using Dynamic Moving Average Method. It describes an stand alone algorithm for Speech Enhancement and presents a architecture for the implementation. The traditional Spectral Subtraction method can only suppress stationary acoustic noise from speech by subtracting the spectral noise bias calculated during non-speech activity, while adding the unique option of dynamic moving averaging to it, it can now periodically upgrade the estimation and cope up with changes in noise level. Signal to Noise Ratio (SNR) has been tested at different noisy environment and the improvement in SNR certifies the effectiveness of the algorithm. The FPGA implementation presented in this paper, works on streaming speech signals and can be used in factories, bus terminals, Cellular Phones, or in outdoor conferences where a large number of people have gathered. 
The Table in the Experimental Result section consolidates our claim of optimum resouce usage.", "title": "" }, { "docid": "68a6edfafb8e7dab899f8ce1f76d311c", "text": "Networks such as social networks, airplane networks, and citation networks are ubiquitous. The adjacency matrix is often adopted to represent a network, which is usually high dimensional and sparse. However, to apply advanced machine learning algorithms to network data, low-dimensional and continuous representations are desired. To achieve this goal, many network embedding methods have been proposed recently. The majority of existing methods facilitate the local information i.e. local connections between nodes, to learn the representations, while completely neglecting global information (or node status), which has been proven to boost numerous network mining tasks such as link prediction and social recommendation. Hence, it also has potential to advance network embedding. In this paper, we study the problem of preserving local and global information for network embedding. In particular, we introduce an approach to capture global information and propose a network embedding framework LOG, which can coherently model LOcal and Global information. Experimental results demonstrate the ability to preserve global information of the proposed framework. Further experiments are conducted to demonstrate the effectiveness of learned representations of the proposed framework.", "title": "" }, { "docid": "8e50613e8aab66987d650cd8763811e5", "text": "Along with the great increase of internet and e-commerce, the use of credit card is an unavoidable one. Due to the increase of credit card usage, the frauds associated with this have also increased. There are a lot of approaches used to detect the frauds. In this paper, behavior based classification approach using Support Vector Machines are employed and efficient feature extraction method also adopted. If any discrepancies occur in the behaviors transaction pattern then it is predicted as suspicious and taken for further consideration to find the frauds. Generally credit card fraud detection problem suffers from a large amount of data, which is rectified by the proposed method. Achieving finest accuracy, high fraud catching rate and low false alarms are the main tasks of this approach.", "title": "" }, { "docid": "d22c69d0c546dfb4ee5d38349bf7154f", "text": "Investigation of functional brain connectivity patterns using functional MRI has received significant interest in the neuroimaging domain. Brain functional connectivity alterations have widely been exploited for diagnosis and prediction of various brain disorders. Over the last several years, the research community has made tremendous advancements in constructing brain functional connectivity from timeseries functional MRI signals using computational methods. However, even modern machine learning techniques rely on conventional correlation and distance measures as a basic step towards the calculation of the functional connectivity. Such measures might not be able to capture the latent characteristics of raw time-series signals. To overcome this shortcoming, we propose a novel convolutional neural network based model, FCNet, that extracts functional connectivity directly from raw fMRI time-series signals. The FCNet consists of a convolutional neural network that extracts features from time-series signals and a fully connected network that computes the similarity between the extracted features in a Siamese architecture. 
The functional connectivity computed using FCNet is combined with phenotypic information and used to classify individuals as healthy controls or neurological disorder subjects. Experimental results on the publicly available ADHD-200 dataset demonstrate that this innovative framework can improve classification accuracy, which indicates that the features learnt from FCNet have superior discriminative power.", "title": "" }, { "docid": "8fbbeeae48118cfd2f77e6a7bb224c0c", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected]. American Educational Research Association is collaborating with JSTOR to digitize, preserve and extend access to Educational Researcher.", "title": "" }, { "docid": "9d93df3f1db8466283f51a0ae2d79bc8", "text": "We show that the image representations in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels. Here we instead concentrate on the internal layers of DNN representations, to produce a new class of adversarial images that differs qualitatively from others. While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to a different image, from a different class and bearing little if any apparent similarity to the input. Further, they appear generic and consistent with the space of natural images. This phenomenon demonstrates the possibility to trick a DNN to confound almost any image with any other chosen image, and raises questions about DNN representations, as well as the properties of natural images themselves.", "title": "" }, { "docid": "7b999aaaa1374499b910c3f7d0918484", "text": "Research in face recognition has largely been divided between those projects concerned with front-end image processing and those projects concerned with memory for familiar people. These perceptual and cognitive programmes of research have proceeded in parallel, with only limited mutual influence. In this paper we present a model of human face recognition which combines both a perceptual and a cognitive component. The perceptual front-end is based on principal components analysis of images, and the cognitive back-end is based on a simple interactive activation and competition architecture. We demonstrate that this model has a much wider predictive range than either perceptual or cognitive models alone, and we show that this type of combination is necessary in order to analyse some important effects in human face recognition. In sum, the model takes varying images of \"known\" faces and delivers information about these people.", "title": "" }, { "docid": "61243568f7d06ee7791307df31310ae2", "text": "As data represent a key asset for today’s organizations, the problem of how to protect this data from theft and misuse is at the forefront of these organizations’ minds. 
Even though today several data security techniques are available to protect data and computing infrastructures, many such techniques—such as firewalls and network security tools—are unable to protect data from attacks posed by those working on an organization’s “inside.” These “insiders” usually have authorized access to relevant information systems, making it extremely challenging to block the misuse of information while still allowing them to do their jobs. This book discusses several techniques that can provide effective protection against attacks posed by people working on the inside of an organization. Chapter 1 introduces the notion of insider threat and reports some data about data breaches due to insider threats. Chapter 2 covers authentication and access control techniques, and Chapter 3 shows how these general security techniques can be extended and used in the context of protection from insider threats. Chapter 4 addresses anomaly detection techniques that are used to determine anomalies in data accesses by insiders. These anomalies are often indicative of potential insider data attacks and therefore play an important role in protection from these attacks. Security information and event management (SIEM) tools and fine-grained auditing are discussed in Chapter 5. These tools aim at collecting, analyzing, and correlating—in real-time—any information and event that may be relevant for the security of an organization. As such, they can be a key element in finding a solution to such undesirable insider threats. Chapter 6 goes on to provide a survey of techniques for separation-of-duty (SoD). SoD is an important principle that, when implemented in systems and tools, can strengthen data protection from malicious insiders. However, to date, very few approaches have been proposed for implementing SoD in systems. In Chapter 7, a short survey of a commercial product is presented, which provides different techniques for protection from malicious users with system privileges—such as a DBA in database management systems. Finally, in Chapter 8, the book concludes with a few remarks and additional research directions.", "title": "" }, { "docid": "2e2e8219b7870529e8ca17025190aa1b", "text": "M multitasking competes with television advertising for consumers’ attention, but may also facilitate immediate and measurable response to some advertisements. This paper explores whether and how television advertising influences online shopping. We construct a massive data set spanning $3.4 billion in spending by 20 brands, measures of brands’ website traffic and transactions, and ad content measures for 1,224 commercials. We use a quasi-experimental design to estimate whether and how TV advertising influences changes in online shopping within two-minute pre/post windows of time. We use nonadvertising competitors’ online shopping in a difference-in-differences approach to measure the same effects in two-hour windows around the time of the ad. The findings indicate that television advertising does influence online shopping and that advertising content plays a key role. Action-focus content increases direct website traffic and sales. Information-focus and emotion-focus ad content actually reduce website traffic while simultaneously increasing purchases, with a positive net effect on sales for most brands. 
These results imply that brands seeking to attract multitaskers’ attention and dollars must select their advertising copy carefully.", "title": "" }, { "docid": "63e3be30835fd8f544adbff7f23e13ab", "text": "Deaths due to plastic bag suffocation or plastic bag asphyxia are not reported in Malaysia. In the West many suicides by plastic bag asphyxia, particularly in the elderly and those who are chronically and terminally ill, have been reported. Accidental deaths too are not uncommon in the West, both among small children who play with shopping bags and adolescents who are solvent abusers. Another well-known but not so common form of accidental death from plastic bag asphyxia is sexual asphyxia, which is mostly seen among adult males. Homicide by plastic bag asphyxia too is reported in the West and the victims are invariably infants or adults who are frail or terminally ill and who cannot struggle. Two deaths due to plastic bag asphyxia are presented. Both the autopsies were performed at the University Hospital Mortuary, Kuala Lumpur. Both victims were 50-year old married Chinese males. One death was diagnosed as suicide and the other as sexual asphyxia. Sexual asphyxia is generally believed to be a problem associated exclusively with the West. Specific autopsy findings are often absent in deaths due to plastic bag asphyxia and therefore such deaths could be missed when some interested parties have altered the scene and most importantly have removed the plastic bag. A visit to the scene of death is invariably useful.", "title": "" }, { "docid": "5201def766ed9d3b5dd7a707ab102dba", "text": "The automatic identification of security vulnerabilities is a critical issue in the development of web-based applications. We present a methodology and tool for vulnerability identification based on symbolic code execution exploiting Static Taint Analysis to improve the efficiency of the analysis. The tool targets PHP web applications, and demonstrates the effectiveness of our approach in identifying cross-site scripting and SQL injection vulnerabilities on both NIST synthetic benchmarks and real world applications. It proves to be faster and more effective than its main competitors, both open source and commercial.", "title": "" }, { "docid": "32373f4f2852531c02026ffe35dd8729", "text": "VSL#3 probiotics can be effective on induction and maintenance of the remission of clinical ulcerative colitis. However, the mechanisms are not fully understood. The aim of this study was to examine the effects of VSL#3 probiotics on dextran sulfate sodium (DSS)-induced colitis in rats. Acute colitis was induced by administration of DSS 3.5 % for 7 days in rats. Rats in two groups were treated with either 15 mg VSL#3 or placebo via gastric tube once daily after induction of colitis; rats in other two groups were treated with either the wortmannin (1 mg/kg) via intraperitoneal injection or the wortmannin + VSL#3 after induction of colitis. Anti-inflammatory activity was assessed by myeloperoxidase (MPO) activity. Expression of inflammatory related mediators (iNOS, COX-2, NF-κB, Akt, and p-Akt) and cytokines (TNF-α, IL-6, and IL-10) in colonic tissue were assessed. TNF-α, IL-6, and IL-10 serum levels were also measured. Our results demonstrated that VSL#3 and wortmannin have anti-inflammatory properties by the reduced disease activity index and MPO activity. 
In addition, administration of VSL#3 and wortmannin for 7 days resulted in a decrease of iNOS, COX-2, NF-κB, TNF-α, IL-6, and p-Akt and an increase of IL-10 expression in colonic tissue. At the same time, administration of VSL#3 and wortmannin resulted in a decrease of TNF-α and IL-6 and an increase of IL-10 serum levels. VSL#3 probiotics therapy exerts the anti-inflammatory activity in rat model of DSS-induced colitis by inhibiting PI3K/Akt and NF-κB pathway.", "title": "" }, { "docid": "50708eb1617b59f605b926583d9215bf", "text": "Due to filmmakers focusing on violence, traumatic events, and hallucinations when depicting characters with schizophrenia, critics have scrutinized the representation of mental disorders in contemporary films for years. This study compared previous research on schizophrenia with the fictional representation of the disease in contemporary films. Through content analysis, this study examined 10 films featuring a schizophrenic protagonist, tallying moments of violence and charting if they fell into four common stereotypes. Results showed a high frequency of violent behavior in films depicting schizophrenic characters, implying that those individuals are overwhelmingly dangerous and to be feared.", "title": "" }, { "docid": "7b552767a37a7d63591471195b2e002b", "text": "Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin.", "title": "" } ]
scidocsrr
e1ee3768df5a989e7aaf61ed66ca7c4d
Learning to Skim Text
[ { "docid": "2f20bca0134eb1bd9d65c4791f94ddcc", "text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "title": "" }, { "docid": "1fe8f55e2d402c5fe03176cbf83a16c3", "text": "This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying sequences of binary logic operations, adding sequences of integers, and sorting sequences of real numbers. Overall performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. When applied to character-level language modelling on the Hutter prize Wikipedia dataset, ACT yields intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could be used to infer segment boundaries in sequence data.", "title": "" } ]
[ { "docid": "0d13b8a8f7a4584bc7c1402137e79a2c", "text": "Different methods are proposed to learn phrase embedding, which can be mainly divided into two strands. The first strand is based on the distributional hypothesis to treat a phrase as one non-divisible unit and to learn phrase embedding based on its external context similar to learn word embedding. However, distributional methods cannot make use of the information embedded in component words and they also face data spareness problem. The second strand is based on the principle of compositionality to infer phrase embedding based on the embedding of its component words. Compositional methods would give erroneous result if a phrase is non-compositional. In this paper, we propose a hybrid method by a linear combination of the distributional component and the compositional component with an individualized phrase compositionality constraint. The phrase compositionality is automatically computed based on the distributional embedding of the phrase and its component words. Evaluation on five phrase level semantic tasks and experiments show that our proposed method has overall best performance. Most importantly, our method is more robust as it is less sensitive to datasets.", "title": "" }, { "docid": "8af3b1f6b06ff91dee4473bfb50c420d", "text": "Crowdsensing technologies are rapidly evolving and are expected to be utilized on commercial applications such as location-based services. Crowdsensing collects sensory data from daily activities of users without burdening users, and the data size is expected to grow into a population scale. However, quality of service is difficult to ensure for commercial use. Incentive design in crowdsensing with monetary rewards or gamifications is, therefore, attracting attention for motivating participants to collect data to increase data quantity. In contrast, we propose Steered Crowdsensing, which controls the incentives of users by using the game elements on location-based services for directly improving the quality of service rather than data size. For a feasibility study of steered crowdsensing, we deployed a crowdsensing system focusing on application scenarios of building processes on wireless indoor localization systems. In the results, steered crowdsensing realized deployments faster than non-steered crowdsensing while having half as many data.", "title": "" }, { "docid": "20ebefc5be0e91e15e4773c633624224", "text": "Effects of different levels of Biomin® IMBO synbiotic, including Enterococcus faecium (as probiotic), and fructooligosaccharides (as prebiotic) on survival, growth performance, and digestive enzyme activities of common carp fingerlings (Cyprinus carpio) were evaluated. The experiment was carried out in four treatments (each with 3 replicates), including T1 = control with non-synbiotic diet, T2 = 0.5 g/kg synbiotic diet, T3 = 1 g/kg synbiotic diet, and T4 = 1.5 g/kg synbiotic diet. In total 300 fish with an average weight of 10 ± 1 g were distributed in 12 tanks (25 animals per 300 l) and were fed experimental diets over a period of 60 days. The results showed that synbiotic could significantly enhance growth parameters (weight gain, length gain, specific growth rate, percentage weight gain) (P < 0.05), but did not exhibit any effect on survival rate (P > 0.05) compared with the control. 
An assay of the digestive enzyme activities demonstrated that the trypsin and chymotrypsin activities of synbiotic groups were considerably increased than those in the control (P < 0.05), but there was no significant difference in the levels of α-amylase, lipase, or alkaline phosphatase (P > 0.05). This study indicated that different levels of synbiotic have the capability to enhance probiotic substitution, to improve digestive enzyme activity which leads to digestive system efficiency, and finally to increase growth. It seems that the studied synbiotic could serve as a good diet supplement for common carp cultures.", "title": "" }, { "docid": "a02cd3bccf9c318f0c7a01fa84bc0f8e", "text": "In the last several years, differential privacy has become the leading framework for private data analysis. It provides bounds on the amount that a randomized function can change as the result of a modification to one record of a database. This requirement can be satisfied by using the exponential mechanism to perform a weighted choice among the possible alternatives, with better options receiving higher weights. However, in some situations the number of possible outcomes is too large to compute all weights efficiently. We present the subsampled exponential mechanism, which scores only a sample of the outcomes. We show that it still preserves differential privacy, and fulfills a similar accuracy bound. Using a clustering application, we show that the subsampled exponential mechanism outperforms a previously published private algorithm and is comparable to the full exponential mechanism but more scalable.", "title": "" }, { "docid": "a1486f866b7db99328b40be2d6e1ba41", "text": "Graphology or Handwriting analysis is a scientific method of identifying, evaluating and understanding of anyone personality through the strokes and pattern revealed by handwriting. Handwriting reveals the true personality including emotional outlay, honesty, fears and defenses and etc. Handwriting stroke reflects the written trace of each individual's rhythm and Style. The image split into two areas: the signature based on three features and application form of letters digit area. In this research performance evaluation is done by calculating mean square error using Back Propagation Neural Network (BPNN).Human behaviour is analyzed on the basis of signature by using neural", "title": "" }, { "docid": "4f6b8ea6fb0884bbcf6d4a6a4f658e52", "text": "Ballistocardiography (BCG) enables the recording of heartbeat, respiration, and body movement data from an unconscious human subject. In this paper, we propose a new heartbeat detection algorithm for calculating heart rate (HR) and heart rate variability (HRV) from the BCG signal. The proposed algorithm consists of a moving dispersion calculation method to effectively highlight the respective heartbeat locations and an adaptive heartbeat peak detection method that can set a heartbeat detection window by automatically predicting the next heartbeat location. To evaluate the proposed algorithm, we compared it with other reference algorithms using a filter, waveform analysis and envelope calculation of signal by setting the ECG lead I as the gold standard. The heartbeat detection in BCG should be able to measure sensitively in the regions for lower and higher HR. However, previous detection algorithms are optimized mainly in the region of HR range (60~90 bpm) without considering the HR range of lower (40~60 bpm) and higher (90~110 bpm) HR. 
Therefore, we propose an improved method that covers a wide HR range of 40~110 bpm. The proposed algorithm detected the heartbeat with greater stability over varying and wider heartbeat intervals compared with the other previous algorithms. Our proposed algorithm achieved a relative accuracy of 98.29% with a root mean square error (RMSE) of 1.83 bpm for HR, as well as coverage of 97.63% and relative accuracy of 94.36% for HRV. We also obtained a root mean square (RMS) value of 1.67 for the separated HR ranges.", "title": "" }, { "docid": "87a256b5e67b97cf4a11b5664a150295", "text": "This paper presents a method for speech emotion recognition using spectrograms and a deep convolutional neural network (CNN). Spectrograms generated from the speech signals are input to the deep CNN. The proposed model, consisting of three convolutional layers and three fully connected layers, extracts discriminative features from spectrogram images and outputs predictions for the seven emotions. In this study, we trained the proposed model on spectrograms obtained from the Berlin emotions dataset. Furthermore, we also investigated the effectiveness of transfer learning for emotion recognition using a pre-trained AlexNet model. Preliminary results indicate that the proposed approach based on a freshly trained model is better than the fine-tuned model, and is capable of predicting emotions accurately and efficiently.", "title": "" }, { "docid": "2876086e4431e8607d5146f14f0c29dc", "text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.", "title": "" }, { "docid": "ffee60d5f6d862115b7d7d2442e1a1b9", "text": "Preventing accidents caused by drowsiness has become a major focus of active safety driving in recent years. It requires an optimal technique to continuously detect drivers' cognitive state related to abilities in perception, recognition, and vehicle control in (near-) real-time. The major challenges in developing such a system include: 1) the lack of a significant index for detecting drowsiness and 2) complicated and pervasive noise interferences in a realistic and dynamic driving environment. In this paper, we develop a drowsiness-estimation system based on electroencephalogram (EEG) by combining independent component analysis (ICA), power-spectrum analysis, correlation evaluations, and a linear regression model to estimate a driver's cognitive state when he/she drives a car in a virtual reality (VR)-based dynamic simulator. The driving error is defined as deviations between the center of the vehicle and the center of the cruising lane in the lane-keeping driving task. Experimental results demonstrate the feasibility of quantitatively estimating drowsiness level using ICA-based multistream EEG spectra. The proposed ICA-based method applied to power spectrum of ICA components can successfully (1) remove most of EEG artifacts, (2) suggest an optimal montage to place EEG electrodes, and (3) estimate the driver's drowsiness fluctuation indexed by the driving performance measure.
Finally, we present a benchmark study in which the accuracy of ICA-component-based alertness estimates compares favorably to scalp-EEG based.", "title": "" }, { "docid": "d4da4c9bc129a15a8f7b7094216bc4b2", "text": "This paper presents a physical description of two specific aspects in drain-extended MOS transistors, i.e., quasi-saturation and impact-ionization effects. The 2-D device simulator Medici provides the physical insights, and both the unique features are originally attributed to the Kirk effect. The transistor dc model is derived from regional analysis of carrier transport in the intrinsic MOS and the drift region. The substrate-current equations, considering extra impact-ionization factors in the drift region, are also rigorously derived. The proposed model is primarily validated by MATLAB program and exhibits excellent scalability for various transistor dimensions, drift-region doping concentration, and voltage-handling capability.", "title": "" }, { "docid": "9f066ec1613ebea914e635c3505a2728", "text": "Class imbalance is often a problem in various real-world data sets, where one class (i.e. the minority class) contains a small number of data points and the other (i.e. the majority class) contains a large number of data points. It is notably difficult to develop an effective model using current data mining and machine learning algorithms without considering data preprocessing to balance the imbalanced data sets. Random undersampling and oversampling have been used in numerous studies to ensure that the different classes contain the same number of data points. A classifier ensemble (i.e. a structure containing several classifiers) can be trained on several different balanced data sets for later classification purposes. In this paper, we introduce two undersampling strategies in which a clustering technique is used during the data preprocessing step. Specifically, the number of clusters in the majority class is set to be equal to the number of data points in the minority class. The first strategy uses the cluster centers to represent the majority class, whereas the second strategy uses the nearest neighbors of the cluster centers. A further study was conducted to examine the effect on performance of the addition or deletion of 5 to 10 cluster centers in the majority class. The experimental results obtained using 44 small-scale and 2 large-scale data sets revealed that the clustering-based undersampling approach with the second strategy outperformed five state-of-the-art approaches. Specifically, this approach combined with a single multilayer perceptron classifier and C4.5 decision tree classifier ensembles delivered optimal performance over both smalland large-scale data sets. © 2017 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "f31fa4bfc30cc4f0eff4399d16a077dd", "text": "BACKGROUND:Immunohistochemistry allowed recent recognition of a distinct focal gastritis in Crohn's disease. Following reports of lymphocytic colitis and small bowel enteropathy in children with regressive autism, we aimed to see whether similar changes were seen in the stomach. We thus studied gastric antral biopsies in 25 affected children, in comparison to 10 with Crohn's disease, 10 with Helicobacter pylori infection, and 10 histologically normal controls. All autistic, Crohn's, and normal patients were H. pylori negative.METHODS:Snap-frozen antral biopsies were stained for CD3, CD4, CD8, γδ T cells, HLA-DR, IgG, heparan sulphate proteoglycan, IgM, IgA, and C1q. 
Cell proliferation was assessed with Ki67.RESULTS:Distinct patterns of gastritis were seen in the disease states: diffuse, predominantly CD4+ infiltration in H. pylori, and focal-enhanced gastritis in Crohn's disease and autism, the latter distinguished by striking dominance of CD8+ cells, together with increased intraepithelial lymphocytes in surface, foveolar and glandular epithelium. Proliferation of foveolar epithelium was similarly increased in autism, Crohn's disease and H. pylori compared to controls. A striking finding, seen only in 20/25 autistic children, was colocalized deposition of IgG and C1q on the subepithelial basement membrane and the surface epithelium.CONCLUSIONS:These findings demonstrate a focal CD8-dominated gastritis in autistic children, with novel features. The lesion is distinct from the recently recognized focal gastritis of Crohn's disease, which is not CD8-dominated. As in the small intestine, there is epithelial deposition of IgG.", "title": "" }, { "docid": "1708974f940677a9242d23d12e02046d", "text": "Previous algorithms for supervised sequence learning are based on dynamic recurrent networks. This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: The first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly. The method offers the potential for STM storage efficiency: A single weight (instead of a full-fledged unit) may be sufficient for storing temporal information. Various learning methods are derived. Two experiments with unknown time delays illustrate the approach. One experiment shows how the system can be used for adaptive temporary variable binding.", "title": "" }, { "docid": "df679dcd213842a786c1ad9587c66f77", "text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. 
Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in", "title": "" }, { "docid": "90fdac33a73d1615db1af0c94016da5b", "text": "AIM OF THE STUDY\nThe purpose of this study was to define antidiabetic effects of fruit of Vaccinium arctostaphylos L. (Ericaceae) which is traditionally used in Iran for improving of health status of diabetic patients.\n\n\nMATERIALS AND METHODS\nFirstly, we examined the effect of ethanolic extract of Vaccinium arctostaphylos fruit on postprandial blood glucose (PBG) after 1, 3, 5, 8, and 24h following a single dose administration of the extract to alloxan-diabetic male Wistar rats. Also oral glucose tolerance test was carried out. Secondly, PBG was measured at the end of 1, 2 and 3 weeks following 3 weeks daily administration of the extract. At the end of treatment period the pancreatic INS and cardiac GLUT-4 mRNA expression and also the changes in the plasma lipid profiles and antioxidant enzymes activities were assessed. Finally, we examined the inhibitory activity of the extract against rat intestinal α-glucosidase.\n\n\nRESULTS\nThe obtained results showed mild acute (18%) and also significant chronic (35%) decrease in the PBG, significant reduction in triglyceride (47%) and notable rising of the erythrocyte superoxide dismutase (57%), glutathione peroxidase (35%) and catalase (19%) activities due to treatment with the extract. Also we observed increased expression of GLUT-4 and INS genes in plant extract treated Wistar rats. Furthermore, in vitro studies displayed 47% and 56% inhibitory effects of the extract on activity of intestinal maltase and sucrase enzymes, respectively.\n\n\nCONCLUSIONS\nFindings of this study allow us to establish scientifically Vaccinium arctostaphylos fruit as a potent antidiabetic agent with antihyperglycemic, antioxidant and triglyceride lowering effects.", "title": "" }, { "docid": "ea739d96ee0558fb23f0a5a020b92822", "text": "Text and structural data mining of web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5 October 2008 to 21 March 2009. Link analysis reveals communities for targeted PHC. 
Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like illness patient report data. We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags.", "title": "" }, { "docid": "5d848875f6aa3c37898b0ac10b5accca", "text": "Eliciting security requirements Security requirements exist because people and the negative agents that they create (such as computer viruses) pose real threats to systems. Security differs from all other specification areas in that someone is deliberately threatening to break the system. Employing use and misuse cases to model and analyze scenarios in systems under design can improve security by helping to mitigate threats. Some misuse cases occur in highly specific situations, whereas others continually threaten systems. For instance, a car is most likely to be stolen when parked and unattended, whereas a Web server might suffer a denial-of-service attack at any time. You can develop misuse and use cases recursively, going from system to subsystem levels or lower as necessary. Lower-level cases can highlight aspects not considered at higher levels, possibly forcing another analysis. The approach offers rich possibilities for exploring, understanding, and validating the requirements in any direction. Drawing the agents and misuse cases explicitly helps focus attention on the elements of the scenario. Let’s compare Figure 1 to games such as chess or Go. A team’s best strategy consists of thinking ahead to the other team’s best move and acting to block it. In Figure 1, the use cases appear on the left; the misuse cases are on the right. The misuse threat is car theft, the use-case player is the lawful driver, and the misuse-case player the car thief. The driver’s freedom to drive the car is at risk if the thief can steal the car. The driver must be able to lock the car—a derived requirement—to mitigate the threat. This is at the top level of analysis. The next level begins when you consider the thief’s response. If he breaks the door lock and shorts the ignition, this requires another mitigating approach, such as locking the transmission. In this focus", "title": "" }, { "docid": "162f080444935117c5125ae8b7c3d51e", "text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. 
Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1", "title": "" }, { "docid": "2578607ec2e7ae0d2e34936ec352ff6e", "text": "AI Innovation in Industry is a new department for IEEE Intelligent Systems, and this paper examines some of the basic concerns and uses of AI for big data (AI has been used in several different ways to facilitate capturing and structuring big data, and it has been used to analyze big data for key insights).", "title": "" }, { "docid": "e9497a16e9d12ea837c7a0ec44d71860", "text": "This article surveys existing and emerging disaggregation techniques for energy-consumption data and highlights signal features that might be used to sense disaggregated data in an easily installed and cost-effective manner.", "title": "" } ]
scidocsrr
85f5d5607fe21dfdaa8559785f884c4c
A Dual-polarized Planar Antenna Using Four Folded Dipoles and Its Array for Base Stations
[ { "docid": "410b40322406d7724b8d4720a075a7f3", "text": "A new 45 degrees dual-polarized magnetoelectric dipole antenna is proposed. The antenna is excited by two -shaped probes placed at a convenient location. The measured overlapped impedance bandwidth is 48% with standing-wave ratio (SWR) 1.5 from 1.69 to 2.76 GHz. The measured gains vary from 7.6 to 9.3 dBi and from 7.6 to 9.4 dBi for port 1 and port 2, respectively. The isolation between the two ports is larger than 30 dB. The proposed antenna achieves a low cross-polarization level of less than 21 dB and a low back radiation level of less than 29 dB over the operating frequency range. With a broadband 90 phase shifter and a power divider, the proposed antenna can radiate circularly-polarized (CP) wave and exhibit a wide impedance bandwidth (SWR 2) of 90% from 1.23 to 3.23 GHz, which covers the whole 3-dB axial-ratio (AR) bandwidth of 82% from 1.28 to 3.05 GHz. In this operation frequency band, the proposed CP antenna has a broadside gain of larger than 5 dBi above 1.45 GHz. Considering the common overlapped bandwidth limited by the impedance, AR, and gain, the proposed antenna exhibits an effective bandwidth of 71%.", "title": "" } ]
[ { "docid": "4c5700a65040c08534d6d8cbac449073", "text": "The proliferation of social media in the recent past has provided end users a powerful platform to voice their opinions. Businesses (or similar entities) need to identify the polarity of these opinions in order to understand user orientation and thereby make smarter decisions. One such application is in the field of politics, where political entities need to understand public opinion and thus determine their campaigning strategy. Sentiment analysis on social media data has been seen by many as an effective tool to monitor user preferences and inclination. Popular text classification algorithms like Naive Bayes and SVM are Supervised Learning Algorithms which require a training data set to perform Sentiment analysis. The accuracy of these algorithms is contingent upon the quantity as well as the quality (features and contextual relevance) of the labeled training data. Since most applications suffer from lack of training data, they resort to cross domain sentiment analysis which misses out on features relevant to the target data. This, in turn, takes a toll on the overall accuracy of text classification. In this paper, we propose a two stage framework which can be used to create a training data from the mined Twitter data without compromising on features and contextual relevance. Finally, we propose a scalable machine learning model to predict the election results using our two stage framework.", "title": "" }, { "docid": "04e4c1b80bcf1a93cafefa73563ea4d3", "text": "The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.", "title": "" }, { "docid": "be297173873f20298dc96bd80d5bf50f", "text": "In recent years, vector-based machine learning algorithms, such as random forests, support vector machines, and 1-D convolutional neural networks, have shown promising results in hyperspectral image classification. Such methodologies, nevertheless, can lead to information loss in representing hyperspectral pixels, which intrinsically have a sequence-based data structure. A recurrent neural network (RNN), an important branch of the deep learning family, is mainly designed to handle sequential data. Can sequence-based RNN be an effective method of hyperspectral image classification? In this paper, we propose a novel RNN model that can effectively analyze hyperspectral pixels as sequential data and then determine information categories via network reasoning. As far as we know, this is the first time that an RNN framework has been proposed for hyperspectral image classification. 
Specifically, our RNN makes use of a newly proposed activation function, parametric rectified tanh (PRetanh), for hyperspectral sequential data analysis instead of the popular tanh or rectified linear unit. The proposed activation function makes it possible to use fairly high learning rates without the risk of divergence during the training procedure. Moreover, a modified gated recurrent unit, which uses PRetanh for hidden representation, is adopted to construct the recurrent layer in our network to efficiently process hyperspectral data and reduce the total number of parameters. Experimental results on three airborne hyperspectral images suggest competitive performance in the proposed mode. In addition, the proposed network architecture opens a new window for future research, showcasing the huge potential of deep recurrent networks for hyperspectral data analysis.", "title": "" }, { "docid": "ef31658d20741eb963125336d9861198", "text": "In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.", "title": "" }, { "docid": "2e0da6288ec95c989afa84811a0aea6e", "text": "Graph keyword search has drawn many research interests, since graph models can generally represent both structured and unstructured databases and keyword searches can extract valuable information for users without the knowledge of the underlying schema and query language. In practice, data graphs can be extremely large, e.g., a Web-scale graph containing billions of vertices. The state-of-the-art approaches employ centralized algorithms to process graph keyword searches, and thus they are infeasible for such large graphs, due to the limited computational power and storage space of a centralized server. To address this problem, we investigate keyword search for Web-scale graphs deployed in a distributed environment. We first give a naive search algorithm to answer the query efficiently. However, the naive search algorithm uses a flooding search strategy that incurs large time and network overhead. To remedy this shortcoming, we then propose a signature-based search algorithm. Specifically, we design a vertex signature that encodes the shortest-path distance from a vertex to any given keyword in the graph. As a result, we can find query answers by exploring fewer paths, so that the time and communication costs are low. 
Moreover, we reorganize the graph data in the cluster after its initial random partitioning so that the signature-based techniques are more effective. Finally, our experimental results demonstrate the feasibility of our proposed approach in performing keyword searches over Web-scale graph data.", "title": "" }, { "docid": "c6cdc9a18c1e3dc0c58331fc6995c42e", "text": "There is no universal gold standard classification system for mandibular condylar process fractures. A clinically relevant mandibular condyle classification system should be easy to understand, and be easy to recall, for implementation into the management of a condylar fracture. An accurate appreciation of the location of the mandibular condylar fracture assists with the determination of either an operative or nonoperative management regimen.", "title": "" }, { "docid": "ade2fd7f83a78a5a7d78c7e8286aeb18", "text": "We present a method for solving the independent set formulation of the graph coloring problem (where there is one variable for each independent set in the graph). We use a column generation method for implicit optimization of the linear program at each node of the branch-and-bound tree. This approach, while requiring the solution of a diicult subproblem as well as needing sophisticated branching rules, solves small to moderate size problems quickly. We have also implemented an exact graph coloring algorithm based on DSATUR for comparison. Implementation details and computational experience are presented.", "title": "" }, { "docid": "b0b193c3c72bb1543b62545d496cdbe0", "text": "The Generalized Traveling Salesman Problem is a variation of the well-known Traveling Salesman Problem in which the set of nodes is divided into clusters; the objective is to find a minimum-cost tour passing through one node from each cluster. We present an effective heuristic for this problem. The method combines a genetic algorithm (GA) with a local tour improvement heuristic. Solutions are encoded using random keys, which circumvent the feasibility problems encountered when using traditional GA encodings. On a set of 41 standard test problems with up to 442 nodes, the heuristic found solutions that were optimal in most cases and were within 1% of optimality in all but the largest problems, with computation times generally within 10 seconds for the smaller problems and a few minutes for the larger ones. The heuristic outperforms all other heuristics published to date in both solution quality and computation time.", "title": "" }, { "docid": "33f84fb174c722f8c8405f474875cab6", "text": "Contrary to most traditional approaches, ideologies are defined here within a multidisciplinary framework that combines a social, cognitive and discursive component. As 'systems of ideas', ideologies are sociocognitively defined as shared representations of social groups, and more specifically as the `axiomatic ' principies of such representations. As the basis of a social group's selfimage, ideologies organize its identity, actions, aims, norms and values, and resources as well as its relations to other social groups. Ideologies are distinct from the sociocognitive basis of broader cultural communities, within which different ideological groups share fundamental beliefs such as their cultural knowledge. Ideologies are expressed and generally reproduced in the social practices of their members, and more particularly acquired, confirmed, changed and perpetuated through discourse. 
Although general properties of language and discourse are not, as such, ideologically marked, systematic discourse analysis offers powerful methods to study the structures and functions of underlying' ideologies. The ideological polarization between ingroups and outgroups— a prominent feature of the structure of ideologies—may also be systematically studied at all levels of text and talk, e.g. by analysing how members of ingroups typically emphasize their own good deeds and properties and the bad ones of the outgroup, and mitigate or deny their own bad ones and the good ones of the outgroup.", "title": "" }, { "docid": "058a128a15c7d0e343adb3ada80e18d3", "text": "PURPOSE OF REVIEW\nOdontogenic causes of sinusitis are frequently missed; clinicians often overlook odontogenic disease whenever examining individuals with symptomatic rhinosinusitis. Conventional treatments for chronic rhinosinusitis (CRS) will often fail in odontogenic sinusitis. There have been several recent developments in the understanding of mechanisms, diagnosis, and treatment of odontogenic sinusitis, and clinicians should be aware of these advances to best treat this patient population.\n\n\nRECENT FINDINGS\nThe majority of odontogenic disease is caused by periodontitis and iatrogenesis. Notably, dental pain or dental hypersensitivity is very commonly absent in odontogenic sinusitis, and symptoms are very similar to those seen in CRS overall. Unilaterality of nasal obstruction and foul nasal drainage are most suggestive of odontogenic sinusitis, but computed tomography is the gold standard for diagnosis. Conventional panoramic radiographs are very poorly suited to rule out odontogenic sinusitis, and cannot be relied on to identify disease. There does not appear to be an optimal sequence of treatment for odontogenic sinusitis; the dental source should be addressed and ESS is frequently also necessary to alleviate symptoms.\n\n\nSUMMARY\nOdontogenic sinusitis has distinct pathophysiology, diagnostic considerations, microbiology, and treatment strategies whenever compared with chronic rhinosinusitis. Clinicians who can accurately identify odontogenic sources can increase efficacy of medical and surgical treatments and improve patient outcomes.", "title": "" }, { "docid": "3bfb0d2304880065227c4563c6646ce1", "text": "We propose an automatic video inpainting algorithm which relies on the optimisation of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Furthermore, we achieve this in an order of magnitude less execution time with respect to the state-of-the-art. We are also able to achieve good quality results on high definition videos. Finally, we provide specific algorithmic details to make implementation of our algorithm as easy as possible. The resulting algorithm requires no segmentation or manual input other than the definition of the inpainting mask, and can deal with a wider variety of situations than is handled by previous work.", "title": "" }, { "docid": "3e5e7e38068da120639c3fcc80227bf8", "text": "The ferric reducing antioxidant power (FRAP) assay was recently adapted to a microplate format. However, microplate-based FRAP (mFRAP) assays are affected by sample volume and composition. This work describes a calibration process for mFRAP assays which yields data free of volume effects. 
From the results, the molar absorptivity (ε) for the mFRAP assay was 141,698 M(-1) cm(-1) for gallic acid, 49,328 M(-1) cm(-1) for ascorbic acid, and 21,606 M(-1) cm(-1) for ammonium ferrous sulphate. The significance of ε (M(-1) cm(-1)) is discussed in relation to mFRAP assay sensitivity, minimum detectable concentration, and the dimensionless FRAP-value. Gallic acid showed 6.6 mol of Fe(2+) equivalents compared to 2.3 mol of Fe(+2) equivalents for ascorbic acid. Application of the mFRAP assay to Manuka honey samples (rated 5+, 10+, 15+, and 18+ Unique Manuka Factor; UMF) showed that FRAP values (0.54-0.76 mmol Fe(2+) per 100g honey) were strongly correlated with UMF ratings (R(2)=0.977) and total phenols content (R(2) = 0.982)whilst the UMF rating was correlated with the total phenols (R(2) = 0.999). In conclusion, mFRAP assay results were successfully standardised to yield data corresponding to 1-cm spectrophotometer which is useful for quality assurance purposes. The antioxidant capacity of Manuka honey was found to be directly related to the UMF rating.", "title": "" }, { "docid": "d2430788229faccdeedd080b97d1741c", "text": "Potentially, empowerment has much to offer health promotion. However, some caution needs to be exercised before the notion is wholeheartedly embraced as the major goal of health promotion. The lack of a clear theoretical underpinning, distortion of the concept by different users, measurement ambiguities, and structural barriers make 'empowerment' difficult to attain. To further discussion, th is paper proposes several assertions about the definition, components, process and outcome of 'empowerment', including the need for a distinction between psychological and community empowerment. These assertions and a model of community empowerment are offered in an attempt to clarify an important issue for health promotion.", "title": "" }, { "docid": "b622e8a511698116be2b2831e8ea7989", "text": "BACKGROUND\nThe large and growing number of published studies, and their increasing rate of publication, makes the task of identifying relevant studies in an unbiased way for inclusion in systematic reviews both complex and time consuming. Text mining has been offered as a potential solution: through automating some of the screening process, reviewer time can be saved. The evidence base around the use of text mining for screening has not yet been pulled together systematically; this systematic review fills that research gap. Focusing mainly on non-technical issues, the review aims to increase awareness of the potential of these technologies and promote further collaborative research between the computer science and systematic review communities.\n\n\nMETHODS\nFive research questions led our review: what is the state of the evidence base; how has workload reduction been evaluated; what are the purposes of semi-automation and how effective are they; how have key contextual problems of applying text mining to the systematic review field been addressed; and what challenges to implementation have emerged? 
We answered these questions using standard systematic review methods: systematic and exhaustive searching, quality-assured data extraction and a narrative synthesis to synthesise findings.\n\n\nRESULTS\nThe evidence base is active and diverse; there is almost no replication between studies or collaboration between research teams and, whilst it is difficult to establish any overall conclusions about best approaches, it is clear that efficiencies and reductions in workload are potentially achievable. On the whole, most suggested that a saving in workload of between 30% and 70% might be possible, though sometimes the saving in workload is accompanied by the loss of 5% of relevant studies (i.e. a 95% recall).\n\n\nCONCLUSIONS\nUsing text mining to prioritise the order in which items are screened should be considered safe and ready for use in 'live' reviews. The use of text mining as a 'second screener' may also be used cautiously. The use of text mining to eliminate studies automatically should be considered promising, but not yet fully proven. In highly technical/clinical areas, it may be used with a high degree of confidence; but more developmental and evaluative work is needed in other disciplines.", "title": "" }, { "docid": "fc69f1c092bae3328ce9c5975929e92c", "text": "In allusion to the “on-line beforehand decision-making, real time matching”, this paper proposes the stability control flow based on PMU for interconnected power system, which is a real-time stability control. In this scheme, preventive control, emergency control and corrective control are designed to a closed-loop rolling control process, it will protect the stability of power system. Then it ameliorates the corrective control process, and presents a new control method which is based on PMU and EEAC method. This scheme can ensure the real-time quality and advance the veracity for the corrective control.", "title": "" }, { "docid": "09021eddb5379ad2792f6dd01db93a90", "text": "Surface-mounted permanent magnet synchronous machine with concentrated windings (cwSPMSM) is a high-performance drive machine and has been adopted in many applications. The difficulty of implementing its sensorless control at low and zero speeds is its multiple saliencies, which is much more significant than most other ac machines. The traditional decoupling methods provide successful results only under the condition that high-order saliencies are not stronger than half of the primary saliency. Furthermore, the behavior of the multiple saliencies is principally frequency dependent. Based on the characteristics of such machines, this paper proposes a multisignal injection method for realizing sensorless control. This method injects multiple high-frequency signals with different frequencies and magnitudes into the machine. Different frequency components in the response current signals are demodulated and then combined together to get the clear primary saliency signal, which is used to identify the rotor position. This new method was validated using a cwSPMSM at low speed. The experimental results proved the effectiveness and accuracy of the new method.", "title": "" }, { "docid": "4ad106897a19830c80a40e059428f039", "text": "In 1972, and later in 1979, at the peak of the golden era of Good Old Fashioned Artificial Intelligence (GOFAI), the voice of philosopher Hubert Dreyfus made itself heard as one of the few calls against the hubristic programme of modelling the human mind as a mechanism of symbolic information processing (Dreyfus, 1979). 
He did not criticise particular solutions to specific problems; instead his deep concern was with the very foundations of the programme. His critical stance was unusual, at least for most GOFAI practitioners, in that it did not rely on technical issues, but on a philosophical position emanating from phenomenology and existentialism, a fact contributing to his claims being largely ignored or dismissed for a long time by the AI community. But, for the most part, he was eventually proven right. AI’s over-reliance on worldmodelling and planning went against the evidence provided by phenomenology of human activity as situated and with a clear and ever-present focus of practical concern – the body and not some algorithm is the originating locus of intelligent activity (if by intelligent we understand intentional, directed and flexible), and the world is not the sum total of all available facts, but the world-as-it-is-for-this-body. Such concerns were later vindicated by the Brooksian revolution in autonomous robotics with its foundations on embodiment, situatedness and de-centralised mechanisms (Brooks, 1991). Brooks’ practical and methodological preoccupations – building robots largely based on biologically plausible principles and capable of acting in the real world – proved parallel, despite his claim that his approach was not “German philosophy”, to issues raised by Dreyfus. Putting robotics back as the acid test of AI, as oppossed to playing chess and proving theorems, is now often seen as a positive response to Dreyfus’ point that AI was unable to capture true meaning by the summing of meaningless processes. This criticism was later devastatingly recast in Searle’s Chinese Room argument (1980), and extended by Harnad’s Symbol Grounding Problem (1990). Meaningful activity – that is, meaningful for the agent and not only for the designer – must obtain through sensorimotor grounding in the agent’s world, and for this both a body and world are needed. Following these developments, work in autonomous robotics and new AI since the 1990s rebelled against pure connectionism because of its lack of biological plausibility and also because most of connectionist research was carried out in vacuo – it was compellingly argued that neural network models as simple input/output processing units are meaningless for modelling the cognitive capabilities of insects, let alone humans, unless they are embedded in a closed sensorimotor loop of interaction with a world (Cliff, 1991). Objective meaning, that is meaningful internal states and states of the world, can only obtain in an embodied agent whose effector and sensor activities become coordinated", "title": "" }, { "docid": "4ec6229ae75b13bbcc429f07eda0fb4a", "text": "Face detection is a well-explored problem. Many challenges on face detectors like extreme pose, illumination, low resolution and small scales are studied in the previous work. However, previous proposed models are mostly trained and tested on good-quality images which are not always the case for practical applications like surveillance systems. In this paper, we first review the current state-of-the-art face detectors and their performance on benchmark dataset FDDB, and compare the design protocols of the algorithms. Secondly, we investigate their performance degradation while testing on low-quality images with different levels of blur, noise, and contrast. Our results demonstrate that both hand-crafted and deep-learning based face detectors are not robust enough for low-quality images. 
It inspires researchers to produce more robust design for face detection in the wild.", "title": "" }, { "docid": "2fa6f1f630685afd06f28c64a8cb94be", "text": "Recent advances in deep learning (DL) allow for solving complex AI problems that used to be considered very hard. While this progress has advanced many fields, it is considered to be bad news for Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs), the security of which rests on the hardness of some learning problems. In this paper, we introduce DeepCAPTCHA, a new and secure CAPTCHA scheme based on adversarial examples, an inherit limitation of the current DL networks. These adversarial examples are constructed inputs, either synthesized from scratch or computed by adding a small and specific perturbation called adversarial noise to correctly classified items, causing the targeted DL network to misclassify them. We show that plain adversarial noise is insufficient to achieve secure CAPTCHA schemes, which leads us to introduce immutable adversarial noise—an adversarial noise that is resistant to removal attempts. In this paper, we implement a proof of concept system, and its analysis shows that the scheme offers high security and good usability compared with the best previously existing CAPTCHAs.", "title": "" }, { "docid": "ba8658fafac3c007ee3ebfadb0144ec5", "text": "In general, as the amount of training data is increased, a deep learning model gains a higher training accuracy. To assign labels to training data for use in supervised learning, human resources are required, which incur temporal and economic costs. Therefore, if a sufficient amount of training data cannot be constructed owing to existing cost constraints, it becomes necessary to select the training data that can maximize the accuracy of the deep learning model with only a limited amount of training data. However, although conventional studies on such training data selections take into consideration the training data labeling cost, the selection cost required in the training data selection process is not taken into consideration, which is a problem. Therefore, with the consideration of the selection cost constraint in addition to the data labeling cost constraint, we introduce a training data selection problem and propose novel algorithms to solve it. The advantage of the proposed algorithms is that they can be applied to any network model or data model of deep learning. The performance was verified through experiments using various network models and data.", "title": "" } ]
scidocsrr
639fa1dc157c2d50012e4137c3269908
Numerical Algebraic Geometry for Optimal Control Applications
[ { "docid": "01d77c925c62a7d26ff294231b449e95", "text": "Al~tmd--We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and oo-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness.", "title": "" } ]
[ { "docid": "81d11f44d55e57d95a04f9a1ea35223c", "text": "In many research fields such as Psychology, Linguistics, Cognitive Science and Artificial Intelligence, computing semantic similarity between words is an important issue. In this paper a new semantic similarity metric, that exploits some notions of the feature based theory of similarity and translates it into the information theoretic domain, which leverages the notion of Information Content (IC), is presented. In particular, the proposed metric exploits the notion of intrinsic IC which quantifies IC values by scrutinizing how concepts are arranged in an ontological structure. In order to evaluate this metric, an on line experiment asking the community of researchers to rank a list of 65 word pairs has been conducted. The experiment’s web setup allowed to collect 101 similarity ratings and to differentiate native and non-native English speakers. Such a large and diverse dataset enables to confidently evaluate similarity metrics by correlating them with human assessments. Experimental evaluations using WordNet indicate that the proposed metric, coupled with the notion of intrinsic IC, yields results above the state of the art. Moreover, the intrinsic IC formulation also improves the accuracy of other IC-based metrics. In order to investigate the generality of both the intrinsic IC formulation and proposed similarity metric a further evaluation using the MeSH biomedical ontology has been performed. Even in this case significant results were obtained. The proposed metric and several others have been implemented in the Java WordNet Similarity Library.", "title": "" }, { "docid": "f79ed9bef2b8e66822be60037dd63e19", "text": "Since AlexNet was developed and applied to the ImageNet classi cation competition in 2012 [1], the quantity of research on convolutional networks for deep learning applications has increased remarkably. In 2015, the top 5 classi cation error was reduced to 3.57%, with Microsoft's Residual Network [2]. The previous top 5 classi cation error was 6.67%, achieved by GoogLeNet [3]. In recent years, new arti cial neural network architectures have been developed which improve upon previous architectures. Speci cally, these are the inception modules in GoogLeNet, and residual networks, in Microsoft's ResNet [2]. Here we will examine convolutional neural networks (convnets) for image recognition, and then provide an explanation for their architecture. The role of various convnet hyperparameters will be examined. The question of how to correctly size a neural network, in terms of the number of layers, and layer size, for example, will be considered. An example for determining GPU memory required for training a de ned network architecture is presented. The training method of backpropagation will be discussed in the context of past and recent developments which have improved training e ectiveness. Other techniques and considerations related to network training, such as choosing an activation function, and proper weight initialization, are discussed brie y. Finally, recent developments in convnet architecture are reviewed.", "title": "" }, { "docid": "97b9627380d9a9fc00dfa63661d199f9", "text": "We study sequences of consumption in which the same item may be consumed multiple times. We identify two macroscopic behavior patterns of repeated consumptions. First, in a given user’s lifetime, very few items live for a long time. 
Second, the last consumptions of an item exhibit growing inter-arrival gaps consistent with the notion of increasing boredom leading up to eventual abandonment. We then present what is to our knowledge the first holistic model of sequential repeated consumption, covering all observed aspects of this behavior. Our simple and purely combinatorial model includes no planted notion of lifetime distributions or user boredom; nonetheless, the model correctly predicts both of these phenomena. Further, we provide theoretical analysis of the behavior of the model confirming these phenomena. Additionally, the model quantitatively matches a number of microscopic phenomena across a broad range of datasets. Intriguingly, these findings suggest that the observation in a variety of domains of increasing user boredom leading to abandonment may be explained simply by probabilistic conditioning on an extinction event in a simple model, without resort to explanations based on complex human dynamics.", "title": "" }, { "docid": "0ce46853852a20e5e0ab9aacd3ec20c1", "text": "In immunocompromised subjects, Epstein-Barr virus (EBV) infection of terminally differentiated oral keratinocytes may result in subclinical productive infection of the virus in the stratum spinosum and in the stratum granulosum with shedding of infectious virions into the oral fluid in the desquamating cells. In a minority of cases this productive infection with dysregulation of the cell cycle of terminally differentiated epithelial cells may manifest as oral hairy leukoplakia. This is a white, hyperkeratotic, benign lesion of low morbidity, affecting primarily the lateral border of the tongue. Factors that determine whether productive EBV replication within the oral epithelium will cause oral hairy leukoplakia include the fitness of local immune responses, the profile of EBV gene expression, and local environmental factors.", "title": "" }, { "docid": "44edb3321b23f0f0af81ad70d67e940e", "text": "Anomaly detection is an important problem with multiple applications, and thus has been studied for decades in various research domains. In the past decade there has been a growing interest in anomaly detection in data represented as networks, or graphs, largely because of their robust expressiveness and their natural ability to represent complex relationships. Originally, techniques focused on anomaly detection in static graphs, which do not change and are capable of representing only a single snapshot of data. As real-world networks are constantly changing, there has been a shift in focus to dynamic graphs, which evolve over time. In this survey, we aim to provide a comprehensive overview of anomaly detection in dynamic networks, concentrating on the state-of-the-art methods. We first describe four types of anomalies that arise in dynamic networks, providing an intuitive explanation, applications, and a concrete example for each. Having established an idea for what constitutes an anomaly, a general two-stage approach to anomaly detection in dynamic networks that is common among the methods is presented. We then construct a two-tiered taxonomy, first partitioning the methods based on the intuition behind their approach, and subsequently subdividing them based on the types of anomalies they detect. Within each of the tier one categories—community, compression, decomposition, distance, and probabilistic model based—we highlight the major similarities and differences, showing the wealth of techniques derived from similar conceptual approaches. 
© 2015 The Authors. WIREs Computational Statistics published by Wiley Periodicals, Inc.", "title": "" }, { "docid": "523d11b771c5ea8776217eed253e6817", "text": "Incremental learning (IL) is an important task aimed to increase the capability of a trained model, in terms of the number of classes recognizable by the model. The key problem in this task is the requirement of storing data (e.g. images) associated with existing classes, while training the classifier to learn new classes. However, this is impractical as it increases the memory requirement at every incremental step, which makes it impossible to implement IL algorithms on the edge devices with limited memory. Hence, we propose a novel approach, called ‘Learning without Memorizing (LwM)’, to preserve the information with respect to existing (base) classes, without storing any of their data, while making the classifier progressively learn the new classes. In LwM, we present an information preserving penalty: Attention Distillation Loss (LAD), and demonstrate that penalizing the changes in classifiers’ attention maps helps to retain information of the base classes, as new classes are added. We show that adding LAD to the distillation loss which is an existing information preserving loss consistently outperforms the state-of-the-art performance in the iILSVRC-small and iCIFAR-100 datasets in terms of the overall accuracy of base and incrementally learned classes.", "title": "" }, { "docid": "d45c7f39c315bf5e8eab3052e75354bb", "text": "Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images require the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world videos. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication.", "title": "" }, { "docid": "cc1b8f1689c45c53e461dc268c664f53", "text": "This paper presents a one switch silicon carbide JFET normally-ON resonant inverter applied to induction heating for consumer home cookers. The promising characteristics of silicon carbide (SiC) devices need to be verified in practical applications; therefore, the objective of this work is to compare Si IGBTs and normally-ON commercially available JFET in similar operating conditions, with two similar boards. 
The paper describes the gate circuit implemented, the design of the basic converter in ideal operation, namely Zero Voltage Switching (ZVS) and Zero Derivative Voltage Switching (ZVDS), as well as some preliminary comparative results for 700W and 2 kW output power delivered to an induction heating coil and load.", "title": "" }, { "docid": "af02dd142aa378632a9222ed19c57968", "text": "Commodity CPU architectures, such as ARM and Intel CPUs, have started to offer trusted computing features in their CPUs aimed at displacing dedicated trusted hardware. Unfortunately, these CPU architectures raise serious challenges to building trusted systems because they omit providing secure resources outside the CPU perimeter. This paper shows how to overcome these challenges to build software systems with security guarantees similar to those of dedicated trusted hardware. We present the design and implementation of a firmware-based TPM 2.0 (fTPM) leveraging ARM TrustZone. Our fTPM is the reference implementation of a TPM 2.0 used in millions of mobile devices. We also describe a set of mechanisms needed for the fTPM that can be useful for building more sophisticated trusted applications beyond just a TPM.", "title": "" }, { "docid": "d5b304f3ee80b07a85e1c75264cce9b1", "text": "Personal robotic assistants help reducing the manual efforts being put by humans in their day-to-day tasks. In this paper, we develop a voice-controlled personal assistant robot. The human voice commands are given to the robotic assistant remotely, by using a smart mobile phone. The robot can perform different movements, turns, start/stop operations and relocate an object from one place to another. The voice commands are processed in real-time, using an online cloud server. The speech signal commands converted to text form are communicated to the robot over a Bluetooth network. The personal assistant robot is developed on a micro-controller based platform and can be aware of its current location. The effectiveness of the voice control communicated over a distance is measured through several experiments. Performance evaluation is carried out with encouraging results of the initial experiments. Possible improvements are also discussed towards potential applications in home, hospitals and industries.", "title": "" }, { "docid": "69b831bb25e5ad0f18054d533c313b53", "text": "In recent years, indoor positioning has emerged as a critical function in many end-user applications; including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. 
Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space.", "title": "" }, { "docid": "f5ef6e7c9f700b2da6eea75d750a1b1f", "text": "The long-term consequences of early environmental experiences for development have been explored extensively in animal models to better understand the mechanisms mediating risk of psychopathology in individuals exposed to childhood adversity. One common feature of these models is disruption of the mother-infant relationship which is associated with impairments in stress responsivity and maternal behavior in adult offspring. These behavioral and physiological characteristics are associated with stable changes in gene expression which emerge in infancy and are sustained into adulthood. Recent evidence suggests that these long-term effects may be mediated by epigenetic modification to the promoter regions of steroid receptor genes. In particular, DNA methylation may be critical to maternal effects on gene expression and thus generate phenotypic differentiation of offspring and, through effects on maternal behavior of offspring, mediate the transmission of these effects across generations. In this review we explore evidence for the influence of mother-infant interactions on the epigenome and consider evidence for and the implications of such epigenetic effects for human mental health.", "title": "" }, { "docid": "05f32467e54e3d7ed1547477aa0db7da", "text": "Deep neural networks with several layers have during the last years become a highly successful and popular research topic in machine learning due to their excellent performance in many benchmark problems and applications. A key idea in deep learning is to not only learn the nonlinear mapping between the inputs and outputs, but also the underlying structure of the data (input) vectors. In this chapter, we first consider problems with training deep networks using backpropagation type algorithms. After this, we consider various structures used in deep learning, including restricted Boltzmann machines, deep belief networks, deep Boltzmann machines, and nonlinear autoencoders. In the later part of this chapter we discuss in more detail the recently developed neural autoregressive distribution estimator (NADE) and its variants.", "title": "" }, { "docid": "9665328d7993e2b1298a2c849c987979", "text": "The case study presented here, deals with the subject of second language acquisition making at the same time an effort to show as much as possible how L1 was acquired and the ways L1 affected L2, through the process of examining a Greek girl who has been exposed to the English language from the age of eight. Furthermore, I had the chance to analyze the method used by the frontistirio teachers and in what ways this method helps or negatively influences children regarding their performance in the four basic skills. We will evaluate the evidence acquired by the girl by studying briefly the basic theories provided by important figures in the field of L2. 
Finally, I will also include my personal suggestions and the improvement of the child’s abilities and I will state my opinion clearly.", "title": "" }, { "docid": "3482354f79c4185ad9d63412184ddce4", "text": "In this paper we address the problem of learning the Markov blanket of a quantity from data in an efficient manner Markov blanket discovery can be used in the feature selection problem to find an optimal set of features for classification tasks, and is a frequently-used preprocessing phase in data mining, especially for high-dimensional domains. Our contribution is a novel algorithm for the induction of Markov blankets from data, called Fast-IAMB, that employs a heuristic to quickly recover the Markov blanket. Empirical results show that Fast-IAMB performs in many cases faster and more reliably than existing algorithms without adversely affecting the accuracy of the recovered Markov blankets.", "title": "" }, { "docid": "ee631c4cff3ff6ae99e1afa1ba4788d3", "text": "Teleoperation can be improved if humans and robots work as partners, exchanging information and assisting one another to achieve common goals. In this paper, we discuss the importance of collaboration and dialogue in human-robot systems. We then present collaborative control, a system model in which human and robot collaborate, and describe its use in vehicle teleoperation.", "title": "" }, { "docid": "7d646444717ad4c7d2c208e6dca31991", "text": "Self-interruptions account for a significant portion of task switching in information-centric work contexts. However, most of the research to date has focused on understanding, analyzing and designing for external interruptions. The causes of self-interruptions are not well understood. In this paper we present an analysis of 889 hours of observed task switching behavior from 36 individuals across three high-technology information work organizations. Our analysis suggests that self-interruption is a function of organizational environment and individual differences, but also external interruptions experienced. We find that people in open office environments interrupt themselves at a higher rate. We also find that people are significantly more likely to interrupt themselves to return to solitary work associated with central working spheres, suggesting that self-interruption occurs largely as a function of prospective memory events. The research presented contributes substantially to our understanding of attention and multitasking in context.", "title": "" }, { "docid": "dfc0f23dbb0a0556f53f5a913b936c8f", "text": "Neural network-based methods represent the state-of-the-art in question generation from text. Existing work focuses on generating only questions from text without concerning itself with answer generation. Moreover, our analysis shows that handling rare words and generating the most appropriate question given a candidate answer are still challenges facing existing approaches. We present a novel two-stage process to generate question-answer pairs from the text. For the first stage, we present alternatives for encoding the span of the pivotal answer in the sentence using Pointer Networks. In our second stage, we employ sequence to sequence models for question generation, enhanced with rich linguistic features. Finally, global attention and answer encoding are used for generating the question most relevant to the answer. We motivate and linguistically analyze the role of each component in our framework and consider compositions of these. 
This analysis is supported by extensive experimental evaluations. Using standard evaluation metrics as well as human evaluations, our experimental results validate the significant improvement in the quality of questions generated by our framework over the state-of-the-art. The technique presented here represents another step towards more automated reading comprehension assessment. We also present a live system to demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "b199e6484b6bf595e1736bd36115ce81", "text": "Personality traits are increasingly being incorporated in systems to provide a personalized experience to the user. Current work focusing on identifying the relationship between personality and behavior, preferences, and needs often do not take into account differences between age groups. With music playing an important role in our lives, differences between age groups may be especially prevalent. In this work we investigate whether differences exist in music listening behavior between age groups. We analyzed a dataset with the music listening histories and personality information of 1415 users. Our results show agreements with prior work that identified personality-based music listening preferences. However, our results show that the agreements we found are in some cases divided over different age groups, whereas in other cases additional correlations were found within age groups. With our results personality-based systems can provide better music recommendations that is in line with the user’s age.", "title": "" }, { "docid": "e1a8e53f184f58ff80ef528584c59907", "text": "Abstract A number of important insights into the peopling of the New World have been gained through molecular genetic studies of Siberian and Native American populations. These data indicate that the initial migration of ancestral Amerindian originated in south-central Siberia and entered the New World between 20,000–14,000 calendar years before present (cal yr BP). These early immigrants probably followed a coastal route into the New World, where they expanded into all continental regions. A second migration that may have come from the same Siberian region entered the Americas somewhat later, possibly using an interior route, and genetically contributed to indigenous populations from North and Central America. In addition, Beringian populations moved into northern North America after the last glacial maximum (LGM) and gave rise to Aleuts, Eskimos, and Na-Dené Indians.", "title": "" } ]
scidocsrr
162e056223565973a0a5aef8fbb64bae
OntoMetrics: Application of On-line Ontology Metric Calculation
[ { "docid": "0e758ff82eae43d705b6fde249b29998", "text": "The continued growth of the World Wide Web makes the retrieval of relevant information for a user’s query increasingly difficult. Current search engines provide the user with many web pages, but varying levels of relevancy. In response, the Semantic Web has been proposed to retrieve and use more semantic information from the web. Our prior research has developed a Semantic Retrieval System to automate the processing of a user’s query while taking into account the query’s context. The system uses WordNet and the DARPA Agent Markup Language (DAML) ontologies to act as surrogates for understanding the context of terms in a user’s query. Like other applications that use ontologies, our system relies on using ‘good’ ontologies. This research draws upon semiotic theory to develop a suite of metrics that assess the syntactic, semantic, pragmatic, and social aspects of ontology quality. We operationalize the metrics and implement them in a prototype tool called the “Ontology Auditor.” An initial validation of the Ontology Auditor on the DAML library of domain ontologies indicates that the metrics are feasible and highlight the wide variations in quality among ontologies in the library. Acknowledgments The authors wish to thank Xinlin Tang and Sunyoung Cho for comments on a previous draft. This research was supported by Oakland University and by Georgia State University.", "title": "" } ]
[ { "docid": "72a1798a864b4514d954e1e9b6089ad8", "text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.", "title": "" }, { "docid": "e5a4f3c3029ba5b5009f290ba3add393", "text": "ALSTOM Power Inc.’s US Power Plant Laboratories (ALSTOM) has teamed with American Electric Power (AEP), ABB Lummus Global Inc. (ABB), the US Department of Energy National Energy Technology Laboratory (DOE), and the Ohio Coal Development Office (OCDO) to conduct a comprehensive study evaluating the technical feasibility and economics of alternate CO2 capture and sequestration technologies applied to an existing US coal-fired electric generation power plant. Three retrofit technology concepts are being evaluated, namely: • Concept A: Coal combustion in air, followed by CO2 separation with Kerr-McGee/ABB Lummus Global’s commercial MEA-based absorption/stripping process • Concept B: Coal combustion with O2 firing and flue gas recycle • Concept C: Coal Combustion in air with Oxygen Removal and CO2 Separation by Tertiary Amines Each of these technologies is being evaluated against a baseline case and CO2 tax options from the standpoints of performance and impacts on power generating cost. A typical existing US domestic pulverized coal fired power plant is being used in this evaluation. Specifically, AEP’s 450 MW Conesville Unit No. 5, located in Conesville, Ohio is the power plant case study. All technical performance and cost results associated with these options are being evaluated in comparative manner. These technical and economic issues being evaluated include: • Boiler performance and plant efficiency • Purity of O2 produced and flue gas recycled • Heat transfer into the radiant and convective sections of the boiler • NOX, SO2, CO and unburned carbon emissions • Heat transfer surface materials • Steam temperature control • Boiler and Steam Cycle modifications • Electrostatic Precipitator system performance • Flue Gas Desulfurization system performance • Plant systems integration and control • Retrofit investment cost and cost of electricity (COE) ALSTOM is managing and performing the subject study from its US Power Plant Laboratories office in Windsor, CT. ABB, from its offices in Houston, Texas, is participating as a sub-contractor. 
AEP is participating by offering their Conesville Generating Station as the case study and cost sharing consultation, and relevant technical and cost data. AEP is one of the largest US utilities and as the largest consumer of Ohio coal is bringing considerable value to the project. Similarly, ALSTOM and ABB are well established as global leaders in the design and manufacturing of steam generating equipment, petrochemical and CO2 separation technology. ALSTOM’s world leaders in providing equipment and services for boilers and power plant environmental control, respectively, and are providing their expertise to this project. The DOE National Energy Technology Laboratory and the Ohio Coal Development Office provided consultation and funding. All participants contributed to the cost share of this project. The motivation for this study was to provide input to potential US electric utility actions to meet Kyoto protocol targets. If the US decides to reduce CO2 emissions consistent with the Kyoto protocol, action would need to be taken to address existing power plants. Although fuel switching to gas may be a likely scenario, it will not be a sufficient measure and some form of CO2 capture for use or disposal may also be required. The output of this CO2 capture study will enhance the public’s understanding of control options and influence decisions and actions by government, regulators, and power plant owners to reduce their greenhouse gas CO2 emissions.", "title": "" }, { "docid": "d3fc62a9858ddef692626b1766898c9f", "text": "In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.", "title": "" }, { "docid": "ba4121003eb56d3ab6aebe128c219ab7", "text": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.", "title": "" }, { "docid": "f6677bda56105cfaa932cdfdace764eb", "text": "We construct a segmentation scheme that combines top-down with bottom-up processing. In the proposed scheme, segmentation and recognition are intertwined rather than proceeding in a serial manner. 
The top-down part applies stored knowledge about object shapes acquired through learning, whereas the bottom-up part creates a hierarchy of segmented regions based on uniformity criteria. Beginning with unsegmented training examples of class and non-class images, the algorithm constructs a bank of class-specific fragments and determines their figure-ground segmentation. This bank is then used to segment novel images in a top-down manner: the fragments are first used to recognize images containing class objects, and then to create a complete cover that best approximates these objects. The resulting segmentation is then integrated with bottom-up multi-scale grouping to better delineate the object boundaries. Our experiments, applied to a large set of four classes (horses, pedestrians, cars, faces), demonstrate segmentation results that surpass those achieved by previous top-down or bottom-up schemes. The main novel aspects of this work are the fragment learning phase, which efficiently learns the figure-ground labeling of segmentation fragments, even in training sets with high object and background variability; combining the top-down segmentation with bottom-up criteria to draw on their relative merits; and the use of segmentation to improve recognition.", "title": "" }, { "docid": "875c6251102727b6bb94d16eb8b05a17", "text": "Learning to rank is a machine learning technique broadly used in many areas such as document retrieval, collaborative filtering or question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning to rank algorithm LambdaMART, when used for document retrieval for search engines, can be improved if standard regression trees are replaced by oblivious trees. This paper provides a comparison of both variants and our results demonstrate that the use of oblivious trees can improve the performance by more than 2.2%. Additional experimental analysis of the influence of a number of features and of a size of the training set is also provided and confirms the desirability of properties of oblivious decision trees.", "title": "" }, { "docid": "8d3a65d1dcf04773839a9ac4de0014ac", "text": "This paper proposes an energy-efficient deep inmemory architecture for NAND flash (DIMA-F) to perform machine learning and inference algorithms on NAND flash memory. Algorithms for data analytics, inference, and decision-making require processing of large data volumes and are hence limited by data access costs. DIMA-F achieves energy savings and throughput improvement for such algorithms by reading and processing data in the analog domain at the periphery of NAND flash memory. This paper also provides behavioral models of DIMA-F that can be used for analysis and large scale system simulations in presence of circuit non-idealities and variations. DIMA-F is studied in the context of linear support vector machines and knearest neighbor for face detection and recognition, respectively. An estimated 8×-to-23× reduction in energy and 9×-to-15× improvement in throughput resulting in EDP gains up to 345× over the conventional NAND flash architecture incorporating an external digital ASIC for computation.", "title": "" }, { "docid": "b5d3c7822f2ba9ca89d474dda5f180b6", "text": "We consider a class of a nested optimization problems involving inner and outer objectives. 
We observe that by taking into explicit account the optimization dynamics for the inner objective it is possible to derive a general framework that unifies gradient-based hyperparameter optimization and meta-learning (or learning-to-learn). Depending on the specific setting, the variables of the outer objective take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We show that some recently proposed methods in the latter setting can be instantiated in our framework and tackled with the same gradient-based algorithms. Finally, we discuss possible design patterns for learning-to-learn and present encouraging preliminary experiments for few-shot learning.", "title": "" }, { "docid": "05eb1af3e6838640b6dc5c1c128cc78a", "text": "Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly to those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabitlities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions.", "title": "" }, { "docid": "b6983a5ccdac40607949e2bfe2beace2", "text": "A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as \"p-hacking,\" occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.", "title": "" }, { "docid": "d757f4c2294092f0735a3d822c2b870c", "text": "This paper is concerned with the Multi-Objective Next Release Problem (MONRP), a problem in search-based requirements engineering. Previous work has considered only single objective formulations. In the multi-objective formulation, there are at least two (possibly conflicting) objectives that the software engineer wishes to optimize. It is argued that the multi-objective formulation is more realistic, since requirements engineering is characterised by the presence of many complex and conflicting demands, for which the software engineer must find a suitable balance. The paper presents the results of an empirical study into the suitability of weighted and Pareto optimal genetic algorithms, together with the NSGA-II algorithm, presenting evidence to support the claim that NSGA-II is well suited to the MONRP. 
The paper also provides benchmark data to indicate the size above which the MONRP becomes non--trivial.", "title": "" }, { "docid": "bda90d8f3b9cf98f714c1a4bfb7a9f61", "text": "Learning image similarity metrics in an end-to-end fashion with deep networks has demonstrated excellent results on tasks such as clustering and retrieval. However, current methods, all focus on a very local view of the data. In this paper, we propose a new metric learning scheme, based on structured prediction, that is aware of the global structure of the embedding space, and which is designed to optimize a clustering quality metric (NMI). We show state of the art performance on standard datasets, such as CUB200-2011 [37], Cars196 [18], and Stanford online products [30] on NMI and R@K evaluation metrics.", "title": "" }, { "docid": "bde7c16585b284ed9b6b0e54110deeee", "text": "BACKGROUND\nEpidemiological reports suggest that Asians consuming a diet high in soy have a low incidence of prostate cancer. In animal models, soy and genistein have been demonstrated to suppress the development of prostate cancer. In this study, we investigate the mechanism of action, bioavailability, and potential for toxicity of dietary genistein in a rodent model.\n\n\nMETHODS\nLobund-Wistar rats were fed a 0.025-1.0-mg genistein/g AIN-76A diet. The dorsolateral prostate was subjected to Western blot analysis for expression of tyrosine-phosphorylated proteins, and of the EGF and ErbB2/Neu receptors. Genistein concentrations were measured from serum and prostate using HPLC-mass spectrometry. Body and prostate weights, and circulating testosterone levels, were measured.\n\n\nRESULTS\nIncreasing concentrations of genistein in the diet inhibited tyrosine-phosphorylated proteins with molecular weights of 170,000 and 85,000 in the dorsolateral prostate. Western blot analysis revealed that the 1-mg genistein/g AIN-76A diet inhibited by 50% the expression of the EGF receptor and its phosphorylation. In rats fed this diet, serum-free and total genistein concentrations were 137 and 2,712 pmol/ml, respectively. The free and total genistein IC50 values for the EGF receptor were 150 and 600 pmol/g prostate tissue, respectively. Genistein in the diet also inhibited the ErbB2/Neu receptor. Body and dorsolateral prostate weights, and circulating testosterone concentrations, were not adversely effected from exposure to genistein in the diet for 3 weeks.\n\n\nCONCLUSIONS\nWe conclude that genistein in the diet can downregulate the EGF and ErbB2/Neu receptors in the rat prostate with no apparent adverse toxicity to the host. The concentration needed to achieve a 50% reduction in EGF receptor expression can be achieved by eating a diet high in soy products or with genistein supplementation. Genistein inhibition of the EGF signaling pathway suggests that this phytoestrogen may be useful in both protecting against and treating prostate cancer.", "title": "" }, { "docid": "6bea1d7242fc23ec8f462b1c8478f2c1", "text": "Determining a consensus opinion on a product sold online is no longer easy, because assessments have become more and more numerous on the Internet. To address this problem, researchers have used various approaches, such as looking for feelings expressed in the documents and exploring the appearance and syntax of reviews. 
Aspect-based evaluation is the most important aspect of opinion mining, and researchers are becoming more interested in product aspect extraction; however, more complex algorithms are needed to address this issue precisely with large data sets. This paper introduces a method to extract and summarize product aspects and corresponding opinions from a large number of product reviews in a specific domain. We maximize the accuracy and usefulness of the review summaries by leveraging knowledge about product aspect extraction and providing both an appropriate level of detail and rich representation capabilities. The results show that the proposed system achieves F1-scores of 0.714 for camera reviews and 0.774 for laptop reviews.", "title": "" }, { "docid": "e389bed063035d3e9160d3136d2729a0", "text": "We introduce and construct timed commitment schemes, an extension to the standard notion of commitments in which a potential forced opening phase permits the receiver to recover (with effort) the committed value without the help of the committer. An important application of our timed-commitment scheme is contract signing: two mutually suspicious parties wish to exchange signatures on a contract. We show a two-party protocol that allows them to exchange RSA or Rabin signatures. The protocol is strongly fair: if one party quits the protocol early, then the two parties must invest comparable amounts of time to retrieve the signatures. This statement holds even if one party has many more machines than the other. Other applications, including honesty preserving auctions and collective coin-flipping, are discussed.", "title": "" }, { "docid": "127434902fe337d104929cd95db42def", "text": "Formal concepts and closed itemsets proved to be of big importance for knowledge discovery, both as a tool for concise representation of association rules and a tool for clustering and constructing domain taxonomies and ontologies. Exponential explosion makes it difficult to consider the whole concept lattice arising from data, one needs to select most useful and interesting concepts. In this paper interestingness measures of concepts are considered and compared with respect to various aspects, such as efficiency of computation and applicability to noisy data and performing ranking correlation. Formal Concept Analysis intrestingess measures closed itemsets", "title": "" }, { "docid": "99d9dcef0e4441ed959129a2a705c88e", "text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. 
To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions1 defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. Email addresses: [email protected] (Daniel Rinser), [email protected] (Dustin Lange), [email protected] (Felix Naumann) 1as of March 2011 Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that reference the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. 
Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. However, in the setting of Wikipedia infoboxes these Preprint submitted to Information Systems October 19, 2012 Figure 1: A mapping between the English and German infoboxes for Berlin techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages. This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2. (1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs. As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 
5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions", "title": "" }, { "docid": "e8f431676ed0a85cb09a6462303a3ec7", "text": "This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potential noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese – English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It’s freely available to the public.", "title": "" }, { "docid": "43db7c431cac1afd33f48774ee0dbc61", "text": "We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volume of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of “quality”. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions. Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs in average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NPhard, the linear time is obtained by trading some quality. We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the “optimal” in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web.", "title": "" }, { "docid": "d9214591462b0780ede6d58dab42f48c", "text": "Software testing in general and graphical user interface (GUI) testing in particular is one of the major challenges in the lifecycle of any software system. GUI testing is inherently more difficult than the traditional and command-line interface testing. Some of the factors that make GUI testing different from the traditional software testing and significantly more difficult are: a large number of objects, different look and feel of objects, many parameters associated with each object, progressive disclosure, complex inputs from multiple sources, and graphical outputs. The existing testing techniques for the creation and management of test suites need to be adapted/enhanced for GUIs, and new testing techniques are desired to make the creation and management of test suites more efficient and effective. In this article, a methodology is proposed to create test suites for a GUI. 
The proposed methodology organizes the testing activity into various levels. The tests created at a particular level can be reused at higher levels. This methodology extends the notions of modularity and reusability to the testing phase. The organization and management of the created test suites closely resemble the structure of the GUI under test.", "title": "" } ]
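The Wikipedia infobox passage in the record above matches attributes across language editions by comparing their values over article pairs that describe the same real-world entity. The sketch below illustrates that duplicate-based scoring idea; the similarity measure, threshold, and toy data are assumptions standing in for the paper's robust mixed-type measure, not its actual implementation.

```python
from collections import defaultdict
from difflib import SequenceMatcher


def value_similarity(a: str, b: str) -> float:
    # Stand-in for the paper's robust mixed-type similarity measure.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match_attributes(article_pairs, threshold=0.6):
    """Score cross-language attribute pairs by how often their values agree
    across article pairs describing the same entity."""
    similar = defaultdict(int)   # (attr_a, attr_b) -> pairs with similar values
    observed = defaultdict(int)  # (attr_a, attr_b) -> pairs where both attributes occur
    for box_a, box_b in article_pairs:
        for attr_a, val_a in box_a.items():
            for attr_b, val_b in box_b.items():
                observed[(attr_a, attr_b)] += 1
                if value_similarity(str(val_a), str(val_b)) >= threshold:
                    similar[(attr_a, attr_b)] += 1
    scores = {pair: similar[pair] / observed[pair] for pair in observed}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Toy usage: one matched English/German infobox pair (values taken from the passage's Berlin example).
pairs = [(
    {"Population density": "3,857.6/km2", "Governing parties": "SPD/Die Linke"},
    {"Bevölkerungsdichte": "3.857 Einw. je km2", "Reg. Parteien": "SPD und Die Linke"},
)]
print(match_attributes(pairs))
```

In practice the ranked pairs would then be thresholded or verified manually, as the paper does against hand-labeled correspondences.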
scidocsrr
eb2379ee086a2f4d8cb5712b0b0cd3e5
Optimal Schedule of Mobile Edge Computing for Internet of Things Using Partial Information
[ { "docid": "8d6b3e28ba335f2c3c98d18994610319", "text": "We study a sensor node with an energy harvesting source. The generated energy can be stored in a buffer. The sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at that time. We obtain energy management policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare performance of several easily implementable sub-optimal energy management policies. A greedy policy is identified which, in low SNR regime, is throughput optimal and also minimizes mean delay.", "title": "" }, { "docid": "07e57751ca031fd818179e380b00ccc4", "text": "In this paper, we consider the problem of energy efficient scheduling under average delay constraint for a single user fading channel. We propose a new approach for on-line implementation of the optimal packet scheduling algorithm. This approach is based on reformulating the value iteration equation by introducing a virtual state called post-decision state. The resultant value iteration equation becomes amenable to online implementation based on stochastic approximation. This approach has an advantage that an explicit knowledge of the probability distribution of the channel state as well as the arrivals is not required for the implementation. We prove that the on-line algorithm indeed converges to the optimal policy.", "title": "" } ]
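The two passages above study a data queue fed by sensed packets and drained using harvested energy, and note that a simple greedy policy performs well (throughput-optimal and near delay-optimal in the low-SNR regime). The toy simulation below illustrates such a greedy policy; the rate law, arrival and harvesting statistics, and buffer model are illustrative assumptions, not the models analyzed in the papers.

```python
import math
import random


def rate(energy: float) -> float:
    # Toy concave rate law (bits per slot for a given energy expenditure);
    # a placeholder for the actual fading-channel model.
    return math.log(1.0 + energy)


def simulate_greedy(slots=10_000, harvest_mean=1.0, arrival_mean=0.5,
                    buffer_cap=10.0, seed=0):
    """Greedy policy: spend all stored energy in every slot and serve as much
    of the data queue as the resulting rate allows."""
    rng = random.Random(seed)
    energy, backlog, samples = 0.0, 0.0, []
    for _ in range(slots):
        energy = min(buffer_cap, energy + rng.expovariate(1.0 / harvest_mean))
        backlog += rng.expovariate(1.0 / arrival_mean)   # sensed bits arriving this slot
        backlog -= min(backlog, rate(energy))            # transmit greedily
        energy = 0.0
        samples.append(backlog)
    return sum(samples) / len(samples)


print("mean backlog under the greedy policy:", simulate_greedy())
```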
[ { "docid": "5c0994fab71ea871fad6915c58385572", "text": "We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.", "title": "" }, { "docid": "185c637bec8ecaa828f62b75edb3741c", "text": "Up until now we have learned that a rotation in R3 about an axis through the origin can be represented by a 3×3 orthogonal matrix with determinant 1. However, the matrix representation seems redundant because only four of its nine elements are independent. Also the geometric interpretation of such a matrix is not clear until we carry out several steps of calculation to extract the rotation axis and angle. Furthermore, to compose two rotations, we need to compute the product of the two corresponding matrices, which requires twenty-seven multiplications and eighteen additions. Quaternions are very efficient for analyzing situations where rotations in R3 are involved. A quaternion is a 4-tuple, which is a more concise representation than a rotation matrix. Its geometric meaning is also more obvious as the rotation axis and angle can be trivially recovered. The quaternion algebra to be introduced will also allow us to easily compose rotations. This is because quaternion composition takes merely sixteen multiplications and twelve additions. The development of quaternions is attributed to W. R. Hamilton [5] in 1843. Legend has it that Hamilton was walking with his wife Helen at the Royal Irish Academy when he was suddenly struck by the idea of adding a fourth dimension in order to multiply triples. Excited by this breakthrough, as the couple passed the Broome Bridge of the Royal Canal, he carved the newfound quaternion equations", "title": "" }, { "docid": "93b3c8cd0a1c5f1d0112115e1c556b46", "text": "Graph processing is important for a growing range of applications. Current performance studies of parallel graph computation employ a large variety of algorithms and graphs. To explore their robustness, we characterize behavior variation across algorithms and graph structures at different scales. Our results show that graph computation behaviors, with up to 1000-fold variation, form a very broad space. 
Any inefficient exploration of this space may lead to narrow understanding and ad-hoc studies. Hence, we consider constructing an ensemble of graph computations, or graph-algorithm pairs, to most effectively explore this graph computation behavior space. We study different ensembles of parallel graph computations, and define two metrics to quantify how efficiently and completely an ensemble explores the space. Our results show that: (1) experiments limited to a single algorithm or a single graph may unfairly characterize a graph-processing system, (2) benchmarks exploring both algorithm and graph diversity can significantly improve the quality (30% more complete and 200% more efficient), but must be carefully chosen, (3) some algorithms are more useful than others in benchmarking, and (4) we can reduce the complexity (number of algorithms, graphs, runtime) while conserving the benchmarking quality.", "title": "" }, { "docid": "b3235d925a1f452ee5ed97cac709b9d4", "text": "Xiaoming Zhai is a doctoral student in the Department of Physics, Beijing Normal University, and is a visiting scholar in the College of Education, University of Washington. His research interests include physics assessment and evaluation, as well as technology-supported physics instruction. He has been a distinguished high school physics teacher who won numerous nationwide instructional awards. Meilan Zhang is an instructor in the Department of Teacher Education at University of Texas at El Paso. Her research focuses on improving student learning using mobile technology, understanding Internet use and the digital divide using big data from Internet search trends and Web analytics. Min Li is an Associate Professor in the College of Education, University of Washington. Her expertise is science assessment and evaluation, and quantitative methods. Address for correspondence: Xiaoming Zhai, Department of Physics, Beijing Normal University, Room A321, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China. Email: [email protected]", "title": "" }, { "docid": "df487337795d03d8538024aedacbbbe9", "text": "This study aims to make an inquiry regarding the advantages and challenges of integrating augmented reality (AR) into the library orientation programs of academic/research libraries. With the vast number of emerging technologies that are currently being introduced to the library world, it is essential for academic librarians to fully utilize these technologies to their advantage. However, it is also of equal importance for them to first make careful analysis and research before deciding whether to adopt a certain technology or not. AR offers a strategic medium through which librarians can attach digital information to real-world objects and simply let patrons interact with them. It is a channel that librarians can utilize in order to disseminate information and guide patrons in their studies or researches. And while it is expected for AR to grow tremendously in the next few years, it becomes more inevitable for academic librarians to acquire related IT skills in order to further improve the services they offer in their respective colleges and universities. The study shall employ the pragmatic approach to research, conducting an extensive review of available literature on AR as used in academic libraries, designing a prototype to illustrate how AR can be integrated to an existing library orientation program, and performing surveys and interviews on patrons and librarians who used it. 
This study can serve as a guide in order for academic librarians to assess whether implementing AR in their respective libraries will be beneficial to them or not.", "title": "" }, { "docid": "970a1c802a4c731c3fcb03855d5cfb8c", "text": "Visual prior from generic real-world images can be learned and transferred for representing objects in a scene. Motivated by this, we propose an algorithm that transfers visual prior learned offline for online object tracking. From a collection of real-world images, we learn an overcomplete dictionary to represent visual prior. The prior knowledge of objects is generic, and the training image set does not necessarily contain any observation of the target object. During the tracking process, the learned visual prior is transferred to construct an object representation by sparse coding and multiscale max pooling. With this representation, a linear classifier is learned online to distinguish the target from the background and to account for the target and background appearance variations over time. Tracking is then carried out within a Bayesian inference framework, in which the learned classifier is used to construct the observation model and a particle filter is used to estimate the tracking result sequentially. Experiments on a variety of challenging sequences with comparisons to several state-of-the-art methods demonstrate that more robust object tracking can be achieved by transferring visual prior.", "title": "" }, { "docid": "3003d3b353a2e6edf4a9c8008b1be8a0", "text": "An important issue faced while employing Pyroelectric InfraRed (PIR) sensors in an outdoor Wireless Sensor Network (WSN) deployment for intrusion detection, is that the output of the PIR sensor can, as shown in a recent paper, degenerate into a weak and unpredictable signal when the background temperature is close to that of the intruder. The current paper explores the use of an optical camera as a complementary sensing modality in an outdoor WSN deployment to reliably handle such situations. A combination of backgroundsubtraction and the Lucas-Kanade optical-flow algorithms is used to classify between human and animal in an outdoor environment based on video data.,,The algorithms were developed keeping in mind the need for the camera to act when called upon, as a substitute for the PIR sensor by turning in comparable classification accuracies. All algorithms are implemented on a mote in the case of the PIR sensor array and on an Odroid single-board computer in the case of the optical camera. Three sets of experimental results are presented. The first set shows the optical-camera platform to turn in under supervised learning, high accuracy classification (in excess of 95%) comparable to that of the PIR sensor array. The second set of results correspond to an outdoor WSN deployment over a period of 7 days where similar accuracies are achieved. The final set also corresponds to a single-day outdoor WSN deployment and shows that the optical camera can act as a stand-in for the PIR sensor array when the ambient temperature conditions cause the PIR sensor to perform poorly.", "title": "" }, { "docid": "ea28d601dfbf1b312904e39802ce25b8", "text": "In this paper, we present the implementation and performance evaluation of security functionalities at the link layer of IEEE 802.15.4-compliant IoT devices. Specifically, we implement the required encryption and authentication mechanisms entirely in software and as well exploit the hardware ciphers that are made available by our IoT platform. 
Moreover, we present quantitative results on the memory footprint, the execution time and the energy consumption of selected implementation modes and discuss some relevant tradeoffs. As expected, we find that hardware-based implementations are not only much faster, leading to latencies shorter than two orders of magnitude compared to software-based security suites, but also provide substantial savings in terms of ROM memory occupation, i.e. up to six times, and energy consumption. Furthermore, the addition of hardware-based security support at the link layer only marginally impacts the network lifetime metric, leading to worst-case reductions of just 2% compared to the case where no security is employed. This is due to the fact that energy consumption is dominated by other factors, including the transmission and reception of data packets and the control traffic that is required to maintain the network structures for routing and data collection. On the other hand, entirely software-based implementations are to be avoided as the network lifetime reduction in this case can be as high as 25%.", "title": "" }, { "docid": "98b4974b118ac3c6eabbd0edd98b638e", "text": "A system that performs text categorization aims to assign appropriate categories from a predefined classification scheme to incoming documents. These assignments might be used for varied purposes such as filtering, or retrieval. This paper introduces a new effective model for text categorization with great corpus (more or less 1 million documents). Text categorization is performed using the Kullback-Leibler distance between the probability distribution of the document to classify and the probability distribution of each category. Using the same representation of categories, experiments show a significant improvement when the above mentioned method is used. KLD method achieve substantial improvements over the tfidf performing method.", "title": "" }, { "docid": "e5b7402470ad6198b4c1ddb9d9878ea9", "text": "Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to (i) condition on their given profile information; and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown, our model is trained to engage its partner with personal topics, and we show the resulting dialogue can be used to predict profile information about the interlocutors.", "title": "" }, { "docid": "e2a6563ab28e2679dca05b60c908dfb3", "text": "Recently, deep generative models have revealed itself as a promising way of performing de novo molecule design. However, previous research has focused mainly on generating SMILES strings instead of molecular graphs. Although available, current graph generative models are are often too general and computationally expensive. In this work, a new de novo molecular design framework is proposed based on a type of sequential graph generators that do not use atom level recurrent units. Compared with previous graph generative models, the proposed method is much more tuned for molecule generation and has been scaled up to cover significantly larger molecules in the ChEMBL database. 
It is shown that the graph-based model outperforms SMILES based models in a variety of metrics, especially in the rate of valid outputs. For the application of drug design tasks, conditional graph generative model is employed. This method offers highe flexibility and is suitable for generation based on multiple objectives. The results have demonstrated that this approach can be effectively applied to solve several drug design problems, including the generation of compounds containing a given scaffold, compounds with specific drug-likeness and synthetic accessibility requirements, as well as dual inhibitors against JNK3 and GSK-3β.", "title": "" }, { "docid": "86ffd10b7f5f49f8e917be87cdbcb02d", "text": "Audit logs are considered good practice for business systems, and are required by federal regulations for secure systems, drug approval data, medical information disclosure, financial records, and electronic voting. Given the central role of audit logs, it is critical that they are correct and inalterable. It is not sufficient to say, “our data is correct, because we store all interactions in a separate audit log.” The integrity of the audit log itself must also be guaranteed. This paper proposes mechanisms within a database management system (DBMS), based on cryptographically strong one-way hash functions, that prevent an intruder, including an auditor or an employee or even an unknown bug within the DBMS itself, from silently corrupting the audit log. We propose that the DBMS store additional information in the database to enable a separate audit log validator to examine the database along with this extra information and state conclusively whether the audit log has been compromised. We show with an implementation on a high-performance storage engine that the overhead for auditing is low and that the validator can efficiently and correctly determine if the audit log has been compromised.", "title": "" }, { "docid": "486d31b962600141ba75dfde718f5b3d", "text": "The design, fabrication, and measurement of a coax to double-ridged waveguide launcher and horn antenna is presented. The novel launcher design employs two symmetric field probes across the ridge gap to minimize spreading inductance in the transition, and achieves better than 15 dB return loss over a 10:1 bandwidth. The aperture-matched horn uses a half-cosine transition into a linear taper for the outer waveguide dimensions and ridge width, and a power-law scaled gap to realize monotonically varying cutoff frequencies, thus avoiding the appearance of trapped mode resonances. It achieves a nearly constant beamwidth in both E- and H-planes for an overall directivity of about 16.5 dB from 10-100 GHz.", "title": "" }, { "docid": "b238ceff7cf19621a420494ac311b2dd", "text": "In this paper, we discuss the extension and integration of the statistical concept of Kernel Density Estimation (KDE) in a scatterplot-like visualization for dynamic data at interactive rates. We present a line kernel for representing streaming data, we discuss how the concept of KDE can be adapted to enable a continuous representation of the distribution of a dependent variable of a 2D domain. We propose to automatically adapt the kernel bandwith of KDE to the viewport settings, in an interactive visualization environment that allows zooming and panning. We also present a GPU-based realization of KDE that leads to interactive frame rates, even for comparably large datasets. 
Finally, we demonstrate the usefulness of our approach in the context of three application scenarios - one studying streaming ship traffic data, another one from the oil & gas domain, where process data from the operation of an oil rig is streaming in to an on-shore operational center, and a third one studying commercial air traffic in the US spanning 1987 to 2008.", "title": "" }, { "docid": "8584fc5cbd280874da5cebe016def0fa", "text": "This paper considers the problem of mining closed frequent itemsets over a data stream sliding window using limited memory space. We design a synopsis data structure to monitor transactions in the sliding window so that we can output the current closed frequent itemsets at any time. Due to time and memory constraints, the synopsis data structure cannot monitor all possible itemsets. However, monitoring only frequent itemsets will make it impossible to detect new itemsets when they become frequent. In this paper, we introduce a compact data structure, the closed enumeration tree (CET), to maintain a dynamically selected set of itemsets over a sliding window. The selected itemsets contain a boundary between closed frequent itemsets and the rest of the itemsets. Concept drifts in a data stream are reflected by boundary movements in the CET. In other words, a status change of any itemset (e.g., from non-frequent to frequent) must occur through the boundary. Because the boundary is relatively stable, the cost of mining closed frequent itemsets over a sliding window is dramatically reduced to that of mining transactions that can possibly cause boundary movements in the CET. Our experiments show that our algorithm performs much better than representative algorithms for the sate-of-the-art approaches.", "title": "" }, { "docid": "15852fff036f959b5aeeeb393c5896f8", "text": "This chapter introduces deep density models with latent variables which are based on a greedy layer-wise unsupervised learning algorithm. Each layer of the deep models employs a model that has only one layer of latent variables, such as the Mixtures of Factor Analyzers (MFAs) and the Mixtures of Factor Analyzers with Common Loadings (MCFAs). As the background, MFAs and MCFAs approaches are reviewed. By the comparison between these two approaches, sharing the common loading is more physically meaningful since the common loading is regarded as a kind of feature selection or reduction matrix. Importantly, MCFAs can remarkably reduce the number of free parameters than MFAs. Then the deep models (deep MFAs and deep MCFAs) and their inferences are described, which show that the greedy layer-wise algorithm is an efficient way to learn deep density models and the deep architectures can be much more efficient (sometimes exponentially) than shallow architectures. The performance is evaluated between two shallow models, and two deep models separately on both density estimation and clustering. Furthermore, the deep models are also compared with their shallow counterparts.", "title": "" }, { "docid": "23583b155fc8ec3301cfef805f568e57", "text": "We address the problem of covering an environment with robots equipped with sensors. The robots are heterogeneous in that the sensor footprints are different. Our work uses the location optimization framework in with three significant extensions. First, we consider robots with different sensor footprints, allowing, for example, aerial and ground vehicles to collaborate. We allow for finite size robots which enables implementation on real robotic systems. 
Lastly, we extend the previous work allowing for deployment in non convex environments.", "title": "" }, { "docid": "1fefa4074f1abdded36baa3425752490", "text": "With the popularization and development of network knowledge, network intruders are increasing, and the attack mode has been updated. Intrusion detection technology is a kind of active defense technology, which can extract the key information from the network system, and quickly judge and protect the internal or external network intrusion. Intrusion detection is a kind of active security technology, which provides real-time protection for internal attacks, external attacks and misuse, and it plays an important role in ensuring network security. However, with the diversification of intrusion technology, the traditional intrusion detection system cannot meet the requirements of the current network security. Therefore, the implementation of intrusion detection needs diversifying. In this context, we apply neural network technology to the network intrusion detection system to solve the problem. In this paper, on the basis of intrusion detection method, we analyze the development history and the present situation of intrusion detection technology, and summarize the intrusion detection system overview and architecture. The neural network intrusion detection is divided into data acquisition, data analysis, pretreatment, intrusion behavior detection and testing.", "title": "" }, { "docid": "f4075ef96ed2d20cbd8615a7ffec0f8e", "text": "The objective of this paper is to control the speed of Permanent Magnet Synchronous Motor (PMSM) over wide range of speed by consuming minimum time and low cost. Therefore, comparative performance analysis of PMSM on basis of speed regulation has been done in this study. Comparison of two control strategies i.e. Field oriented control (FOC) without sensor less Model Reference Adaptive System (MRAS) and FOC with sensor less MRAS has been carried out. Sensor less speed control of PMSM is achieved by using estimated speed deviation as feedback signal for the PI controller. Performance of the both control strategies has been evaluated in in MATLAB Simulink software. Simulation studies show the response of PMSM speed during various conditions of load and speed variations. Obtained results reveal that the proposed MRAS technique can effectively estimate the speed of rotor with high exactness and torque response is significantly quick as compared to the system without MRAS control system.", "title": "" }, { "docid": "6c3d34e1a7ab24493a79e938fb67ebec", "text": "The need to enhance the sustainability of intensive agricultural systems is widely recognized One promising approach is to encourage beneficial services provided by soil microorganisms to decrease the inputs of fertilizers and pesticides. However, limited success of this approach in field applications raises questions as to how this might be best accomplished. We highlight connections between root exudates and the rhizosphere microbiome, and discuss the possibility of using plant exudation characteristics to selectively enhance beneficial microbial activities and microbiome characteristics. Gaps in our understanding and areas of research that are vital to our ability to more fully exploit the soil microbiome for agroecosystem productivity and sustainability are also discussed. 
This article outlines strategies for more effectively exploiting beneficial microbial services in agricultural systems, and calls attention to topics that require additional research.", "title": "" } ]
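The text-categorization passage in the record above classifies an incoming document by the Kullback-Leibler distance between its term distribution and the distribution of each category. A minimal sketch of that idea follows; the Laplace smoothing, tokenization, and toy categories are assumptions for illustration rather than details taken from the paper.

```python
import math
from collections import Counter


def smoothed_distribution(tokens, vocab, alpha=1.0):
    # Laplace-smoothed unigram distribution over a shared vocabulary.
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}


def kl_divergence(p, q):
    # D(p || q) over the shared vocabulary; smoothing keeps q strictly positive.
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)


def categorize(doc_tokens, category_tokens):
    """Assign the category whose token distribution is closest (in KL distance)
    to the distribution of the incoming document."""
    vocab = set(doc_tokens).union(*map(set, category_tokens.values()))
    p_doc = smoothed_distribution(doc_tokens, vocab)
    return min(category_tokens,
               key=lambda c: kl_divergence(
                   p_doc, smoothed_distribution(category_tokens[c], vocab)))


categories = {"sports": "match goal team score player".split(),
              "finance": "market stock price investor bank".split()}
print(categorize("the team scored a late goal".split(), categories))
```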
scidocsrr
2c4dea171f6841f24184d4795131058f
Learning to Search Better Than Your Teacher
[ { "docid": "ceac1f5535c88ed5dc947d32b17185c4", "text": "This paper describes an incremental parsing approach where parameters are estimated using a variant of the perceptron algorithm. A beam-search algorithm is used during both training and decoding phases of the method. The perceptron approach was implemented with the same feature set as that of an existing generative model (Roark, 2001a), and experimental results show that it gives competitive performance to the generative model on parsing the Penn treebank. We demonstrate that training a perceptron model to combine with the generative model during search provides a 2.1 percent F-measure improvement over the generative model alone, to 88.8 percent.", "title": "" }, { "docid": "c3b691cd3671011278ecd30563b27245", "text": "We formalize weighted dependency parsing as searching for maximum spanning trees (MSTs) in directed graphs. Using this representation, the parsing algorithm of Eisner (1996) is sufficient for searching over all projective trees in O(n^3) time. More surprisingly, the representation is extended naturally to non-projective parsing using the Chu-Liu-Edmonds (Chu and Liu, 1965; Edmonds, 1967) MST algorithm, yielding an O(n^2) parsing algorithm. We evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005) and show that MST parsing increases efficiency and accuracy for languages with non-projective dependencies.", "title": "" } ]
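Both positive passages above train parsers with perceptron-style updates over structured outputs. The sketch below shows the generic structured-perceptron loop they build on, with the decoder and feature extractor left abstract; weight averaging and the specific beam-search or MST decoding used in the papers are omitted and would be supplied by the caller.

```python
from collections import defaultdict


def tree_score(weights, features):
    # Linear score of a candidate parse under sparse features;
    # a decoder would typically search for the tree maximizing this.
    return sum(weights[f] * v for f, v in features.items())


def perceptron_train(sentences, gold_trees, decode, feats, epochs=5):
    """Skeletal structured-perceptron trainer.

    `decode(sentence, weights)` must return the highest-scoring tree under the
    current weights (e.g. a beam-search or MST/Eisner decoder), and
    `feats(sentence, tree)` a sparse feature dict; both are left abstract here.
    """
    weights = defaultdict(float)
    for _ in range(epochs):
        for sentence, gold in zip(sentences, gold_trees):
            predicted = decode(sentence, weights)
            if predicted != gold:
                for f, v in feats(sentence, gold).items():
                    weights[f] += v          # promote gold-tree features
                for f, v in feats(sentence, predicted).items():
                    weights[f] -= v          # demote predicted-tree features
    return weights
```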
[ { "docid": "531ebcdbcfc606d315fac7ce7042c0b4", "text": "This paper reviews the potential for using trees for the phytoremediation of heavy metal-contaminated land. It considers the following aspects: metal tolerance in trees, heavy metal uptake by trees grown on contaminated substrates, heavy metal compartmentalisation within trees, phytoremediation using trees and the phytoremediation potential of willow (Salix spp.).", "title": "" }, { "docid": "f0c0bbb0282d76da7146e05f4a371843", "text": "We have proposed a claw pole type half-wave rectified variable field flux motor (CP-HVFM) with special self-excitation method. The claw pole rotor needs the 3D magnetic path core. This paper reports an analysis method with experimental BH and loss data of the iron powder core for FEM. And it shows a designed analysis model and characteristics such as torque, efficiency and loss calculation results.", "title": "" }, { "docid": "5762adf6fc9a0bf6da037cdb10191400", "text": "Graphics Processing Unit (GPU) virtualization is an enabling technology in emerging virtualization scenarios. Unfortunately, existing GPU virtualization approaches are still suboptimal in performance and full feature support. This paper introduces gVirt, a product level GPU virtualization implementation with: 1) full GPU virtualization running native graphics driver in guest, and 2) mediated pass-through that achieves both good performance and scalability, and also secure isolation among guests. gVirt presents a virtual full-fledged GPU to each VM. VMs can directly access performance-critical resources, without intervention from the hypervisor in most cases, while privileged operations from guest are trap-and-emulated at minimal cost. Experiments demonstrate that gVirt can achieve up to 95% native performance for GPU intensive workloads, and scale well up to 7 VMs.", "title": "" }, { "docid": "4ea7482524661175e8268c15eb22a6ae", "text": "We present a fully unsupervised, extractive text summarization system that leverages a submodularity framework introduced by past research. The framework allows summaries to be generated in a greedy way while preserving near-optimal performance guarantees. Our main contribution is the novel coverage reward term of the objective function optimized by the greedy algorithm. This component builds on the graph-of-words representation of text and the k-core decomposition algorithm to assign meaningful scores to words. We evaluate our approach on the AMI and ICSI meeting speech corpora, and on the DUC2001 news corpus. We reach state-of-the-art performance on all datasets. Results indicate that our method is particularly well-suited to the meeting domain.", "title": "" }, { "docid": "904278b251c258d1dac9b652dcd7ee82", "text": "This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction. With the outsourcing of small tasks becoming easier, for example via Rent-A-Coder or Amazon's Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i) Repeated-labeling can improve label quality and model quality, but not always. 
(ii) When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii) As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv) Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a robust technique that combines different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial.", "title": "" }, { "docid": "33cab03ab9773efe22ba07dd461811ef", "text": "This paper describes a real-time feature-based stereo SLAM system that is robust and accurate in a wide variety of conditions –indoors, outdoors, with dynamic objects, changing light conditions, fast robot motions and large-scale loops. Our system follows a parallel-tracking-and-mapping strategy: a tracking thread estimates the camera pose at frame rate; and a mapping thread updates a keyframe-based map at a lower frequency. The stereo constraints of our system allow a robust initialization –avoiding the well-known bootstrapping problem in monocular systems– and the recovery of the real scale. Both aspects are essential for its practical use in real robotic systems that interact with the physical world. In this paper we provide the implementation details, an exhaustive evaluation of the system in public datasets and a comparison of most state-of-the-art feature detectors and descriptors on the presented system. For the benefit of the community, its code for ROS (Robot Operating System) has been released.", "title": "" }, { "docid": "c9e47bfe0f1721a937ba503ed9913dba", "text": "The Web contains a vast amount of structured information such as HTML tables, HTML lists and deep-web databases; there is enormous potential in combining and re-purposing this data in creative ways. However, integrating data from this relational web raises several challenges that are not addressed by current data integration systems or mash-up tools. First, the structured data is usually not published cleanly and must be extracted (say, from an HTML list) before it can be used. Second, due to the vastness of the corpus, a user can never know all of the potentially-relevant databases ahead of time (much less write a wrapper or mapping for each one); the source databases must be discovered during the integration process. Third, some of the important information regarding the data is only present in its enclosing web page and needs to be extracted appropriately. This paper describes Octopus, a system that combines search, extraction, data cleaning and integration, and enables users to create new data sets from those found on the Web. The key idea underlying Octopus is to offer the user a set of best-effort operators that automate the most labor-intensive tasks. 
For example, the Search operator takes a search-style keyword query and returns a set of relevance-ranked and similarity-clustered structured data sources on the Web; the Context operator helps the user specify the semantics of the sources by inferring attribute values that may not appear in the source itself, and the Extend operator helps the user find related sources that can be joined to add new attributes to a table. Octopus executes some of these operators automatically, but always allows the user to provide feedback and correct errors. We describe the algorithms underlying each of these operators and experiments that demonstrate their efficacy.", "title": "" }, { "docid": "b4138a3c89e89d402aa92190d25d3d59", "text": "The conotruncal anomaly face syndrome was described in a Japanese publication in 1976 and comprises dysmorphic facial appearance and outflow tract defects of the heart. The authors subsequently noted similarities to Shprintzen syndrome and DiGeorge syndrome. Chromosome analysis in five cases did not show a deletion at high resolution, but fluorescent in situ hybridisation using probe DO832 showed a deletion within chromosome 22q11 in all cases.", "title": "" }, { "docid": "54c8a8669b133e23035d93aabdc01a54", "text": "The proposed antenna topology is an interesting radiating element, characterized by broadband or multiband capabilities. The exponential and soft/tapered design of the edge transitions and feeding makes it a challenging item to design and tune, leading though to impressive results. The antenna is build on Rogers RO3010 material. The bands in which the antenna works are GPS and Galileo (1.57 GHz), UMTS (1.8–2.17 GHz) and ISM 2.4 GHz (Bluetooth WiFi). The purpose of such an antenna is to be embedded in an Assisted GPS (A-GPS) reference station. 
Such a device serves as a fix GPS reference distributing the positioning information to mobile device users and delivering at the same time services via GSM network standards or via Wi-Fi / Bluetooth connections.", "title": "" }, { "docid": "25f0c6f9f050c2a46a8f36b17a84c281", "text": "universal artificial intelligence sequential decisions towards a universal theory of artificial intelligence universal artificial intelligence sequential decisions towards a universal theory of arti?cial intelligence based universal artificial intelligence: sequential decisions universal artificial intelligence sequential decisions universal artificial intelligence researchgate universal artificial intelligence preamble one decade of universal artificial intelligence texts in theoretical computer science an eatcs series universal artificial intelligence marcus hutter algorithmic probability, part 1 of n qmul maths health data entanglement and artificial intelligence-based marcus hutter arxiv:cs/0701125v1 [cs] 20 jan 2007 the university mdmtv the program management office the program spzone summary of stephen roach on the next asia opportunities nursing leadership and management for patient safety and xbox 360 hard drive manual tapsey facing the world orthodox christian essays on global le arti e i lumi pittura e scultura da piranesi a canova civil service nj supervisor practice test guide anaqah healthcare ministryrefounding the mission in tumultuous supporting and educating traumatized students a guide for answer key to lab manual physical geology aadver directions in managing construction zaraa swine show at indiana state fair teleip w169 service manual foserv introductory psychology for nursing allied health sciences manual for beech duchess nfcqr great soccer team defense blwood lumber company case solution working capital binary option strategy guide torrent zaraa 454 chevy timing advance nfcqr corpse in the waxworks a monsieur bencolin mystery free pdf lumina service manual blwood universal algorithmic ethics arxiv citroen relay td van workshop manual birdz ejb 2 1 kick start boscos", "title": "" }, { "docid": "6c2b19b2888d00fccb1eae37352d653d", "text": "Between June 1985 and January 1987, the Therac-25 medical electron accelerator was involved in six massive radiation overdoses. As a result, several people died and others were seriously injured. A detailed investigation of the factors involved in the software-related overdoses and attempts by users, manufacturers, and government agencies to deal with the accidents is presented. The authors demonstrate the complex nature of accidents and the need to investigate all aspects of system development and operation in order to prevent future accidents. The authors also present some lessons learned in terms of system engineering, software engineering, and government regulation of safety-critical systems containing software components.<<ETX>>", "title": "" }, { "docid": "cb3d1448269b29807dc62aa96ff6ad1a", "text": "OBJECTIVES\nInformation overload in electronic medical records can impede providers' ability to identify important clinical data and may contribute to medical error. An understanding of the information requirements of ICU providers will facilitate the development of information systems that prioritize the presentation of high-value data and reduce information overload. 
Our objective was to determine the clinical information needs of ICU physicians, compared to the data available within an electronic medical record.\n\n\nDESIGN\nProspective observational study and retrospective chart review.\n\n\nSETTING\nThree ICUs (surgical, medical, and mixed) at an academic referral center.\n\n\nSUBJECTS\nNewly admitted ICU patients and physicians (residents, fellows, and attending staff).\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe clinical information used by physicians during the initial diagnosis and treatment of admitted patients was captured using a questionnaire. Clinical information concepts were ranked according to the frequency of reported use (primary outcome) and were compared to information availability in the electronic medical record (secondary outcome). Nine hundred twenty-five of 1,277 study questionnaires (408 patients) were completed. Fifty-one clinical information concepts were identified as being useful during ICU admission. A median (interquartile range) of 11 concepts (6-16) was used by physicians per patient admission encounter with four used greater than 50% of the time. Over 25% of the clinical data available in the electronic medical record was never used, and only 33% was used greater than 50% of the time by admitting physicians.\n\n\nCONCLUSIONS\nPhysicians use a limited number of clinical information concepts at the time of patient admission to the ICU. The electronic medical record contains an abundance of unused data. Better electronic data management strategies are needed, including the priority display of frequently used clinical concepts within the electronic medical record, to improve the efficiency of ICU care.", "title": "" }, { "docid": "bfd289e3fa71c49337aab4cf96cd1755", "text": "We present two modifications to the popular k-means clustering algorithm to address the extreme requirements for latency, scalability, and sparsity encountered in user-facing web applications. First, we propose the use of mini-batch optimization for k-means clustering. This reduces computation cost by orders of magnitude compared to the classic batch algorithm while yielding significantly better solutions than online stochastic gradient descent. Second, we achieve sparsity with projected gradient descent, and give a fast ε-accurate projection onto the L1-ball. Source code is freely available: http://code.google.com/p/sofia-ml", "title": "" }, { "docid": "1b37c9f413f1c12d80f5995a40df4684", "text": "Various orodispersible drug formulations have been recently introduced into the market. Oral lyophilisates and orodispersible granules, tablets or films have enriched the therapeutic options. In particular, the paediatric and geriatric population may profit from the advantages like convenient administration, lack of swallowing, ease of use. Until now, only a few novel products made it to the market as the development and production usually is more expensive than for conventional oral drug dosage forms like tablets or capsules. The review reports the recent advances, existing and upcoming products, and the significance of formulating patient-friendly oral dosage forms. The preparation of the medicines can be performed both in pharmaceutical industry and in community pharmacies. Recent advances, e.g. drug printing technologies, may facilitate this process for community or hospital pharmacies. 
Still, regulatory guidelines and pharmacopoeial monographs lack appropriate methods, specifications and global harmonization to foster the development of innovative orodispersible drug dosage forms.", "title": "" }, { "docid": "ea937e1209c270a7b6ab2214e0989fed", "text": "With current projections regarding the growth of Internet sales, online retailing raises many questions about how to market on the Net. While convenience impels consumers to purchase items on the web, quality remains a significant factor in deciding where to shop online. The competition is increasing and personalization is considered to be the competitive advantage that will determine the winners in the market of online shopping in the following years. Recommender systems are a means of personalizing a site and a solution to the customer’s information overload problem. As such, many e-commerce sites already use them to facilitate the buying process. In this paper we present a recommender system for online shopping focusing on the specific characteristics and requirements of electronic retailing. We use a hybrid model supporting dynamic recommendations, which eliminates the problems the underlying techniques have when applied solely. At the end, we conclude with some ideas for further development and research in this area.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "a8fe62e387610682f90018ca1a56ba04", "text": "Aarskog-Scott syndrome (AAS), also known as faciogenital dysplasia (FGD, OMIM # 305400), is an X-linked disorder of recessive inheritance, characterized by short stature and facial, skeletal, and urogenital abnormalities. AAS is caused by mutations in the FGD1 gene (Xp11.22), with over 56 different mutations identified to date. We present the clinical and molecular analysis of four unrelated families of Mexican origin with an AAS phenotype, in whom FGD1 sequencing was performed. This analysis identified two stop mutations not previously reported in the literature: p.Gln664* and p.Glu380*. Phenotypically, every male patient met the clinical criteria of the syndrome, whereas discrepancies were found between phenotypes in female patients. Our results identify two novel mutations in FGD1, broadening the spectrum of reported mutations; and provide further delineation of the phenotypic variability previously described in AAS.", "title": "" }, { "docid": "b5b5d6c5768e40a343b672a33f9c3f0c", "text": "In this paper we describe Icarus, a cognitive architecture for physical agents that integrates ideas from a number of traditions, but that has been especially influenced by results from cognitive psychology. We review Icarus’ commitments to memories and representations, then present its basic processes for performance and learning. We illustrate the architecture’s behavior on a task from in-city driving that requires interaction among its various components. In addition, we discuss Icarus’ consistency with qualitative findings about the nature of human cognition. 
In closing, we consider the framework’s relation to other cognitive architectures that have been proposed in the literature. Introduction and Motivation A cognitive architecture (Newell, 1990) specifies the infrastructure for an intelligent system that remains constant across different domains and knowledge bases. This infrastructure includes a commitment to formalisms for representing knowledge, memories for storing this domain content, and processes that utilize and acquire the knowledge. Research on cognitive architectures has been closely tied to cognitive modeling, in that they often attempt to explain a wide range of human behavior and, at the very least, desire to support the same broad capabilities as human intelligence. In this paper we describe Icarus, a cognitive architecture that builds on previous work in this area but also has some novel features. Our aim is not to match quantitative data, but rather to reproduce qualitative characteristics of human behavior, and our discussion will focus on such issues. The best method for evaluating a cognitive architecture remains an open question, but it is clear that this should happen at the systems level rather than in terms of isolated phenomena. We will not claim that Icarus accounts for any one result better than other candidates, but we will argue that it models facets of the human cognitive architecture, and the ways they fit together, that have been downplayed by other researchers in this area. Copyright c © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. A conventional paper on cognitive architectures would first describe the memories and their contents, then discuss the mechanisms that operate over them. However, Icarus’ processes interact with certain memories but not others, suggesting that we organize the text around these processes and the memories on which they depend. Moreover, some modules build on other components, which suggests a natural progression. Therefore, we first discuss Icarus’ most basic mechanism, conceptual inference, along with the memories it inspects and alters. After this, we present the processes for goal selection and skill execution, which operate over the results of inference. Finally, we consider the architecture’s module for problem solving, which builds on both inference and execution, and its associated learning processes, which operate over the results of problem solving. In each case, we discuss the framework’s connection to qualitative results from cognitive psychology. In addition, we illustrate the ideas with examples from the domain of in-city driving, which has played a central role in our research. Briefly, this involves controlling a vehicle in a simulated urban environment with buildings, road segments, street intersections, and other vehicles. This domain, which Langley and Choi (2006) describe at more length, provides a rich setting to study the interplay among different facets of cognition. Beliefs, Concepts, and Inference In order to carry out actions that achieve its goals, an agent must understand its current situation. Icarus includes a module for conceptual inference that is responsible for this cognitive task which operates by matching conceptual structures against percepts and beliefs. This process depends on the contents and representation of elements in short-term and long-term memory. 
Because Icarus is designed to support intelligent agents that operate in some external environment, it requires information about the state of its surroundings. To this end, it incorporates a perceptual buffer that describes aspects of the environment the agent perceives directly on a given cycle, after which it is updated. Each element or percept in this ephemeral memory corresponds to a particular object and specifies the object’s type, a unique name, and a set of attribute-value pairs that characterize the object on the current time step. Although one could create a stimulus-response agent that operates directly off perceptual information, its behavior would not reflect what we normally mean by the term ‘intelligent’, which requires higher-level cognition. Thus, Icarus also includes a belief memory that contains higher-level inferences about the agent’s situation. Whereas percepts describe attributes of specific objects, beliefs describe relations among objects, such as the relative positions of two buildings. Each element in this belief memory consists of a predicate and a set of symbolic arguments, each of which refers to some object, typically one that appears in the perceptual buffer. Icarus beliefs are instances of generalized concepts that reside in conceptual memory , which contains longterm structures that describe classes of environmental situations. The formalism that expresses these logical concepts is similar to that for Prolog clauses. Like beliefs, Icarus concepts are inherently symbolic and relational structures. Each clause in conceptual memory includes a head that gives the concept’s name and arguments, along with a body that states the conditions under which the clause should match against the contents of short-term memories. The architecture’s most basic activity is conceptual inference. On each cycle, the environmental simulator returns a set of perceived objects, including their types, names, and descriptions in the format described earlier. Icarus deposits this set of elements in the perceptual buffer, where they initiate matching against long-term conceptual definitions. The overall effect is that the system adds to its belief memory all elements that are implied deductively by these percepts and concept definitions. Icarus repeats this process on every cycle, so it constantly updates its beliefs about the environment. The inference module operates in a bottom-up, datadriven manner that starts from descriptions of perceived objects. The architecture matches these percepts against the bodies of primitive concept clauses and adds any supported beliefs (i.e., concept instances) to belief memory. These trigger matching against higher-level concept clauses, which in turn produces additional beliefs. The process continues until Icarus has added to memory all beliefs it can infer in this manner. Although this mechanism reasons over structures similar to Prolog clauses, its operation is closer to the elaboration process in the Soar architecture (Laird et al., 1987). For example, for the in-city driving domain, we provided Icarus with 41 conceptual clauses. On each cycle, the simulator deposits a variety of elements in the perceptual buffer, including percepts for the agent itself (self ), street segments (e.g., segment2), lane lines (e.g., line1), buildings, and other entities. Based on attributes of the object self and one of the segments, the architecture derives the primitive concept instance (in-segment self segment2). 
Similarly, from self and the object line1, it infers the belief (in-lane self line1). These two elements lead Icarus to deduce two nonprimitive beliefs, (centered-in-lane self segment2 line1) and (aligned-with-lane-in-segment self segment2 line1). Finally, from these two instances and another belief, (steering-wheel-straight self), the system draws an even higher-level inference, (driving-well-in-segment self segment2 line1). Other beliefs that encode relations among perceived entities also follow from the inference process. Icarus’ conceptual inference module incorporates a number of key ideas from the psychological literature: • Concepts are distinct cognitive entities that humans use to describe their environment and goals; moreover, they support both categorization and inference; • The great majority of human categories are grounded in perception, making reference to physical characteristics of objects they describe (Barsalou, 1999); • Many human concepts are relational in nature, in that they describe connections or interactions among objects or events (Kotovsky & Gentner, 1996); • Concepts are organized in a hierarchical manner, with complex categories being defined in terms of simpler structures. Icarus reflects each of these claims at the architectural level, which contrasts with most other architectures’ treatment of concepts and categorization. However, we will not claim our treatment is complete. Icarus currently models concepts as Boolean structures that match in an all-or-none manner, whereas human categories have a graded character (Rosch & Mervis, 1975). Also, retrieval occurs in a purely bottomup fashion, whereas human categorization and inference exhibits top-down priming effects. Both constitute important directions for extending the framework. Goals, Skills, and Execution We have seen that Icarus can utilize its conceptual knowledge to infer and update beliefs about its surroundings, but an intelligent agent must also take action in the environment. To this end, the architecture includes additional memories that concern goals the agent wants to achieve, skills the agent can execute to reach them, and intentions about which skills to pursue. These are linked by a performance mechanism that executes stored skills, thus changing the environment and, hopefully, taking the agent closer to its goals. In particular, Icarus incorporates a goal memory that contains the agent’s top-level objectives. A goal is some concept instance that the agent wants to satisfy. T", "title": "" }, { "docid": "b95de5287e9f65eff25d2550d4c71c19", "text": "The syntax of application layer protocols carries valuable information for network intrusion detection. Hence, the majority of modern IDS perform some form of protocol analysis to refine their signatures with application layer context. Protocol analysis, however, has been mainly used for misuse detection, which limits its application for the detection of unknown and novel attacks. In this contribution we address the issue of incorporating application layer context into anomaly-based intrusion detection. We extend a payload-based anomaly detection method by incorporating structural information obtained from a protocol analyzer. The basis for our extension is computation of similarity between attributed tokens derived from a protocol grammar. The enhanced anomaly detection method is evaluated in experiments on detection of web attacks, yielding an improvement of detection accuracy of 49%. 
While byte-level anomaly detection is sufficient for detection of buffer overflow attacks, identification of recent attacks such as SQL and PHP code injection strongly depends on the availability of application layer context.", "title": "" }, { "docid": "8080f4d757ef396959829f489fc44078", "text": "Abstract Recently, natural language processing researches have focused on data or processing techniques for paraphrasing. Unfortunately, however, we have little data for paraphrasing. There are some research reports on collecting synonymous expressions with parallel corpus, though no suitable corpus for collecting a set of paraphrases is yet available. Therefore, we obtain a few variations of expression in paraphrase sets when we tried to apply this method with a parallel corpus. In this paper, we propose a grouping method based on the basic idea of grouping synonymous sentences related to the translation recursively, and decompose incorrect groups using the DMdecomposition algorithm. The incorrect groups include expressions that cannot be paraphrased because some words or expressions have different meanings in different situations. We discuss our method and experimental results with respect to BTEC, which is a multilingual parallel corpus.", "title": "" } ]
scidocsrr
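One of the negative passages in the record above (the abstract on mini-batch k-means clustering) proposes two ideas: mini-batch optimization of the k-means objective and a projected gradient step for sparsity. As an illustration of the first idea only, the Python sketch below implements mini-batch k-means with per-center learning rates. It is not code from that paper or from this dataset, and the values of k, the batch size, and the iteration count are assumptions made purely for the example.

import numpy as np

def minibatch_kmeans(X, k, batch_size=100, iters=200, seed=0):
    # Illustrative sketch only; not code from the cited paper.
    rng = np.random.default_rng(seed)
    # initialize centers from random data points
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    counts = np.zeros(k)  # number of updates each center has received
    for _ in range(iters):
        batch = X[rng.choice(len(X), size=batch_size)]  # sample a mini-batch
        # assign each batch point to its nearest center
        dists = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for x, c in zip(batch, labels):
            counts[c] += 1
            eta = 1.0 / counts[c]  # per-center learning rate
            centers[c] = (1.0 - eta) * centers[c] + eta * x
    return centers

The learning rate for a center shrinks as that center accumulates assignments, which is the ingredient that lets the mini-batch variant approach the batch solution while processing only a small sample of the data per step.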
a0b7035945a4930dbed8efafd7b32fe1
A Hybrid Algorithm for Tracking of GMPP Based on P&O and PSO With Reduced Power Oscillation in String Inverters
[ { "docid": "d06e4f97786f8ecf9694ed270a36c24a", "text": "In this paper, an improved maximum power point (MPP) tracking (MPPT) with better performance based on voltage-oriented control (VOC) is proposed to solve a fast-changing irradiation problem. In VOC, a cascaded control structure with an outer dc link voltage control loop and an inner current control loop is used. The currents are controlled in a synchronous orthogonal d,q frame using a decoupled feedback control. The reference current of proportional-integral (PI) d-axis controller is extracted from the dc-side voltage regulator by applying the energy-balancing control. Furthermore, in order to achieve a unity power factor, the q-axis reference is set to zero. The MPPT controller is applied to the reference of the outer loop control dc voltage photovoltaic (PV). Without PV array power measurement, the proposed MPPT identifies the correct direction of the MPP by processing the d-axis current reflecting the power grid side and the signal error of the PI outer loop designed to only represent the change in power due to the changing atmospheric conditions. The robust tracking capability under rapidly increasing and decreasing irradiance is verified experimentally with a PV array emulator. Simulations and experimental results demonstrate that the proposed method provides effective, fast, and perfect tracking.", "title": "" }, { "docid": "180dd2107c6a39e466b3d343fa70174f", "text": "This paper presents simulation and hardware implementation of incremental conductance (IncCond) maximum power point tracking (MPPT) used in solar array power systems with direct control method. The main difference of the proposed system to existing MPPT systems includes elimination of the proportional-integral control loop and investigation of the effect of simplifying the control circuit. Contributions are made in several aspects of the whole system, including converter design, system simulation, controller programming, and experimental setup. The resultant system is capable of tracking MPPs accurately and rapidly without steady-state oscillation, and also, its dynamic performance is satisfactory. The IncCond algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. MATLAB and Simulink were employed for simulation studies, and Code Composer Studio v3.1 was used to program a TMS320F2812 digital signal processor. The proposed system was developed and tested successfully on a photovoltaic solar panel in the laboratory. Experimental results indicate the feasibility and improved functionality of the system.", "title": "" }, { "docid": "445685897a2e7c9c5b44a713690bd0a8", "text": "Maximum power point tracking (MPPT) is an integral part of a system of energy conversion using photovoltaic (PV) arrays. The power-voltage characteristic of PV arrays operating under partial shading conditions exhibits multiple local maximum power points (LMPPs). In this paper, a new method has been presented to track the global maximum power point (GMPP) of PV. Compared with the past proposed global MPPT techniques, the method proposed in this paper has the advantages of determining whether partial shading is present, calculating the number of peaks on P-V curves, and predicting the locations of GMPP and LMPP. The new method can quickly find GMPP, and avoid much energy loss due to blind scan. 
The experimental results verify that the proposed method guarantees convergence to the global MPP under partial shading conditions.", "title": "" }, { "docid": "e8758a9e2b139708ca472dd60397dc2e", "text": "Multiple photovoltaic (PV) modules feeding a common load is the most common form of power distribution used in solar PV systems. In such systems, providing individual maximum power point tracking (MPPT) schemes for each of the PV modules increases the cost. Furthermore, its v-i characteristic exhibits multiple local maximum power points (MPPs) during partial shading, making it difficult to find the global MPP using conventional single-stage (CSS) tracking. To overcome this difficulty, the authors propose a novel MPPT algorithm by introducing a particle swarm optimization (PSO) technique. The proposed algorithm uses only one pair of sensors to control multiple PV arrays, thereby resulting in lower cost, higher overall efficiency, and simplicity with respect to its implementation. The validity of the proposed algorithm is demonstrated through experimental studies. In addition, a detailed performance comparison with conventional fixed voltage, hill climbing, and Fibonacci search MPPT schemes are presented. Algorithm robustness was verified for several complicated partial shading conditions, and in all cases this method took about 2 s to find the global MPP.", "title": "" } ]
[ { "docid": "87e4bc893f46efdb50416e8386501d80", "text": "the boom in the technology has resulted in emergence of new concepts and challenges. Big data is one of those spoke about terms today. Big data is becoming a synonym for competitive advantages in business rivalries. Despite enormous benefits, big data accompanies some serious challenges and when it comes to analyzing of big data, it requires some serious thought. This study explores Big Data terminology and its analysis concepts using sample from Twitter data with the help of one of the most industry trusted real time processing and fault tolerant tool called Apache Storm. Keywords— Big Data, Apache Storm, real-time processing, open Source.", "title": "" }, { "docid": "15f4e03102c98a74e9e64eec75290656", "text": "With the continuous deepening of study of data mining, the application area of data mining gradually expanded, its influence also spread to the media industry. Data visualization technology has changed the traditional narrative mode, make the news becomes a product that be produced. This paper analyzes the history of computer aided reporting to data news, the main models of data news visualization, and the process of data news production through data mining. The study found that data news focuses on the way of data processing in the entire news workflow, it involves not only classical computer graphics technology, image-processing technology and computer audio technology, but also more data analysis and visual processing technologies based on new media and cloud computing involved in. Research data mining and visualization of data-driven journalism can help journalists use big data to do news work better, deepen people’s cognition of news events, find the logic which cannot be reflected in traditional news, and maximize the connotation of news report.", "title": "" }, { "docid": "3acb0ab9f20e1efece96a2414a9c9c8c", "text": "Artificial markers are successfully adopted to solve several vision tasks, ranging from tracking to calibration. While most designs share the same working principles, many specialized approaches exist to address specific application domains. Some are specially crafted to boost pose recovery accuracy. Others are made robust to occlusion or easy to detect with minimal computational resources. The sheer amount of approaches available in recent literature is indeed a statement to the fact that no silver bullet exists. Furthermore, this is also a hint to the level of scholarly interest that still characterizes this research topic. With this paper we try to add a novel option to the offer, by introducing a general purpose fiducial marker which exhibits many useful properties while being easy to implement and fast to detect. The key ideas underlying our approach are three. The first one is to exploit the projective invariance of conics to jointly find the marker and set a reading frame for it. Moreover, the tag identity is assessed by a redundant cyclic coded sequence implemented using the same circular features used for detection. Finally, the specific design and feature organization of the marker are well suited for several practical tasks, ranging from camera calibration to information payload delivery.", "title": "" }, { "docid": "504f0482ec674844a555869e12dd756b", "text": "The paper is written in the lecture format and dedicated to one of the main basal approaches, the orbitozygomatic approach, that has been widely used by neurosurgeons for several decades. 
The authors describe the historical background of the approach development and the surgical technique features and also analyze the published data about application of the orbitozygomatic approach in surgery for skull base tumors and cerebral aneurysms.", "title": "" }, { "docid": "ff91a931e4b546c791f8829eee25d2c7", "text": "Methods for reasoning under uncertainty are a key building block of accurate and reliable machine learning systems. Bayesian methods provide a general framework to quantify uncertainty. However, because of model misspecification and the use of approximate inference, Bayesian uncertainty estimates are often inaccurate — for example, a 90% credible interval may not contain the true outcome 90% of the time. Here, we propose a simple procedure for calibrating any regression algorithm; when applied to Bayesian and probabilistic models, it is guaranteed to produce calibrated uncertainty estimates given enough data. Our procedure is inspired by Platt scaling and extends previous work on classification. We evaluate this approach on Bayesian linear regression, feedforward, and recurrent neural networks, and find that it consistently outputs well-calibrated credible intervals while improving performance on time series forecasting and model-based reinforcement learning tasks.", "title": "" }, { "docid": "019138302eadaf18b2148db11720bcc5", "text": "Face recognition with still face images has been widely studied, while the research on video-based face recognition is inadequate relatively, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Videoto-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively, taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have benchmarked for all the three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX1 Face DB. Specifically, we make three contributions. First, we collect and release a largescale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more efforts, and our COX Face DB is a good benchmark database for evaluation.", "title": "" }, { "docid": "bd8ae67f959a7b840eff7e8c400a41e0", "text": "Enabling a humanoid robot to drive a car, requires the development of a set of basic primitive actions. These include: walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal and steering), and moving with the whole-body, to ingress/egress the car. In this paper, we present a sensorbased reactive framework for realizing the central part of the complete task, consisting in driving the car along unknown roads. 
The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car, ask for assistive driving, or give the robot full control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements, to estimate the car linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal reference are sent to the robot control to achieve the driving task with the humanoid. We present results from a driving experience with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.", "title": "" }, { "docid": "90e218a8ae79dc1d53e53d4eb63839b8", "text": "Doubly fed induction generator (DFIG) technology is the dominant technology in the growing global market for wind power generation, due to the combination of variable-speed operation and a cost-effective partially rated power converter. However, the DFIG is sensitive to dips in supply voltage and without specific protection to “ride-through” grid faults, a DFIG risks damage to its power converter due to overcurrent and/or overvoltage. Conventional converter protection via a sustained period of rotor-crowbar closed circuit leads to poor power output and sustained suppression of the stator voltages. A new minimum-threshold rotor-crowbar method is presented in this paper, improving fault response by reducing crowbar application periods to 11-16 ms, successfully diverting transient overcurrents, and restoring good power control within 45 ms of both fault initiation and clearance, thus enabling the DFIG to meet grid-code fault-ride-through requirements. The new method is experimentally verified and evaluated using a 7.5-kW test facility.", "title": "" }, { "docid": "e9cc899155bd5f88ae1a3d5b88de52af", "text": "This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.", "title": "" }, { "docid": "7e74cc21787c1e21fd64a38f1376c6a9", "text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. 
Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.", "title": "" }, { "docid": "4fa7ee44cdc4b0cd439723e9600131bd", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "103d6713dd613bfe5a768c60d349bb4a", "text": "Mobile phones and tablets can be considered as the first incarnation of the post-PC era. Their explosive adoption rate has been driven by a number of factors, with the most signifcant influence being applications (apps) and app markets. Individuals and organizations are able to develop and publish apps, and the most popular form of monetization is mobile advertising.\n The mobile advertisement (ad) ecosystem has been the target of prior research, but these works typically focused on a small set of apps or are from a user privacy perspective. In this work we make use of a unique, anonymized data set corresponding to one day of traffic for a major European mobile carrier with more than 3 million subscribers. We further take a principled approach to characterize mobile ad traffic along a number of dimensions, such as overall traffic, frequency, as well as possible implications in terms of energy on a mobile device.\n Our analysis demonstrates a number of inefficiencies in today's ad delivery. We discuss the benefits of well-known techniques, such as pre-fetching and caching, to limit the energy and network signalling overhead caused by current systems. A prototype implementation on Android devices demonstrates an improvement of 50 % in terms of energy consumption for offline ad-sponsored apps while limiting the amount of ad related traffic.", "title": "" }, { "docid": "a2e39488b22164746b1b503450e2a8bc", "text": "Machine learning of ECG is a core component in any of the ECG-based healthcare informatics system. 
Since the ECG is a nonlinear signal, the subtle changes in its amplitude and duration are not well manifested in time and frequency domains. Therefore, in this chapter, we introduce a machine-learning approach to screen arrhythmia from normal sinus rhythm from the ECG. The methodology consists of R-point detection using the Pan-Tompkins algorithm, discrete wavelet transform (DWT) decomposition, sub-band principal component analysis (PCA), statistical validation of features, and subsequent pattern classification. The k-fold cross validation is used in order to reduce the bias in choosing training and testing sets for classification. The average accuracy of classification is used as a benchmark for comparison. Different classifiers used are Gaussian mixture model (GMM), error back propagation neural network (EBPNN), and support vector machine (SVM). The DWT basis functions used are Daubechies-4, Daubechies-6, Daubechies-8, Symlet-2, Symlet-4, Symlet-6, Symlet-8, Coiflet-2, and Coiflet-5. An attempt is made to exploit the energy compaction in the wavelet sub-bands to yield higher classification accuracy. Results indicate that the Symlet2 wavelet basis function provides the highest accuracy in classification. Among the classifiers, SVM yields the highest classification accuracy, whereas EBPNN yields a higher accuracy than GMM. The use of other time frequency representations using different time frequency kernels as a future direction is also observed. The developed machine-learning approach can be used in a web-based telemedicine system, which can be used in remote monitoring of patients in many healthcare informatics systems. R. J. Martis (&) C. Chakraborty School of Medical Science and Technology, IIT, Kharagpur, India e-mail: [email protected] A. K. Ray Department of Electronics and Electrical Communication Engineering, IIT, Kharagpur, India S. Dua et al. (eds.), Machine Learning in Healthcare Informatics, Intelligent Systems Reference Library 56, DOI: 10.1007/978-3-642-40017-9_2, Springer-Verlag Berlin Heidelberg 2014 25", "title": "" }, { "docid": "d80fbd6e24d93991c8a64a8ecfb37d92", "text": "THE DEVELOPMENT OF PHYSICAL FITNESS IN YOUNG ATHLETES IS A RAPIDLY EXPANDING FIELD OF INTEREST FOR STRENGTH AND CONDITIONING COACHES, PHYSICAL EDUCATORS, SPORTS COACHES, AND PARENTS. PREVIOUS LONG-TERM ATHLETE DEVELOPMENT MODELS HAVE CLASSIFIED YOUTH-BASED TRAINING METHODOLOGIES IN RELATION TO CHRONOLOGIC AGE GROUPS, AN APPROACH THAT HAS DISTINCT LIMITATIONS. MORE RECENT MODELS HAVE ATTEMPTED TO BRIDGE MATURATION AND PERIODS OF TRAINABILITY FOR A LIMITED NUMBER OF FITNESS QUALITIES, ALTHOUGH SUCH MODELS APPEAR TO BE BASED ON SUBJECTIVE ANALYSIS. THE YOUTH PHYSICAL DEVELOPMENT MODEL PROVIDES A LOGICAL AND EVIDENCE-BASED APPROACH TO THE SYSTEMATIC DEVELOPMENT OF PHYSICAL PERFORMANCE IN YOUNG ATHLETES.", "title": "" }, { "docid": "627f3c07a8ce5f0935ced97f685f44f4", "text": "Click-through rate (CTR) prediction plays a central role in search advertising. One needs CTR estimates unbiased by positional effect in order for ad ranking, allocation, and pricing to be based upon ad relevance or quality in terms of click propensity. However, the observed click-through data has been confounded by positional bias, that is, users tend to click more on ads shown in higher positions than lower ones, regardless of the ad relevance. We describe a probabilistic factor model as a general principled approach to studying these exogenous and often overwhelming phenomena. 
The model is simple and linear in nature, while empirically justified by the advertising domain. Our experimental results with artificial and real-world sponsored search data show the soundness of the underlying model assumption, which in turn yields superior prediction accuracy.", "title": "" }, { "docid": "711675a8e053e963ae59290db94cb75f", "text": "Heterogeneous multiprocessors are increasingly important in the multi-core era due to their potential for high performance and energy efficiency. In order for software to fully realize this potential, the step that maps computations to processing elements must be as automated as possible. However, the state-of-the-art approach is to rely on the programmer to specify this mapping manually and statically. This approach is not only labor intensive but also not adaptable to changes in runtime environments like problem sizes and hardware/software configurations. In this study, we propose adaptive mapping, a fully automatic technique to map computations to processing elements on a CPU+GPU machine. We have implemented it in our experimental heterogeneous programming system called Qilin. Our results show that, by judiciously distributing works over the CPU and GPU, automatic adaptive mapping achieves a 25% reduction in execution time and a 20% reduction in energy consumption than static mappings on average for a set of important computation benchmarks. We also demonstrate that our technique is able to adapt to changes in the input problem size and system configuration.", "title": "" }, { "docid": "f97086d856ebb2f1c5e4167f725b5890", "text": "In this paper, an ac-linked hybrid electrical energy system comprising of photo voltaic (PV) and fuel cell (FC) with electrolyzer for standalone applications is proposed. PV is the primary power source of the system, and an FC-electrolyzer combination is used as a backup and as long-term storage system. A Fuzzy Logic controller is developed for the maximum power point tracking for the PV system. A simple power management strategy is designed for the proposed system to manage power flows among the different energy sources. A simulation model for the hybrid energy has been developed using MATLAB/Simulink.", "title": "" }, { "docid": "efb78474b403972f7bffa3e29ded5804", "text": "The idea that memory is composed of distinct systems has a long history but became a topic of experimental inquiry only after the middle of the 20th century. Beginning about 1980, evidence from normal subjects, amnesic patients, and experimental animals converged on the view that a fundamental distinction could be drawn between a kind of memory that is accessible to conscious recollection and another kind that is not. Subsequent work shifted thinking beyond dichotomies to a view, grounded in biology, that memory is composed of multiple separate systems supported, for example, by the hippocampus and related structures, the amygdala, the neostriatum, and the cerebellum. This article traces the development of these ideas and provides a current perspective on how these brain systems operate to support behavior.", "title": "" }, { "docid": "6cf711826e5718507725ff6f887c7dbc", "text": "Electronic Support Measures (ESM) system is an important function of electronic warfare which provides the real time projection of radar activities. Such systems may encounter with very high density pulse sequences and it is the main task of an ESM system to deinterleave these mixed pulse trains with high accuracy and minimum computation time. 
These systems heavily depend on time of arrival analysis and need efficient clustering algorithms to assist deinterleaving process in modern evolving environments. On the other hand, self organizing neural networks stand very promising for this type of radar pulse clustering. In this study, performances of self organizing neural networks that meet such clustering criteria are evaluated in detail and the results are presented.", "title": "" }, { "docid": "ea7dd3adea885cb829effa9216f12a3b", "text": "In this paper, we present a new nonlinear filter for high-dimensional state estimation, which we have named the cubature Kalman filter (CKF). The heart of the CKF is a spherical-radial cubature rule, which makes it possible to numerically compute multivariate moment integrals encountered in the nonlinear Bayesian filter. Specifically, we derive a third-degree spherical-radial cubature rule that provides a set of cubature points scaling linearly with the state-vector dimension. The CKF may therefore provide a systematic solution for high-dimensional nonlinear filtering problems. The paper also includes the derivation of a square-root version of the CKF for improved numerical stability. The CKF is tested experimentally in two nonlinear state estimation problems. In the first problem, the proposed cubature rule is used to compute the second-order statistics of a nonlinearly transformed Gaussian random variable. The second problem addresses the use of the CKF for tracking a maneuvering aircraft. The results of both experiments demonstrate the improved performance of the CKF over conventional nonlinear filters.", "title": "" } ]
scidocsrr
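The query and positive passages of the record above concern hybrid maximum power point tracking that combines perturb and observe (P&O) with particle swarm optimization (PSO) to reach the global MPP under partial shading. The Python sketch below is only a generic illustration of how such a combination is often structured: a PSO sweep over duty cycles to locate the neighborhood of the global peak, followed by P&O for fine local tracking. It is not the algorithm of the cited paper; measure_power() stands in for a real converter and sensor interface, and the swarm size, duty limits, and gains are assumed values chosen for the example.

import random

def perturb_and_observe(duty, power, prev_power, step=0.01):
    # climb the local power hill: keep the same direction while power increases
    direction = 1.0 if power >= prev_power else -1.0
    return min(max(duty + direction * step, 0.0), 1.0)

def pso_duty_search(measure_power, n_particles=5, iters=15, w=0.4, c1=1.5, c2=1.5):
    # coarse global search over duty cycles, used when partial shading is suspected
    pos = [random.uniform(0.1, 0.9) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_val = [measure_power(d) for d in pos]
    gbest = pbest[pbest_val.index(max(pbest_val))]
    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], 0.0), 1.0)
            p = measure_power(pos[i])
            if p > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], p
        gbest = pbest[pbest_val.index(max(pbest_val))]
    return gbest  # duty cycle near the global MPP; hand control back to P&O

In a controller loop, P&O would run every sampling period, and the PSO routine would only be retriggered when a sudden drop in delivered power suggests that the operating point has moved onto a different local peak, which is also how the positive passages motivate combining a global scan with conventional hill climbing.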
e119824ae262a13ec75b312ae1c47fb1
Türkçe için Kelime Temsillerinin Öğrenimi Learning Word Representations for Turkish
[ { "docid": "8da50eee8aaebe575eeaceae49c9fb37", "text": "In this paper, we propose a set of language resources for building Turkish language processing applications. Specifically, we present a finite-state implementation of a morphological parser, an averaged perceptron-based morphological disambiguator, and compilation of a web corpus. Turkish is an agglutinative language with a highly productive inflectional and derivational morphology. We present an implementation of a morphological parser based on two-level morphology. This parser is one of the most complete parsers for Turkish and it runs independent of any other external system such as PCKIMMO in contrast to existing parsers. Due to complex phonology and morphology of Turkish, parsing introduces some ambiguous parses. We developed a morphological disambiguator with accuracy of about 98% using averaged perceptron algorithm. We also present our efforts to build a Turkish web corpus of about 423 million words.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "02a7675468d8e02aaf43ddcd2c36e3fd", "text": "Speech synthesis is the artificial production of human voice. A computer system used for this task is called a speech synthesizer. Anyone can use this synthesizer in software or hardware products. The main aim of text-to-speech (TTS) system is to convert normal language text into speech. Synthesized speech can be produced by concatenating pieces of recorded speech that are stored in a database. TTS Systems differ in size of the stored speech units. A system which stores phones or diphones provides the largest output range, but this may give low clarity. For specific application domains, the storage of entire words or sentences allows for highquality output. Alternatively, a synthesizer can constitute a model of the vocal tract and other human voice characteristics to create a fully synthetic voice output. The quality of a speech synthesizer is decided by its naturalness or simillarity to the human voice and by its ability to be understood clearly. This paper summarizes the published literatures on Text to Speech (TTS), with discussing about the efforts taken in each paper. This system will be more helpful for an illiterate and visually impaired people to hear and understand the text.", "title": "" }, { "docid": "ff933c57886cfb4ab74b9cbd9e4f3a58", "text": "Many systems, applications, and features that support cooperative work share two characteristics: A significant investment has been made in their development, and their successes have consistently fallen far short of expectations. Examination of several application areas reveals a common dynamic: 1) A factor contributing to the application’s failure is the disparity between those who will benefit from an application and those who must do additional work to support it. 2) A factor contributing to the decision-making failure that leads to ill-fated development efforts is the unique lack of management intuition for CSCW applications. 3) A factor contributing to the failure to learn from experience is the extreme difficulty of evaluating these applications. These three problem areas escape adequate notice due to two natural but ultimately misleading analogies: the analogy between multi-user application programs and multi-user computer systems, and the analogy between multi-user applications and single-user applications. These analogies influence the way we think about cooperative work applications and designers and decision-makers fail to recognize their limits. Several CSCW application areas are examined in some detail. Introduction. An illustrative example: automatic meeting", "title": "" }, { "docid": "63934cfd6042d8bb2227f4e83b005cc2", "text": "To support effective exploration, it is often stated that interactive visualizations should provide rapid response times. However, the effects of interactive latency on the process and outcomes of exploratory visual analysis have not been systematically studied. We present an experiment measuring user behavior and knowledge discovery with interactive visualizations under varying latency conditions. We observe that an additional delay of 500ms incurs significant costs, decreasing user activity and data set coverage. Analyzing verbal data from think-aloud protocols, we find that increased latency reduces the rate at which users make observations, draw generalizations and generate hypotheses. Moreover, we note interaction effects in which initial exposure to higher latencies leads to subsequently reduced performance in a low-latency setting. 
Overall, increased latency causes users to shift exploration strategy, in turn affecting performance. We discuss how these results can inform the design of interactive analysis tools.", "title": "" }, { "docid": "56f4e1cfafbd18810fc9b66832a49f1f", "text": "Increasing the efficiency of production and manufacturing processes is a key goal of initiatives like Industry 4.0. Within the context of the European research project ARROWHEAD, we enable and secure smart maintenance services. An overall goal is to proactively predict and optimize the Maintenance, Repair and Operations (MRO) processes carried out by a device maintainer, for industrial devices deployed at the customer. Therefore it is necessary to centrally acquire maintenance relevant equipment status data from remotely located devices over the Internet. Consequently, security and privacy issues arise from connecting devices to the Internet, and sending data from customer sites to the maintainer's back-end. In this paper we consider an exemplary automotive use case with an AVL Particle Counter (APC) as device. The APC transmits its status information by means of a fingerprint via the publish-subscribe protocol Message Queue Telemetry Transport (MQTT) to an MQTT Information Broker in the remotely located AVL back-end. In a threat analysis we focus on the MQTT routing information asset and identify two elementary security goals in regard to client authentication. Consequently we propose a system architecture incorporating a hardware security controller that processes the Transport Layer Security (TLS) client authentication step. We validate the feasibility of the concept by means of a prototype implementation. Experimental results indicate that no significant performance impact is imposed by the hardware security element. The security evaluation confirms the advanced security of our system, which we believe lays the foundation for security and privacy in future smart service infrastructures.", "title": "" }, { "docid": "c8984cf950244f0d300c6446bcb07826", "text": "The grounded theory approach to doing qualitative research in nursing has become very popular in recent years. I confess to never really having understood Glaser and Strauss' original book: The Discovery of Grounded Theory. Since they wrote it, they have fallen out over what grounded theory might be and both produced their own versions of it. I welcomed, then, Kathy Charmaz's excellent and practical guide.", "title": "" }, { "docid": "7cec5184716f387aae5232820c7b7995", "text": "This paper details the system NILC USP that participated in the Semeval 2014: Aspect Based Sentiment Analysis task. This system uses a Conditional Random Field (CRF) algorithm for extracting the aspects mentioned in the text. Our work added semantic labels into a basic feature set for measuring the efficiency of those for aspect extraction. We used the semantic roles and the highest verb frame as features for the machine learning. Overall, our results demonstrated that the system could not improve with the use of this semantic information, but its precision was increased.", "title": "" }, { "docid": "e28ba2ea209537cf9867428e3cf7fdd7", "text": "People take their mobile phones everywhere they go. In Saudi Arabia, the mobile penetration is very high and students use their phones for different reasons in the classroom. The use of mobile devices in classroom triggers an alert of the impact it might have on students’ learning. 
This study investigates the association between the use of mobile phones during classroom and the learners’ performance and satisfaction. Results showed that students get distracted, and that this diversion of their attention is reflected in their academic success. However, this is not applicable for all. Some students received high scores even though they declared using mobile phones in classroom, which triggers a request for a deeper study.", "title": "" }, { "docid": "c6ff28e06120ae3114b61d74fdcc0603", "text": "This paper deals with an integrated starter-alternator (ISA) drive which exhibits a high torque for the engine start, a wide constant-power speed range for the engine speedup, and a high-speed generator mode operation for electric energy generation. Peculiarities of this ISA drive are thus its flux-weakening capability and the possibility to large torque overload at low speed. The focus on the design, analysis, and test of an interior permanent-magnet motor and drive for a prototype of ISA is given in this paper. In details, this paper reports on the design of stator and rotor geometries, the results of finite-element computations, the description of control system, and the experimental results of prototype tests.", "title": "" }, { "docid": "62c93d1c3033208a609e4fc14a42a493", "text": "Evolutionary-related hypotheses about gender differences in mate selection preferences were derived from Triver's parental investment model, which contends that women are more likely than men to seek a mate who possesses nonphysical characteristics that maximize the survival or reproductive prospects of their offspring, and were examined in a meta-analysis of mate selection research (questionnaire studies, analyses of personal advertisements). As predicted, women accorded more weight than men to socioeconomic status, ambitiousness, character, and intelligence, and the largest gender differences were observed for cues to resource acquisition (status, ambitiousness). Also as predicted, gender differences were not found in preferences for characteristics unrelated to progeny survival (sense of humor, \"personality\"). Where valid comparisons could be made, the findings were generally invariant across generations, cultures, and research paradigms.", "title": "" }, { "docid": "01895415b6785dda28ac5fa133c97909", "text": "Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied with ringing effects. Inspired by the success of deep convolutional networks (DCN) on superresolution [6], we formulate a compact and efficient network for seamless attenuation of different compression artifacts. To meet the speed requirement of real-world applications, we further accelerate the proposed baseline model by layer decomposition and joint use of large-stride convolutional and deconvolutional layers. This also leads to a more general CNN framework that has a close relationship with the conventional Multi-Layer Perceptron (MLP). Finally, the modified network achieves a speed up of 7.5× with almost no performance loss compared to the baseline model. We also demonstrate that a deeper model can be effectively trained with features learned in a shallow network. 
Following a similar “easy to hard” idea, we systematically investigate three practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows superior performance than the state-of-the-art methods both on benchmark datasets and a real-world use case.", "title": "" }, { "docid": "d2c6e2e807376b63828da4037028f891", "text": "Cortical circuits in the brain are refined by experience during critical periods early in postnatal life. Critical periods are regulated by the balance of excitatory and inhibitory (E/I) neurotransmission in the brain during development. There is now increasing evidence of E/I imbalance in autism, a complex genetic neurodevelopmental disorder diagnosed by abnormal socialization, impaired communication, and repetitive behaviors or restricted interests. The underlying cause is still largely unknown and there is no fully effective treatment or cure. We propose that alteration of the expression and/or timing of critical period circuit refinement in primary sensory brain areas may significantly contribute to autistic phenotypes, including cognitive and behavioral impairments. Dissection of the cellular and molecular mechanisms governing well-established critical periods represents a powerful tool to identify new potential therapeutic targets to restore normal plasticity and function in affected neuronal circuits.", "title": "" }, { "docid": "5858927c35f9e050e65b101961945727", "text": "Percutaneous endoscopic gastrostomy (PEG) tube placement is a well-established procedure in adults as well as in pediatric patients who cannot be orally fed. However, potential serious complications may occur. The buried bumper syndrome is a well-recognized long-term complication of PEG. Overgrowth of gastric mucosa over the inner bumper of the tube will cause mechanical failure of formula delivery, rendering the tube useless. However, published experience in children with buried bumper syndrome is very scarce. In the authors' clinic, 76 PEG tubes were placed from 2001 to 2008, and buried bumper syndrome occurred in 1 patient. The authors report on their experience with buried bumper syndrome, an adapted safe endoscopic removal technique, as well as recommendations for prevention of buried bumper syndrome.", "title": "" }, { "docid": "9fa8133dcb3baef047ee887fea1ed5a3", "text": "In this paper, we present an effective hierarchical shot classification scheme for broadcast soccer video. We first partition a video into replay and non-replay shots with replay logo detection. Then, non-replay shots are further classified into Long, Medium, Close-up or Out-field types with color and texture features based on a decision tree. We tested the method on real broadcast FIFA soccer videos, and the experimental results demonstrate its effectiveness..", "title": "" }, { "docid": "f7a1624a4827e95b961eb164022aa2a2", "text": "Mitotic chromosome condensation, sister chromatid cohesion, and higher order folding of interphase chromatin are mediated by condensin and cohesin, eukaryotic members of the SMC (structural maintenance of chromosomes)-kleisin protein family. Other members facilitate chromosome segregation in bacteria [1]. A hallmark of these complexes is the binding of the two ends of a kleisin subunit to the apices of V-shaped Smc dimers, creating a tripartite ring capable of entrapping DNA (Figure 1A). In addition to creating rings, kleisins recruit regulatory subunits. 
One family of regulators, namely Kite dimers (Kleisin interacting winged-helix tandem elements), interact with Smc-kleisin rings from bacteria, archaea and the eukaryotic Smc5-6 complex, but not with either condensin or cohesin [2]. These instead possess proteins containing HEAT (Huntingtin/EF3/PP2A/Tor1) repeat domains whose origin and distribution have not yet been characterized. Using a combination of profile Hidden Markov Model (HMM)-based homology searches, network analysis and structural alignments, we identify a common origin for these regulators, for which we propose the name Hawks, i.e. HEAT proteins associated with kleisins.", "title": "" }, { "docid": "7749fd32da3e853f9e9cfea74ddda5f8", "text": "This study describes the roles of architects in scaling agile frameworks with the help of a structured literature review. We aim to provide a primary analysis of 20 identified scaling agile frameworks. Subsequently, we thoroughly describe three popular scaling agile frameworks: Scaled Agile Framework, Large Scale Scrum, and Disciplined Agile 2.0. After specifying the main concepts of scaling agile frameworks, we characterize roles of enterprise, software, solution, and information architects, as identified in four scaling agile frameworks. Finally, we provide a discussion of generalizable findings on the role of architects in scaling agile frameworks.", "title": "" }, { "docid": "4c85c55ba02b2823aad33bf78d224b61", "text": "We developed an affordance-based methodology to support environmentally conscious behavior (ECB) that conserves resources such as materials, energy, etc. While studying concepts that aim to support ECB, we noted that characteristics of products that enable ECB tend to be more accurately described as affordances than functions. Therefore, we became interested in affordances, and specifically how affordances can be used to design products that support ECB. Affordances have been described as possible ways of interacting with products, or context-dependent relations between artifacts and users. Other researchers have explored affordances in lieu of functions as a basis for design, and developed detailed deductive methods of discovering affordances in products. We abstracted desired affordances from patterns and principles we observed to support ECB, and generated concepts based on those affordances. As a possible shortcut to identifying and implementing relevant affordances, we introduced the affordance-transfer method. This method involves altering a product’s affordances to add desired features from related products. Promising sources of affordances include lead-user and other products that support resource conservation. We performed initial validation of the affordance-transfer method and observed that it can improve the usefulness of the concepts that novice designers generate to support ECB. [DOI: 10.1115/1.4025288]", "title": "" }, { "docid": "00b8207e783aed442fc56f7b350307f6", "text": "A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. 
Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.", "title": "" }, { "docid": "1da7f4f8276d591428a9764432f146fd", "text": "In this paper, stochastic synchronization is studied for complex networks with delayed coupling and mixed impulses. Mixed impulses are composed of desynchronizing and synchronizing impulses. The delayed coupling term involves transmission delay and self-feedback delay. By using the average impulsive interval approach and the comparison principle, several conditions are derived to guarantee that exponential synchronization of complex networks is achieved in the mean square. The derived conditions are closely related to the impulsive strengths, the frequency of impulse occurrence, and the coupling structure of complex networks. Numerical simulations are presented to further demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "342d074c84d55b60a617d31026fe23e1", "text": "Fractured bones heal by a cascade of cellular events in which mesenchymal cells respond to unknown regulators by proliferating, differentiating, and synthesizing extracellular matrix. Current concepts suggest that growth factors may regulate different steps in this cascade (10). Recent studies suggest regulatory roles for PDGF, aFGF, bFGF, and TGF-beta in the initiation and the development of the fracture callus. Fracture healing begins immediately following injury, when growth factors, including TGF-beta 1 and PDGF, are released into the fracture hematoma by platelets and inflammatory cells. TGF-beta 1 and FGF are synthesized by osteoblasts and chondrocytes throughout the healing process. TGF-beta 1 and PDGF appear to have an influence on the initiation of fracture repair and the formation of cartilage and intramembranous bone in the initiation of callus formation. Acidic FGF is synthesized by chondrocytes, chondrocyte precursors, and macrophages. It appears to stimulate the proliferation of immature chondrocytes or precursors, and indirectly regulates chondrocyte maturation and the expression of the cartilage matrix. Presumably, growth factors in the callus at later times regulate additional steps in repair of the bone after fracture. These studies suggest that growth factors are central regulators of cellular proliferation, differentiation, and extracellular matrix synthesis during fracture repair. Abnormal growth factor expression has been implicated as causing impaired or abnormal healing in other tissues, suggesting that altered growth factor expression also may be responsible for abnormal or delayed fracture repair. As a complete understanding of fracture-healing regulation evolves, we expect new insights into the etiology of abnormal or delayed fracture healing, and possibly new therapies for these difficult clinical problems.", "title": "" } ]
scidocsrr
365d82dfd69ea3ae2ce987a94988bc16
Deep State Space Models for Time Series Forecasting
[ { "docid": "fa2c69161ab7955a4cab6d08acc806fe", "text": "Accurate time-series forecasting during high variance segments (e.g., holidays), is critical for anomaly detection, optimal resource allocation, budget planning and other related tasks. At Uber accurate prediction for completed trips during special events can lead to a more efficient driver allocation resulting in a decreased wait time for the riders. State of the art methods for handling this task often rely on a combination of univariate forecasting models (e.g., Holt-Winters) and machine learning methods (e.g., random forest). Such a system, however, is hard to tune, scale and add exogenous variables. Motivated by the recent resurgence of Long Short Term Memory networks we propose a novel endto-end recurrent neural network architecture that outperforms the current state of the art event forecasting methods on Uber data and generalizes well to a public M3 dataset used for time-series forecasting competitions.", "title": "" }, { "docid": "1a65a6e22d57bb9cd15ba01943eeaa25", "text": "+ optimal local factor – expensive for general obs. + exploit conj. graph structure + arbitrary inference queries + natural gradients – suboptimal local factor + fast for general obs. – does all local inference – limited inference queries – no natural gradients ± optimal given conj. evidence + fast for general obs. + exploit conj. graph structure + arbitrary inference queries + some natural gradients", "title": "" } ]
[ { "docid": "f437f971d7d553b69d438a469fd26d41", "text": "This paper introduces a single-chip, 200 200element sensor array implemented in a standard two-metal digital CMOS technology. The sensor is able to grab the fingerprint pattern without any use of optical and mechanical adaptors. Using this integrated sensor, the fingerprint is captured at a rate of 10 F/s by pressing the finger skin onto the chip surface. The fingerprint pattern is sampled by capacitive sensors that detect the electric field variation induced by the skin surface. Several design issues regarding the capacitive sensing problem are reported and the feedback capacitive sensing scheme (FCS) is introduced. More specifically, the problem of the charge injection in MOS switches has been revisited for charge amplifier design.", "title": "" }, { "docid": "8cfd1f3c111fb0b210ba141e81b8db41", "text": "Childhood obesity is a major public health concern. According to the World Health Organization, more than 22 million children worldwide are classified as overweight (WHO, 2009). In Australia, the most recent data available show that 4.5% of boys and 5.5% of girls ages 2–18 years old are obese (Magarey & Daniels, 2001). Cutoffs for body mass index, weight in kilograms divided by height in metres squared, ‡ 30 kg/m2 for obesity are universally accepted for adults. International cutoffs for obesity designed for children (Cole, Bellizzi, Flegal & Dietz, 2000) use age, gender and body mass index to define obesity (e.g. the cutoff for a 2-year-old boy is 20.09 kg/m2, whereas the cutoff for a 171⁄2-year-old girl is 29.84 kg/m2). Current research on the role of occupational therapy in addressing childhood obesity has focussed on weight loss, weight gain prevention, or increases in physical activity by restructuring environments and routines (Ziviani, Desha, Poulsen & Whiteford, 2010). However, weight loss is not immediate. Examining how to maintain children’s safety during weight loss is important. Obesity affects children’s ability to maintain safety (Bazelmans et al., 2004) while performing their occupations. Impairments in motor adaptation, altering actions to cope with continuously changing environments, result in increased safety risks for children who are obese. They also influence occupational performance, ‘the ability to perceive, desire, recall, plan and carry out roles, routines, tasks, and subtasks for the purpose of self-maintenance, productivity, leisure and rest in response to demands of", "title": "" }, { "docid": "f36617abd8f9429978d165a040640540", "text": "The ability to automatically recognize a wide range of sound events in real-world conditions is an important part of applications such as acoustic surveillance and machine hearing. Our approach takes inspiration from both audio and image processing fields, and is based on transforming the sound into a two-dimensional representation, then extracting an image feature for classification. This provided the motivation for our previous work on the spectrogram image feature (SIF). In this paper, we propose a novel method to improve the sound event classification performance in severe mismatched noise conditions. This is based on the subband power distribution (SPD) image - a novel two-dimensional representation that characterizes the spectral power distribution over time in each frequency subband. Here, the high-powered reliable elements of the spectrogram are transformed to a localized region of the SPD, hence can be easily separated from the noise. 
We then extract an image feature from the SPD, using the same approach as for the SIF, and develop a novel missing feature classification approach based on a nearest neighbor classifier (kNN). We carry out comprehensive experiments on a database of 50 environmental sound classes over a range of challenging noise conditions. The results demonstrate that the SPD-IF is both discriminative over the broad range of sound classes, and robust in severe non-stationary noise.", "title": "" }, { "docid": "ede1cfd85dbb2aaa6451128c222d99a2", "text": "Crowdsourcing is a crowd-based outsourcing, where a requester (task owner) can outsource tasks to workers (public crowd). Recently, mobile crowdsourcing, which can leverage workers' data from smartphones for data aggregation and analysis, has attracted much attention. However, when the data volume is getting large, it becomes a difficult problem for a requester to aggregate and analyze the incoming data, especially when the requester is an ordinary smartphone user or a start-up company with limited storage and computation resources. Besides, workers are concerned about their identity and data privacy. To tackle these issues, we introduce a three-party architecture for mobile crowdsourcing, where the cloud is implemented between workers and requesters to ease the storage and computation burden of the resource-limited requester. Identity privacy and data privacy are also achieved. With our scheme, a requester is able to verify the correctness of computation results from the cloud. We also provide several aggregated statistics in our work, together with efficient data update methods. Extensive simulation shows both the feasibility and efficiency of our proposed solution.", "title": "" }, { "docid": "3827b5f919c21dc7e228eaf78ffcfb46", "text": "In this paper we described the development of both the hardware and the algorithms for a novel laser vision system suitable for measuring distances from both solid and mesh-like targets in underwater environments. The system was developed as a part of the AQUABOT project that developed an underwater robotic system for autonomous inspection of offshore aquaculture installation. The system takes into account the hemispherical optics typical in underwater vehicle designs and implements an array of line-lasers to ensure that mesh-like targets provide reflections in a consistent manner. The developed algorithms for the laser vision system are capable of providing either raw pointcloud data sets from each laser or with additional processing high level information like distance and relative orientation of the target with respect to the ROV can be recovered. An automatic calibration procedure along with the accompanying hardware that was developed, is described in this paper, to reduce the calibration overhead required by regular maintenance operations as is typical for underwater vehicles operating in sea-water. A set of experimental results in controlled laboratory environment as well as at offshore aquaculture installations demonstrate the performance of the system.", "title": "" }, { "docid": "4aa6103dca92cf8663139baf93f78a80", "text": "We propose a unified approach for summarization based on the analysis of video structures and video highlights. Our approach emphasizes both the content balance and perceptual quality of a summary. Normalized cut algorithm is employed to globally and optimally partition a video into clusters. 
A motion attention model based on human perception is employed to compute the perceptual quality of shots and clusters. The clusters, together with the computed attention values, form a temporal graph similar to Markov chain that inherently describes the evolution and perceptual importance of video clusters. In our application, the flow of a temporal graph is utilized to group similar clusters into scenes, while the attention values are used as guidelines to select appropriate sub-shots in scenes for summarization.", "title": "" }, { "docid": "f33f67c5b6160e2cb680c85f06abad4b", "text": "Device-to-device (D2D) communication is developed as a new paradigm to enhance network performance according to LTE and WiMAX advanced standards. The D2D communication may have dedicated spectrum (overlay) or shared spectrum (underlay). However, the allocated dedicated spectrum may not be effectively used in the overlay mode, while interference between the D2D users and cellular users cause impairments in the underlay mode. Can the resource allocation of a D2D system be optimized using the cognitive approach where the D2D users opportunistically access the underutilized radio spectrum? That is the focus of this paper. In this paper, the transmission rate of the D2D users is optimized while simultaneously satisfying five sets of constraints related to power, interference, and data rate, modeling D2D users as cognitive secondary users. Furthermore, a two-stage approach is considered to allocate the radio resources efficiently. A new adaptive subcarrier allocation scheme is designed first, and then, a novel power allocation scheme is developed utilizing geometric water-filling approach that provides optimal solution with low computation complexity for this nonlinear problem. Numerical results show that the proposed approach achieved significant performance enhancement than the existing schemes.", "title": "" }, { "docid": "4230dbcca1ba43e5ba701367fc68be16", "text": "Echinacea commonly called the Purple coneflowers, is a genus of nine species of herbaceous plants in the Family Asteraceae. Three of them are important in commerce, with the majority of wild harvest being E. angustifolia. It has been used for a variety of ailments, including toothache, coughs, colds, sore throats, snakebite, and as a painkiller. In the current study, in vitro inhibitory activity of Echinacea angustifolia essential oils were screened against Coliform spp, Pseudomonas spp, Saccharomyces cerevisiae (EC1118), Zygosaccharomyces bailii (DSM 70492) and Lactobacillus plantarum (DSM2601). Agar well diffusion assay was adopted for the study. E. angustifolia oils showed very weak antimicrobial activity against the microorganisms tested with diameter of inhibition zone not exceeding 3 mm. The highest activities were observed for Z. baillii and S. cereviceae at a concentration of 10 and 100 ppm respectively, while for the rest of the strains the diameter of inhibition zone were ranged 1 and 2.5 mm, except Coliform spp which was not affected by the presence of the essential oil at a concentration of 50 ppm. 
The low bacteriostatic effect of this plant essential oil against some of the most important causes of infections provides an exciting potential for the future, especially in the light of the shift away from commonly used antibiotics and the move towards more natural alternatives.", "title": "" }, { "docid": "0c14a63112a99c13ac12440386da8c22", "text": "Assessment of food intake has a wide range of applications in public health and life-style related chronic disease management. In this paper, we propose a real-time food recognition platform combined with daily activity and energy expenditure estimation. In the proposed method, food recognition is based on hierarchical classification using multiple visual cues, supported by efficient software implementation suitable for realtime mobile device execution. A Fischer Vector representation together with a set of linear classifiers are used to categorize food intake. Daily energy expenditure estimation is achieved by using the built-in inertial motion sensors of the mobile device. The performance of the vision-based food recognition algorithm is compared to the current state-of-the-art, showing improved accuracy and high computational efficiency suitable for realtime feedback. Detailed user studies have also been performed to demonstrate the practical value of the software environment.", "title": "" }, { "docid": "8c8ece47107bc1580e925e42d266ec87", "text": "How do brains shape social networks, and how do social ties shape the brain? Social networks are complex webs by which ideas spread among people. Brains comprise webs by which information is processed and transmitted among neural units. While brain activity and structure offer biological mechanisms for human behaviors, social networks offer external inducers or modulators of those behaviors. Together, these two axes represent fundamental contributors to human experience. Integrating foundational knowledge from social and developmental psychology and sociology on how individuals function within dyads, groups, and societies with recent advances in network neuroscience can offer new insights into both domains. Here, we use the example of how ideas and behaviors spread to illustrate the potential of multilayer network models.", "title": "" }, { "docid": "f22bb0a0d3618ce05802e883da1c772f", "text": "OBJECTIVE: Obesity has increased at an alarming rate in recent years and is now a worldwide health problem. We investigated the effects of long-term feeding with tea catechins, which are naturally occurring polyphenolic compounds widely consumed in Asian countries, on the development of obesity in C57BL/6J mice.DESIGN: We measured body weight, adipose tissue mass and liver fat content in mice fed diets containing either low-fat (5% triglyceride (TG)), high-fat (30% TG), or high-fat supplemented with 0.1–0.5% (w/w) tea catechins for 11 months. The β-oxidation activities and related mRNA levels were measured after 1 month of feeding.RESULTS: Supplementation with tea catechins resulted in a significant reduction of high-fat diet-induced body weight gain, visceral and liver fat accumulation, and the development of hyperinsulinemia and hyperleptinemia. Feeding with tea catechins for 1 month significantly increased acyl-CoA oxidase and medium chain acyl-CoA dehydrogenase mRNA expression as well as β-oxidation activity in the liver.CONCLUSION: The stimulation of hepatic lipid metabolism might be a factor responsible for the anti-obesity effects of tea catechins. 
The present results suggest that long-term consumption of tea catechins is beneficial for the suppression of diet-induced obesity, and it may reduce the risk of associated diseases including diabetes and coronary heart disease.", "title": "" }, { "docid": "601318db5ca75c76cd44da78db9f4147", "text": "Many accidents were happened because of fast driving, habitual working overtime or tired spirit. This paper presents a solution of remote warning for vehicles collision avoidance using vehicular communication. The development system integrates dedicated short range communication (DSRC) and global position system (GPS) with embedded system into a powerful remote warning system. To transmit the vehicular information and broadcast vehicle position; DSRC communication technology is adopt as the bridge. The proposed system is divided into two parts of the positioning and vehicular units in a vehicle. The positioning unit is used to provide the position and heading information from GPS module, and furthermore the vehicular unit is used to receive the break, throttle, and other signals via controller area network (CAN) interface connected to each mechanism. The mobile hardware are built with an embedded system using X86 processor in Linux system. A vehicle is communicated with other vehicles via DSRC in non-addressed protocol with wireless access in vehicular environments (WAVE) short message protocol. From the position data and vehicular information, this paper provided a conflict detection algorithm to do time separation and remote warning with error bubble consideration. And the warning information is on-line displayed in the screen. This system is able to enhance driver assistance service and realize critical safety by using vehicular information from the neighbor vehicles. Keywords—Dedicated short range communication, GPS, Control area network, Collision avoidance warning system.", "title": "" }, { "docid": "13cb08194cf7254932b49b7f7aff97d1", "text": "When there are many people who don't need to expect something more than the benefits to take, we will suggest you to have willing to reach all benefits. Be sure and surely do to take this computer vision using local binary patterns that gives the best reasons to read. When you really need to get the reason why, this computer vision using local binary patterns book will probably make you feel curious.", "title": "" }, { "docid": "628840e66a3ea91e75856b7ae43cb9bb", "text": "Optimal shape design of structural elements based on boundary variations results in final designs that are topologically equivalent to the initial choice of design, and general, stable computational schemes for this approach often require some kind of remeshing of the finite element approximation of the analysis problem. This paper presents a methodology for optimal shape design where both these drawbacks can be avoided. The method is related to modern production techniques and consists of computing the optimal distribution in space of an anisotropic material that is constructed by introducing an infimum of periodically distributed small holes in a given homogeneous, i~otropic material, with the requirement that the resulting structure can carry the given loads as well as satisfy other design requirements. The computation of effective material properties for the anisotropic material is carried out using the method of homogenization. 
Computational results are presented and compared with results obtained by boundary variations.", "title": "" }, { "docid": "3cde70842ee80663cbdc04db6a871d46", "text": "Artificial perception, in the context of autonomous driving, is the process by which an intelligent system translates sensory data into an effective model of the environment surrounding a vehicle. In this paper, and considering data from a 3D-LIDAR mounted onboard an intelligent vehicle, a 3D perception system based on voxels and planes is proposed for ground modeling and obstacle detection in urban environments. The system, which incorporates time-dependent data, is composed of two main modules: (i) an effective ground surface estimation using a piecewise plane fitting algorithm and RANSAC-method, and (ii) a voxel-grid model for static and moving obstacles detection using discriminative analysis and ego-motion information. This perception system has direct application in safety systems for intelligent vehicles, particularly in collision avoidance and vulnerable road users detection, namely pedestrians and cyclists. Experiments, using point-cloud data from a Velodyne LIDAR and localization data from an Inertial Navigation System were conducted for both a quantitative and a qualitative assessment of the static/moving obstacle detection module and for the surface estimation approach. Reported results, from experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in urban scenarios.", "title": "" }, { "docid": "cfff07dbbc363a3e64b94648e19f2e4b", "text": "Nitrogen (N) starvation and excess have distinct effects on N uptake and metabolism in poplars, but the global transcriptomic changes underlying morphological and physiological acclimation to altered N availability are unknown. We found that N starvation stimulated the fine root length and surface area by 54 and 49%, respectively, decreased the net photosynthetic rate by 15% and reduced the concentrations of NH4+, NO3(-) and total free amino acids in the roots and leaves of Populus simonii Carr. in comparison with normal N supply, whereas N excess had the opposite effect in most cases. Global transcriptome analysis of roots and leaves elucidated the specific molecular responses to N starvation and excess. Under N starvation and excess, gene ontology (GO) terms related to ion transport and response to auxin stimulus were enriched in roots, whereas the GO term for response to abscisic acid stimulus was overrepresented in leaves. Common GO terms for all N treatments in roots and leaves were related to development, N metabolism, response to stress and hormone stimulus. Approximately 30-40% of the differentially expressed genes formed a transcriptomic regulatory network under each condition. These results suggest that global transcriptomic reprogramming plays a key role in the morphological and physiological acclimation of poplar roots and leaves to N starvation and excess.", "title": "" }, { "docid": "890da17049756c2da578d31fd3f06f90", "text": "A novel and compact planar multiband multiple-input-multiple-output (MIMO) antenna is presented. The proposed antenna is composed of two symmetrical radiating elements connected by neutralizing line to cancel the reactive coupling. The radiating element is designed for different frequencies operating in GSM 900 MHz, DCS 1800 MHz, LTE-E 2300 MHz, and LTE-D 2600 MHz, which consists of a folded monopole and a beveled rectangular metal patch. 
The presented antenna is fed by using 50-Ω coplanar waveguide (CPW) transmission lines. Four slits are etched into the ground plane for reducing the mutual coupling. The measured results show that the proposed antenna has good impedance matching, isolation, peak gain, and radiation patterns. The radiation efficiency and diversity gain (DG) in the servicing frequencies are pretty well. In the Ericsson indoor experiment, three kinds of antenna feed systems are discussed. The proposed antenna shows good performance in Long Term Evolution (LTE) reference signal receiving power (RSRP), download speed, and upload speed.", "title": "" }, { "docid": "e07b31a980b128673c7581276c20b706", "text": "Detection of events using voluntarily generated content in microblogs has been the objective of numerous recent studies. One essential challenge tackled in these studies is estimating the locations of events. In this paper, we review the state-of-the-art location estimation techniques used in the localization of events detected in microblogs, particularly in Twitter, which is one of the most popular microblogging platforms worldwide. We analyze these techniques with respect to the targeted event type, granularity of estimated locations, location-related features selected as sources of spatial evidence, and the method used to make aggregate decisions based on the extracted evidence. We discuss the strengths and advantages of alternative solutions to various problems related to location estimation, as well as their preconditions and limitations. We examine the most widely used evaluation methods to analyze the accuracy of estimations and present the results reported in the literature. We also discuss our findings and highlight important research challenges that may need further attention.", "title": "" }, { "docid": "0747f6d9bf71171f61d4cb8f6eb6aa62", "text": "Faculty in the College of Engineering at the University of Alabama developed a multidisciplinary course in applied spectral analysis that was first offered in 1996. The course is aimed at juniors majoring in electrical, mechanical, industrial, or aerospace engineering. No background in signal processing or Fourier analysis is assumed; the requisite fundamentals are covered early in the course and followed by a series of laboratories in which the fundamental concepts are applied. In this paper, a laboratory module on fault detection in rolling element bearings is presented. This module is one of two laboratory modules focusing on machine condition monitoring applications that were developed for this course. Background on the basic operational characteristics of rolling element bearings is presented, and formulas given for the calculation of the characteristic fault frequencies. The shortcomings of conventional vibration spectral analysis for the detection of bearing faults is examined in the context of a synthetic vibration signal that students generate in MATLAB. This signal shares several key features of vibration signatures measured on bearing housings. Envelope analysis and the connection between bearing fault signatures and amplitude modulation/demodulation is explained. Finally, a graphically driven software utility (a set of MATLAB m-files) is introduced. This software allows students to explore envelope analysis using measured data or the synthetic signal that they generated. 
The software utility and the material presented in this paper constitute an instructional module on bearing fault detection that can be used as a stand-alone tutorial or incorporated into a course.", "title": "" }, { "docid": "ca655b741316e8c65b6b7590833396e1", "text": "• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" } ]
scidocsrr
83737b9cbf2db088ddf872761945a48a
Cross-Lingual Word Representations via Spectral Graph Embeddings
[ { "docid": "43c39bc72a63acdc24a6683e69e34791", "text": "We introduce Trans-gram, a simple and computationally-efficient method to simultaneously learn and align wordembeddings for a variety of languages, using only monolingual data and a smaller set of sentence-aligned data. We use our new method to compute aligned wordembeddings for twenty-one languages using English as a pivot language. We show that some linguistic features are aligned across languages for which we do not have aligned data, even though those properties do not exist in the pivot language. We also achieve state of the art results on standard cross-lingual text classification and word translation tasks.", "title": "" } ]
[ { "docid": "37f2ed531daf16b41eb99f21cc065dbe", "text": "This paper combines three exploratory data analysis methods, principal component methods, hierarchical clustering and partitioning, to enrich the description of the data. Principal component methods are used as preprocessing step for the clustering in order to denoise the data, transform categorical data in continuous ones or balanced groups of variables. The principal component representation is also used to visualize the hierarchical tree and/or the partition in a 3D-map which allows to better understand the data. The proposed methodology is available in the HCPC (Hierarchical Clustering on Principal Components) function of the FactoMineR package.", "title": "" }, { "docid": "d5e54133fa5166f0e72884bd3501bbfb", "text": "In order to explore the characteristics of the evolution behavior of the time-varying relationships between multivariate time series, this paper proposes an algorithm to transfer this evolution process to a complex network. We take the causality patterns as nodes and the succeeding sequence relations between patterns as edges. We used four time series as sample data. The results of the analysis reveal some statistical evidences that the causalities between time series is in a dynamic process. It implicates that stationary long-term causalities are not suitable for some special situations. Some short-term causalities that our model recognized can be referenced to the dynamic adjustment of the decisions. The results also show that weighted degree of the nodes obeys power law distribution. This implies that a few types of causality patterns play a major role in the process of the transition and that international crude oil market is statistically significantly not random. The clustering effect appears in the transition process and different clusters have different transition characteristics which provide probability information for predicting the evolution of the causality. The approach presents a potential to analyze multivariate time series and provides important information for investors and decision makers.", "title": "" }, { "docid": "9536381bdddac7aeca9c60b8876d56d2", "text": "This study uses augmented reality (AR) technology to implement an AR-learning system for English vocabulary learning. Although previous studies have indicated that multimedia courseware enhances learning effectiveness, some important issues may cause negative effects. This study investigates learners' satisfaction and behavioral intention as well as the effectiveness of the AR-learning system. The results of this study showed that system quality was a critical factor affecting perceived satisfaction, perceived usefulness, and AR-learning effectiveness. Perceived self-efficacy also affected perceived satisfaction and perceived usefulness. On the other hand, multimedia instruction was a minor factor affecting perceived usefulness and AR-learning effectiveness. We verified that learners' behavioral intention was affected by perceived satisfaction and perceived usefulness of the AR-learning system. Furthermore, the design of system function and operation process must be more straightforward for learners when adopting new technology in the learning system.", "title": "" }, { "docid": "6dc5fc0e970c4ffe418805d5a1159500", "text": "Due to the high costs of live research, performance simulation has become a widely accepted method of assessment for the quality of proposed solutions in this field. 
Additionally, being able to simulate the behavior of the future occupants of a residential building can be very useful since it can support both design-time and run-time decisions leading to reduced energy consumption through, e.g., the design of model predictive controllers that incorporate user behavior predictions. In this work, we provide a framework for simulating user behavior in residential buildings. In fact, we are interested in how to deal with all user behavior aspects so that these computer simulations can provide a realistic framework for testing alternative policies for energy saving.", "title": "" }, { "docid": "771339711243897c18d565769e758a74", "text": "This paper presents Memory Augmented Policy Optimization (MAPO): a novel policy optimization formulation that incorporates a memory buffer of promising trajectories to reduce the variance of policy gradient estimates for deterministic environments with discrete actions. The formulation expresses the expected return objective as a weighted sum of two terms: an expectation over a memory of trajectories with high rewards, and a separate expectation over the trajectories outside the memory. We propose 3 techniques to make an efficient training algorithm for MAPO: (1) distributed sampling from inside and outside memory with an actor-learner architecture; (2) a marginal likelihood constraint over the memory to accelerate training; (3) systematic exploration to discover high reward trajectories. MAPO improves the sample efficiency and robustness of policy gradient, especially on tasks with a sparse reward. We evaluate MAPO on weakly supervised program synthesis from natural language with an emphasis on generalization. On the WIKITABLEQUESTIONS benchmark we improve the state-of-the-art by 2.5%, achieving an accuracy of 46.2%, and on the WIKISQL benchmark, MAPO achieves an accuracy of 74.9% with only weak supervision, outperforming several strong baselines with full supervision. Our code is open sourced at https://github.com/crazydonkey200/neural-symbolic-machines.", "title": "" }, { "docid": "f0db38ba0ff29c49c6c8014fff4225c9", "text": "The requirements engineering phase of software development projects is characterised by the intensity and importance of communication activities. During this phase, the various stakeholders must be able to communicate their requirements to the analysts, and the analysts need to be able to communicate the specifications they generate back to the stakeholders for validation. This paper describes a field investigation into the problems of communication between disparate communities involved in the requirements specification activities. The results of this study are discussed in terms of their relation to three major communication barriers : 1) ineffectiveness of the current communication channels; 2) restrictions on expressiveness imposed by notations; and 3) social and organisational barriers. The results confirm that organisational and social issues have great influence on the effectiveness of communication. They also show that in general, endusers find the notations used by software practitioners to model their requirements difficult to understand and validate.", "title": "" }, { "docid": "9ed2f6172271c6ccdba2ab16e2d6b3d6", "text": "An important problem in analyzing big data is subspace clustering, i.e., to represent a collection of points in a high-dimensional space via the union of low-dimensional subspaces. 
Sparse Subspace Clustering (SSC) and LowRank Representation (LRR) are the state-of-the-art methods for this task. These two methods are fundamentally similar in that both are based on convex optimization exploiting the intuition of “Self-Expressiveness”. The main difference is that SSC minimizes the vector `1 norm of the representation matrix to induce sparsity while LRR minimizes the nuclear norm (aka trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously sparse and low-rank, we propose a new algorithm, termed Low-Rank Sparse Subspace Clustering (LRSSC), by combining SSC and LRR, and develop theoretical guarantees of the success of the algorithm. The results reveal interesting insights into the strengths and weaknesses of SSC and LRR, and demonstrate how LRSSC can take advantage of both methods in preserving the “Self-Expressiveness Property” and “Graph Connectivity” at the same time. A byproduct of our analysis is that it also expands the theoretical guarantee of SSC to handle cases when the subspaces have arbitrarily small canonical angles but are “nearly independent”.", "title": "" }, { "docid": "e6d7399b88c57aebca0a43662d7fd855", "text": "UNLABELLED\nAlthough the brain relies on auditory information to calibrate vocal behavior, the neural substrates of vocal learning remain unclear. Here we demonstrate that lesions of the dopaminergic inputs to a basal ganglia nucleus in a songbird species (Bengalese finches, Lonchura striata var. domestica) greatly reduced the magnitude of vocal learning driven by disruptive auditory feedback in a negative reinforcement task. These lesions produced no measureable effects on the quality of vocal performance or the amount of song produced. Our results suggest that dopaminergic inputs to the basal ganglia selectively mediate reinforcement-driven vocal plasticity. In contrast, dopaminergic lesions produced no measurable effects on the birds' ability to restore song acoustics to baseline following the cessation of reinforcement training, suggesting that different forms of vocal plasticity may use different neural mechanisms.\n\n\nSIGNIFICANCE STATEMENT\nDuring skill learning, the brain relies on sensory feedback to improve motor performance. However, the neural basis of sensorimotor learning is poorly understood. Here, we investigate the role of the neurotransmitter dopamine in regulating vocal learning in the Bengalese finch, a songbird with an extremely precise singing behavior that can nevertheless be reshaped dramatically by auditory feedback. Our findings show that reduction of dopamine inputs to a region of the songbird basal ganglia greatly impairs vocal learning but has no detectable effect on vocal performance. These results suggest a specific role for dopamine in regulating vocal plasticity.", "title": "" }, { "docid": "38570075c31812866646d47d25667a49", "text": "Mercator is a program that uses hop-limited probes—the same primitive used in traceroute—to infer an Internet map. It uses informed random address probing to carefully exploring the IP address space when determining router adjacencies, uses source-route ca p ble routers wherever possible to enhance the fidelity of the resulting ma p, and employs novel mechanisms for resolvingaliases(interfaces belonging to the same router). 
This paper describes the design of these heuri stics and our experiences with Mercator, and presents some preliminary a nalysis of the resulting Internet map.", "title": "" }, { "docid": "3ad124875f073ff961aaf61af2832815", "text": "EVERY HUMAN CULTURE HAS SOME FORM OF MUSIC WITH A BEAT\na perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This \"action simulation for auditory prediction\" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.", "title": "" }, { "docid": "2c1de0ee482b3563c6b0b49bfdbbe508", "text": "The paper summarizes our research in the area of unsupervised categorization of Wikipedia articles. As a practical result of our research, we present an application of spectral clustering algorithm used for grouping Wikipedia search results. The main contribution of the paper is a representation method for Wikipedia articles that has been based on combination of words and links and used for categoriation of search results in this repository. We evaluate the proposed approach with Primary Component projections and show, on the test data, how usage of cosine transformation to create combined representations influence data variability. On sample test datasets, we also show how combined representation improves the data separation that increases overall results of data categorization. To implement the system, we review the main spectral clustering methods and we test their usability for text categorization. We give a brief description of the system architecture that groups online Wikipedia articles retrieved with user-specified keywords. Using the system, we show how clustering increases information retrieval effectiveness for Wikipedia data repository.", "title": "" }, { "docid": "16315b2fe950486ecc80cd2b055f534d", "text": "The aim of this meta-analysis was to quantify the effects of high-intensity interval training (HIIT) on markers of glucose regulation and insulin resistance compared with control conditions (CON) or continuous training (CT). 
Databases were searched for HIIT interventions based upon the inclusion criteria: training ≥2 weeks, adult participants and outcome measurements that included insulin resistance, fasting glucose, HbA1c or fasting insulin. Dual interventions and participants with type 1 diabetes were excluded. Fifty studies were included. There was a reduction in insulin resistance following HIIT compared with both CON and CT (HIIT vs. CON: standardized mean difference [SMD] = -0.49, confidence intervals [CIs] -0.87 to -0.12, P = 0.009; CT: SMD = -0.35, -0.68 to -0.02, P = 0.036). Compared with CON, HbA1c decreased by 0.19% (-0.36 to -0.03, P = 0.021) and body weight decreased by 1.3 kg (-1.9 to -0.7, P < 0.001). There were no statistically significant differences between groups in other outcomes overall. However, participants at risk of or with type 2 diabetes experienced reductions in fasting glucose (-0.92 mmol L(-1), -1.22 to -0.62, P < 0.001) compared with CON. HIIT appears effective at improving metabolic health, particularly in those at risk of or with type 2 diabetes. Larger randomized controlled trials of longer duration than those included in this meta-analysis are required to confirm these results.", "title": "" }, { "docid": "cd68f1e50052709d85cabf55bb1764df", "text": "Multi-label classification is one of the most challenging tasks in the computer vision community, owing to different composition and interaction (e.g. partial visibility or occlusion) between objects in multi-label images. Intuitively, some objects usually co-occur with some specific scenes, e.g. the sofa often appears in a living room. Therefore, the scene of a given image may provides informative cues for identifying those embedded objects. In this paper, we propose a novel scene-aware deep framework for addressing the challenging multi-label classification task. In particular, we incorporate two sub-networks that are pre-trained for different tasks (i.e. object classification and scene classification) into a unified framework, so that informative scene-aware cues can be leveraged for benefiting multi-label object classification. In addition, we also present a novel one vs. all multiple-cross-entropy (MCE) loss for optimizing the proposed scene-aware deep framework by independently penalizing the classification error for each label. The proposed method can be learned in an end-to-end manner and extensive experimental results on Pascal VOC 2007 and MS COCO demonstrate that our approach is able to make a noticeable improvement for the multi-label classification task.", "title": "" }, { "docid": "edd415b34d60495c4da0ffc9e714acf3", "text": "Nearly two decades ago, Ward Cunningham introduced us to the term \"technical debt\" as a means of describing the long term costs associated with a suboptimal software design and implementation. For most programs, especially those with a large legacy code baseline, achieving zero absolute debt is an unnecessary and unrealistic goal. It is important to recall that a primary reason for managing and eliminating debt is to drive down maintenance costs and to reduce defects. A sufficiently low, manageable level of debt can minimize the long-term impact, i.e., \"low debt interest payments\". In this article, we define an approach for establishing program specific thresholds to define manageable levels of technical debt.", "title": "" }, { "docid": "0c01132904f2c580884af1391069addd", "text": "BACKGROUND\nThe inclusion of qualitative studies in systematic reviews poses methodological challenges. 
This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches.\n\n\nMETHODS\nA systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis.\n\n\nRESULTS\nThe processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. Differences between methods lie in the way they dealt with study quality and heterogeneity.\n\n\nCONCLUSION\nOn the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality.", "title": "" }, { "docid": "6ce429d7974c9593f4323ec306488b1f", "text": "The encoder-decoder framework for neural machine translation (NMT) has been shown effective in large data scenarios, but is much less effective for low-resource languages. We present a transfer learning method that significantly improves BLEU scores across a range of low-resource languages. Our key idea is to first train a high-resource language pair (the parent model), then transfer some of the learned parameters to the low-resource pair (the child model) to initialize and constrain training. Using our transfer learning method we improve baseline NMT models by an average of 5.6 BLEU on four low-resource language pairs. Ensembling and unknown word replacement add another 2 BLEU which brings the NMT performance on low-resource machine translation close to a strong syntax based machine translation (SBMT) system, exceeding its performance on one language pair. Additionally, using the transfer learning model for re-scoring, we can improve the SBMT system by an average of 1.3 BLEU, improving the state-of-the-art on low-resource machine translation.", "title": "" }, { "docid": "d3afec9fcaabe6db91aa433370d0b4f1", "text": "Low-rank modeling generally refers to a class of methods that solves problems by representing variables of interest as low-rank matrices. It has achieved great success in various fields including computer vision, data mining, signal processing, and bioinformatics. Recently, much progress has been made in theories, algorithms, and applications of low-rank modeling, such as exact low-rank matrix recovery via convex programming and matrix completion applied to collaborative filtering. These advances have brought more and more attention to this topic. In this article, we review the recent advances of low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. 
Finally, we conclude this article with some discussions.", "title": "" }, { "docid": "6d55978aa80f177f6a859a55380ffed8", "text": "This paper investigates the effect of lowering the supply and threshold voltages on the energy efficiency of CMOS circuits. Using a first-order model of the energy and delay of a CMOS circuit, we show that lowering the supply and threshold voltage is generally advantageous, especially when the transistors are velocity saturated and the nodes have a high activity factor. In fact, for modern submicron technologies, this simple analysis suggests optimal energy efficiency at supply voltages under 0.5 V. Other process and circuit parameters have almost no effect on this optimal operating point. If there is some uncertainty in the value of the threshold or supply voltage, however, the power advantage of this very low voltage operation diminishes. Therefore, unless active feedback is used to control the uncertainty, in the future the supply and threshold voltage will not decrease drastically, but rather will continue to scale down to maintain constant electric fields.", "title": "" }, { "docid": "e2a5a97b60e01ac4ff6367989ff89756", "text": "This paper presents a half-select free 9T SRAM to facilitate reliable SRAM operation in the near-threshold voltage region. In the proposed SRAM, the half-select disturbance, which results in instable operations in 6T SRAM cell, can be completely eliminated by adopting cross-access selection of row and column word-lines. To minimize the area overhead of the half-select free 9T SRAM cell, a bit-line and access transistors between the adjacent cells are shared using a symmetric shared node that connects two cells. In addition, a selective pre-charge scheme considering the preferably isolated unselected cells has also been proposed to reduce the dynamic power consumption. The simulation results with the most probable failure point method show that the proposed 9T SRAM cell has a minimum operating voltage (VMIN) of 0.45 V among the half-select free SRAM cells. The test chip with 65-nm CMOS technology shows that the proposed 9T SRAM is fully operated at 0.35 V and 25 °C condition. Under the supply voltages between 0.35 and 1.1 V, the 4-kb SRAM macro is operated between 640 kHz and 560 MHz, respectively. The proposed 9T SRAM shows the best voltage scalability without any assist circuit while maintaining small macro area and fast operation frequency.", "title": "" } ]
scidocsrr
21112bc4536e1d51ae245f45126d43e0
Linked data partitioning for RDF processing on Apache Spark
[ { "docid": "576aa36956f37b491382b0bdd91f4bea", "text": "The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.", "title": "" }, { "docid": "efb124a26b0cdc9b022975dd83ec76c8", "text": "Apache Spark is an open-source cluster computing framework for big data processing. It has emerged as the next generation big data processing engine, overtaking Hadoop MapReduce which helped ignite the big data revolution. Spark maintains MapReduce's linear scalability and fault tolerance, but extends it in a few important ways: it is much faster (100 times faster for certain applications), much easier to program in due to its rich APIs in Python, Java, Scala (and shortly R), and its core data abstraction, the distributed data frame, and it goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning, and graph processing. This tutorial will provide an accessible introduction to Spark and its potential to revolutionize academic and commercial data science practices.", "title": "" } ]
[ { "docid": "8ba192226a3c3a4f52ca36587396e85c", "text": "For many years I have been engaged in psychotherapy with individuals in distress. In recent years I have found myself increasingly concerned with the process of abstracting from that experience the general principles which appear to be involved in it. I have endeavored to discover any orderliness, any unity which seems to inhere in the subtle, complex tissue of interpersonal relationship in which I have so constantly been immersed in therapeutic work. One of the current products of this concern is an attempt to state, in formal terms, a theory of psychotherapy, of personality, and of interpersonal relationships which will encompass and contain the phenomena of my experience. What I wish to do in this paper is to take one very small segment of that theory, spell it out more completely, and explore its meaning and usefulness.", "title": "" }, { "docid": "ec369ae7aa038ab688173a7583c51a22", "text": "OBJECTIVE\nTo examine longitudinal associations of parental report of household food availability and parent intakes of fruits, vegetables and dairy foods with adolescent intakes of the same foods. This study expands upon the limited research of longitudinal studies examining the role of parents and household food availability in adolescent dietary intakes.\n\n\nDESIGN\nLongitudinal study. Project EAT-II followed an ethnically and socio-economically diverse sample of adolescents from 1999 (time 1) to 2004 (time 2). In addition to the Project EAT survey, adolescents completed the Youth Adolescent Food-Frequency Questionnaire in both time periods, and parents of adolescents completed a telephone survey at time 1. General linear modelling was used to examine the relationship between parent intake and home availability and adolescent intake, adjusting for time 1 adolescent intakes. Associations were examined separately for the high school and young adult cohorts and separately for males and females in combined cohorts.\n\n\nSUBJECTS/SETTING\nThe sample included 509 pairs of parents/guardians and adolescents.\n\n\nRESULTS\nVegetables served at dinner significantly predicted adolescent intakes of vegetables for males (P = 0.037), females (P = 0.009), high school (P = 0.033) and young adults (P = 0.05) at 5-year follow-up. Among young adults, serving milk at dinner predicted dairy intake (P = 0.002). Time 1 parental intakes significantly predicted intakes of young adults for fruit (P = 0.044), vegetables (P = 0.041) and dairy foods (P = 0.008). Parental intake predicted intake of dairy for females (P = 0.02).\n\n\nCONCLUSIONS\nThe findings suggest the importance of providing parents of adolescents with knowledge and skills to enhance the home food environment and improve their own eating behaviours.", "title": "" }, { "docid": "3882687dfa4f053d6ae128cf09bb8994", "text": "In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. 
Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from low-level requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulations as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower layer filters, and the top-down network handles the selection and integration of contextual information and lowlevel features. The proposed TDM architecture provides a significant boost on the COCO benchmark, achieving 28.6 AP for VGG16 and 35.2 AP for ResNet101 networks. Using InceptionResNetv2, our TDM model achieves 37.3 AP, which is the best single-model performance to-date on the COCO testdev benchmark, without any bells and whistles.", "title": "" }, { "docid": "ad808ef13f173eda961b6157a766f1a9", "text": "Written text often provides sufficient clues to identify the author, their gender, age, and other important attributes. Consequently, the authorship of training and evaluation corpora can have unforeseen impacts, including differing model performance for different user groups, as well as privacy implications. In this paper, we propose an approach to explicitly obscure important author characteristics at training time, such that representations learned are invariant to these attributes. Evaluating on two tasks, we show that this leads to increased privacy in the learned representations, as well as more robust models to varying evaluation conditions, including out-of-domain corpora.", "title": "" }, { "docid": "d50b6e7c130080eba98bf4437c333f16", "text": "In this paper we provide a brief review of how out-of-sample methods can be used to construct tests that evaluate a time-series model's ability to predict. We focus on the role that parameter estimation plays in constructing asymptotically valid tests of predictive ability. We illustrate why forecasts and forecast errors that depend upon estimated parameters may have statistical properties that differ from those of their population counterparts. We explain how to conduct asymptotic inference, taking due account of dependence on estimated parameters.", "title": "" }, { "docid": "91affcd02ba981189eeaf25d94657276", "text": "In this paper, we develop a 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using Deep Convolutional Neural Networks (CNN). Our models are trained end-to-end from scratch using the ACD Challenge 2017 dataset comprising of 100 studies, each containing Cardiac MR images in End Diastole and End Systole phase. We show that both our segmentation models achieve near state-of-the-art performance scores in terms of distance metrics and have convincing accuracy in terms of clinical parameters. A comparative analysis is provided by introducing a novel dice loss function and its combination with cross entropy loss. 
By exploring different network structures and comprehensive experiments, we discuss several key insights to obtain optimal model performance, which also is central to the theme of this challenge.", "title": "" }, { "docid": "121fc3a009e8ce2938f822ba437bdaa3", "text": "Due to an increased awareness and significant environmental pressures from various stakeholders, companies have begun to realize the significance of incorporating green practices into their daily activities. This paper proposes a framework using Fuzzy TOPSIS to select green suppliers for a Brazilian electronics company; our framework is built on the criteria of green supply chain management (GSCM) practices. An empirical analysis is made, and the data are collected from a set of 12 available suppliers. We use a fuzzy TOPSIS approach to rank the suppliers, and the results of the proposed framework are compared with the ranks obtained by both the geometric mean and the graded mean methods of fuzzy TOPSIS methodology. Then a Spearman rank correlation coefficient is used to find the statistical difference between the ranks obtained by the three methods. Finally, a sensitivity analysis has been performed to examine the influence of the preferences given by the decision makers for the chosen GSCM practices on the selection of green suppliers. Results indicate that the four dominant criteria are Commitment of senior management to GSCM; Product designs that reduce, reuse, recycle, or reclaim materials, components, or energy; Compliance with legal environmental requirements and auditing programs; and Product designs that avoid or reduce toxic or hazardous material use. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5125f5099f77a32ff9a1f2054ef1e664", "text": "Human activities are inherently translation invariant and hierarchical. Human activity recognition (HAR), a field that has garnered a lot of attention in recent years due to its high demand in various application domains, makes use of time-series sensor data to infer activities. In this paper, a deep convolutional neural network (convnet) is proposed to perform efficient and effective HAR using smartphone sensors by exploiting the inherent characteristics of activities and 1D time-series signals, at the same time providing a way to automatically and data-adaptively extract robust features from raw data. Experiments show that convnets indeed derive relevant and more complex features with every additional layer, although difference of feature complexity level decreases with every additional layer. A wider time span of temporal local correlation can be exploited (1x9~1x14) and a low pooling size (1x2~1x3) is shown to be beneficial. Convnets also achieved an almost perfect classification on moving activities, especially very similar ones which were previously perceived to be very difficult to classify. Lastly, convnets outperform other state-of-the-art data mining techniques in HAR for the benchmark dataset collected from 30 volunteer subjects, achieving an overall performance of 94.79% on the test set with raw sensor data, and 95.75% with additional information of temporal fast Fourier transform of the HAR data set.", "title": "" }, { "docid": "7a10f559d9bbf1b6853ff6b89f5857f7", "text": "Despite the much-ballyhooed increase in outsourcing, most companies are in do-it-yourself mode for the bulk of their processes, in large part because there's no way to compare outside organizations' capabilities with those of internal functions. 
Given the lack of comparability, it's almost surprising that anyone outsources today. But it's not surprising that cost is by far companies' primary criterion for evaluating outsourcers or that many companies are dissatisfied with their outsourcing relationships. A new world is coming, says the author, and it will lead to dramatic changes in the shape and structure of corporations. A broad set of process standards will soon make it easy to determine whether a business capability can be improved by outsourcing it. Such standards will also help businesses compare service providers and evaluate the costs versus the benefits of outsourcing. Eventually these costs and benefits will be so visible to buyers that outsourced processes will become a commodity, and prices will drop significantly. The low costs and low risk of outsourcing will accelerate the flow of jobs offshore, force companies to reassess their strategies, and change the basis of competition. The speed with which some businesses have already adopted process standards suggests that many previously unscrutinized areas are ripe for change. In the field of technology, for instance, the Carnegie Mellon Software Engineering Institute has developed a global standard for software development processes, called the Capability Maturity Model (CMM). For companies that don't have process standards in place, it makes sense for them to create standards by working with customers, competitors, software providers, businesses that processes may be outsourced to, and objective researchers and standard-setters. Setting standards is likely to lead to the improvement of both internal and outsourced processes.", "title": "" }, { "docid": "8564762ca6de73d72236f94bc5fe0a7a", "text": "The current work examines the phenomenon of Virtual Interpersonal Touch (VIT), people touching one another via force-feedback haptic devices. As collaborative virtual environments become utilized more effectively, it is only natural that interactants will have the ability to touch one another. In the current work, we used relatively basic devices to begin to explore the expression of emotion through VIT. In Experiment 1, participants utilized a 2 DOF force-feedback joystick to express seven emotions. We examined various dimensions of the forces generated and subjective ratings of the difficulty of expressing those emotions. In Experiment 2, a separate group of participants attempted to recognize the recordings of emotions generated in Experiment 1. In Experiment 3, pairs of participants attempted to communicate the seven emotions using physical handshakes. Results indicated that humans were above chance when recognizing emotions via VIT, but not as accurate as people expressing emotions through non-mediated handshakes. We discuss a theoretical framework for understanding emotions expressed through touch as well as the implications of the current findings for the utilization of VIT in human computer interaction. Virtual Interpersonal Touch 3 Virtual Interpersonal Touch: Expressing and Recognizing Emotions through Haptic Devices There are many reasons to support the development of collaborative virtual environments (Lanier, 2001). One major criticism of collaborative virtual environments, however, is that they do not provide emotional warmth and nonverbal intimacy (Mehrabian, 1967; Sproull & Kiesler, 1986). 
In the current work, we empirically explore the augmentation of collaborative virtual environments with simple networked haptic devices to allow for the transmission of emotion through virtual interpersonal touch (VIT). EMOTION IN SOCIAL INTERACTION Interpersonal communication is largely non-verbal (Argyle, 1988), and one of the primary purposes of nonverbal behavior is to communicate subtleties of emotional states between individuals. Clearly, if social interaction mediated by virtual reality and other digital communication systems is to be successful, it will be necessary to allow for a full range of emotional expressions via a number of communication channels. In face-to-face communication, we express emotion primarily through facial expressions, voice, and through touch. While emotion is also communicated through other nonverbal gestures such as posture and hand signals (Cassell & Thorisson, in press; Collier, 1985), in the current review we focus on emotions transmitted via face, voice and touch. In a review of the emotion literature, Ortony and Turner (1990) discuss the concept of basic emotions. These fundamental emotions (e.g., fear) are the building blocks of other more complex emotions (e.g., jealousy). Furthermore, many people argue that these emotions are innate and universal across cultures (Plutchik, 2001). In terms of defining the set of basic emotions, previous work has provided very disparate sets of such emotions. For example, Watson (1930) has limited his list to “hardwired” emotions such as fear, love, and rage. On the other hand, Ekman & Friesen (1975) have limited their list to those discernable through facial movements such as anger, disgust, fear, joy, sadness, and surprise. The psychophysiology literature adds to our understanding of emotions by suggesting a fundamental biphasic model (Bradley, 2000). In other words, emotions can be thought of as variations on two axes: hedonic valence and intensity. Pleasurable emotions have high hedonic valences, while negative emotions have low hedonic valences. This line of research suggests that while emotions may appear complex, much of the variation may nonetheless be mapped onto a two-dimensional scale. This notion also dovetails with research in embodied cognition that has shown that human language is spatially organized (Richardson, Spivey, Edelman, & Naples, 2001). For example, certain words are judged to be more “horizontal” while other words are judged to be more “vertical”. In the current work, we were not concerned predominantly with what constitutes a basic or universal emotion. Instead, we attempted to identify emotions that could be transmitted through virtual touch, and provide an initial framework for classifying and interpreting those digital haptic emotions. To this end, we reviewed theoretical frameworks that have attempted to accomplish this goal with other nonverbal behaviors—most notably facial expressions and paralinguistics. Facial Expressions Research in facial expressions has received much attention from social scientists for the past fifty years. Some researchers argue that the face is a portal to one’s internal mental state (Ekman & Friesen 1978; Izard, 1971). These scholars argue that when an emotion occurs, a series of biological events follow that produce changes in a person—one of those manifestations is movement in facial muscles.
Moreover, these changes in facial expressions are also correlated with other physiological changes such as heart rate or blood pressure (Ekman & Friesen, 1976). Alternatively, other researchers argue that the correspondence of facial expressions to actual emotion is not as high as many think. For example, Fridland (1994) believes that people use facial expressions as a tool to strategically elicit behaviors from others or to accomplish social goals in interaction. Similarly, other researchers argue that not all emotions have corresponding facial expressions (Cacioppo et al., 1997). Nonetheless, most scholars would agree that there is some value to examining facial expressions of another if one’s goal is to gain an understanding of that person’s current mental state. Ekman’s groundbreaking work on emotions has provided tools to begin forming dimensions on which to classify his set of six basic emotions (Ekman & Friesen, 1975). Figure 1 provides a framework for the facial classifications developed by those scholars.", "title": "" }, { "docid": "5e8154a99b4b0cc544cab604b680ebd2", "text": "This work presents performance of robust wearable antennas intended to operate in Wireless Body Area Networks (W-BAN) in UHF, TETRAPOL communication band, 380-400 MHz. We propose a Planar Inverted F Antenna (PIFA) as reliable antenna type for UHF W-BAN applications. In order to satisfy the robustness requirements of the UHF band, both from communication and mechanical aspect, a new technology for building these antennas was proposed. The antennas are built out of flexible conductive sheets encapsulated inside a silicone based elastomer, Polydimethylsiloxane (PDMS). The proposed antennas are resistive to washing, bending and perforating. From the communication point of view, opting for a PIFA antenna type we solve the problem of coupling to the wearer and thus improve the overall communication performance of the antenna. Several different tests and comparisons were performed in order to check the stability of the proposed antennas when they are placed on the wearer or left in a common everyday environ- ment, on the ground, table etc. S11 deviations are observed and compared with the commercially available wearable antennas. As a final check, the antennas were tested in the frame of an existing UHF TETRAPOL communication system. All the measurements were performed in a real university campus scenario, showing reliable and good performance of the proposed PIFA antennas.", "title": "" }, { "docid": "213daea0f909e9731aa77e001c447654", "text": "In the wake of a polarizing election, social media is laden with hateful content. To address various limitations of supervised hate speech classification methods including corpus bias and huge cost of annotation, we propose a weakly supervised twopath bootstrapping approach for an online hate speech detection model leveraging large-scale unlabeled data. This system significantly outperforms hate speech detection systems that are trained in a supervised manner using manually annotated data. Applying this model on a large quantity of tweets collected before, after, and on election day reveals motivations and patterns of inflammatory language.", "title": "" }, { "docid": "2629277b98d661006e90358fa27f4ac5", "text": "In this paper, a well known problem called the Shortest Path Problem (SPP) has been considered in an uncertain environment. 
The cost parameters for traveling each arc have been considered as Intuitionistic Fuzzy Numbers (IFNs) which are the more generalized form of fuzzy numbers involving a degree of acceptance and a degree of rejection. A heuristic methodology for solving the SPP has been developed, which aim to exploit tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness and low cost solution corresponding to the minimum-cost path or the shortest path. The Modified Intuitionistic Fuzzy Dijkstra’s Algorithm (MIFDA) has been proposed in this paper for solving Intuitionistic Fuzzy Shortest Path Problem (IFSPP) using the Intuitionistic Fuzzy Hybrid Geometric (IFHG) operator. A numerical example illustrates the effectiveness of the proposed method.", "title": "" }, { "docid": "189709296668a8dd6f7be8e1b2f2e40f", "text": "Uncertain data management, querying and mining have become important because the majority of real world data is accompanied with uncertainty these days. Uncertainty in data is often caused by the deficiency in underlying data collecting equipments or sometimes manually introduced to preserve data privacy. This work discusses the problem of distance-based outlier detection on uncertain datasets of Gaussian distribution. The Naive approach of distance-based outlier on uncertain data is usually infeasible due to expensive distance function. Therefore a cell-based approach is proposed in this work to quickly identify the outliers. The infinite nature of Gaussian distribution prevents to devise effective pruning techniques. Therefore an approximate approach using bounded Gaussian distribution is also proposed. Approximating Gaussian distribution by bounded Gaussian distribution enables an approximate but more efficient cell-based outlier detection approach. An extensive empirical study on synthetic and real datasets show that our proposed approaches are effective, efficient and scalable.", "title": "" }, { "docid": "869ad7b6bf74f283c8402958a6814a21", "text": "In this paper, we make a move to build a dialogue system for automatic diagnosis. We first build a dataset collected from an online medical forum by extracting symptoms from both patients’ self-reports and conversational data between patients and doctors. Then we propose a taskoriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis.", "title": "" }, { "docid": "bee25514d15321f4f0bdcf867bb07235", "text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full FrankWolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate FrankWolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. 
Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.", "title": "" }, { "docid": "ff2beca595c408f3ea5df6a8494301c4", "text": "The objective of the present study was to examine to what extent autonomy in problem-based learning (PBL) results in cognitive engagement with the topic at hand. To that end, a short self-report instrument was devised and validated. Moreover, it was examined how cognitive engagement develops as a function of the learning process and the extent to which cognitive engagement determines subsequent levels of cognitive engagement during a one-day PBL event. Data were analyzed by means of confirmatory factor analysis, repeated measures ANOVA, and path analysis. The results showed that the new measure of situational cognitive engagement is valid and reliable. Furthermore, the results revealed that students' cognitive engagement significantly increased as a function of the learning event. Implications of these findings for PBL are discussed.", "title": "" }, { "docid": "84f9a6913a7689a5bbeb04f3173237b2", "text": "BACKGROUND\nPsychosocial treatments are the mainstay of management of autism in the UK but there is a notable lack of a systematic evidence base for their effectiveness. Randomised controlled trial (RCT) studies in this area have been rare but are essential because of the developmental heterogeneity of the disorder. We aimed to test a new theoretically based social communication intervention targeting parental communication in a randomised design against routine care alone.\n\n\nMETHODS\nThe intervention was given in addition to existing care and involved regular monthly therapist contact for 6 months with a further 6 months of 2-monthly consolidation sessions. It aimed to educate parents and train them in adapted communication tailored to their child's individual competencies. Twenty-eight children with autism were randomised between this treatment and routine care alone, stratified for age and baseline severity. Outcome was measured at 12 months from commencement of intervention, using standardised instruments.\n\n\nRESULTS\nAll cases studied met full Autism Diagnostic Interview (ADI) criteria for classical autism. Treatment and controls had similar routine care during the study period and there were no study dropouts after treatment had started. The active treatment group showed significant improvement compared with controls on the primary outcome measure--Autism Diagnostic Observation Schedule (ADOS) total score, particularly in reciprocal social interaction--and on secondary measures of expressive language, communicative initiation and parent-child interaction. Suggestive but non-significant results were found in Vineland Adaptive Behaviour Scales (Communication Sub-domain) and ADOS stereotyped and restricted behaviour domain.\n\n\nCONCLUSIONS\nA Randomised Treatment Trial design of this kind in classical autism is feasible and acceptable to patients. This pilot study suggests significant additional treatment benefits following a targeted (but relatively non-intensive) dyadic social communication treatment, when compared with routine care. The study needs replication on larger and independent samples. It should encourage further RCT designs in this area.", "title": "" }, { "docid": "08bde5682e7fe0c775fabb7c051ab3db", "text": "We propose a higher-level associative memory for learning adversarial networks. Generative adversarial network (GAN) framework has a discriminator and a generator network. 
The generator (G) maps white noise (z) to data samples while the discriminator (D) maps data samples to a single scalar. To do so, G learns how to map from high-level representation space to data space, and D learns to do the opposite. We argue that higher-level representation spaces need not necessarily follow a uniform probability distribution. In this work, we use Restricted Boltzmann Machines (RBMs) as a higher-level associative memory and learn the probability distribution for the high-level features generated by D. The associative memory samples its underlying probability distribution and G learns how to map these samples to data space. The proposed associative adversarial networks (AANs) are generative models in the higher-levels of the learning, and use adversarial nonstochastic models D and G for learning the mapping between data and higher-level representation spaces. Experiments show the potential of the proposed networks.", "title": "" }, { "docid": "f33f6263ef10bd702ddb18664b68a09f", "text": "Research over the past five years has shown significant performance improvements using a technique called adaptive compilation. An adaptive compiler uses a compile-execute-analyze feedback loop to find the combination of optimizations and parameters that minimizes some performance goal, such as code size or execution time.Despite its ability to improve performance, adaptive compilation has not seen widespread use because of two obstacles: the large amounts of time that such systems have used to perform the many compilations and executions prohibits most users from adopting these systems, and the complexity inherent in a feedback-driven adaptive system has made it difficult to build and hard to use.A significant portion of the adaptive compilation process is devoted to multiple executions of the code being compiled. We have developed a technique called virtual execution to address this problem. Virtual execution runs the program a single time and preserves information that allows us to accurately predict the performance of different optimization sequences without running the code again. Our prototype implementation of this technique significantly reduces the time required by our adaptive compiler.In conjunction with this performance boost, we have developed a graphical-user interface (GUI) that provides a controlled view of the compilation process. By providing appropriate defaults, the interface limits the amount of information that the user must provide to get started. At the same time, it lets the experienced user exert fine-grained control over the parameters that control the system.", "title": "" } ]
scidocsrr
5d875137f9fa521289dcf8a2cd52e47c
Echo State Gaussian Process
[ { "docid": "2ea9e1cebaf85f5129a2a5344e02975a", "text": "We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces.", "title": "" }, { "docid": "acae3496fd9954a5d86ae3139852ed98", "text": "Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both: current ways of generating/adapting the reservoirs and training different types of readouts. It offers a natural conceptual classification of the techniques, which transcends boundaries of the current “brand-names” of reservoir methods, and thus aims to help unifying the field and providing the reader with a detailed “map” of", "title": "" } ]
[ { "docid": "460a296de1bd13378d71ce19ca5d807a", "text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].", "title": "" }, { "docid": "5bc2b92a3193c36bac5ae848da7974a3", "text": "Robust real-time tracking of non-rigid objects is a challenging task. Particle filtering has proven very successful for non-linear and nonGaussian estimation problems. The article presents the integration of color distributions into particle filtering, which has typically been used in combination with edge-based image features. Color distributions are applied, as they are robust to partial occlusion, are rotation and scale invariant and computationally efficient. As the color of an object can vary over time dependent on the illumination, the visual angle and the camera parameters, the target model is adapted during temporally stable image observations. An initialization based on an appearance condition is introduced since tracked objects may disappear and reappear. Comparisons with the mean shift tracker and a combination between the mean shift tracker and Kalman filtering show the advantages and limitations of the new approach. q 2002 Published by Elsevier Science B.V.", "title": "" }, { "docid": "973fa990e13734f060ae13b138e99c39", "text": "Parallel algorithm for line and circle drawing that are based on J.E. Bresenham's line and circle algorithms (see Commun. ACM, vol.20, no.2, p.100-6 (1977)) are presented. The new algorithms are applicable on raster scan CRTs, incremental pen plotters, and certain types of printers. The line algorithm approaches a perfect speedup of P as the line length approaches infinity, and the circle algorithm approaches a speedup greater than 0.9P as the circle radius approaches infinity. 
It is assumed that the algorithms are run in a multiple-instruction-multiple-data (MIMD) environment, that the raster memory is shared, and that the processors are dedicated and assigned to the task (of line or circle drawing).", "title": "" }, { "docid": "bc8e1f2bf0b652c5041e4f3dc02c9612", "text": "The inability to interpret the model prediction in semantically and visually meaningful ways is a well-known shortcoming of most existing computer-aided diagnosis methods. In this paper, we propose MDNet to establish a direct multimodal mapping between medical images and diagnostic reports that can read images, generate diagnostic reports, retrieve images by symptom descriptions, and visualize attention, to provide justifications of the network diagnosis process. MDNet includes an image model and a language model. The image model is proposed to enhance multi-scale feature ensembles and utilization efficiency. The language model, integrated with our improved attention mechanism, aims to read and explore discriminative image feature descriptions from reports to learn a direct mapping from sentence words to image pixels. The overall network is trained end-to-end by using our developed optimization strategy. Based on a dataset of pathology bladder cancer images and their diagnostic reports (BCIDR), we conduct sufficient experiments to demonstrate that MDNet outperforms comparative baselines. The proposed image model obtains state-of-the-art performance on two CIFAR datasets as well.", "title": "" }, { "docid": "85ff5c1787aa943c152d136b752e8172", "text": "This paper proposes a new family of algorithms for training neural networks (NNs). These are based on recent developments in the field of nonconvex optimization, going under the general name of successive convex approximation techniques. The basic idea is to iteratively replace the original (nonconvex, highly dimensional) learning problem with a sequence of (strongly convex) approximations, which are both accurate and simple to optimize. Different from similar ideas (e.g., quasi-Newton algorithms), the approximations can be constructed using only first-order information of the NN function, in a stochastic fashion, while exploiting the overall structure of the learning problem for a faster convergence. We discuss several use cases, based on different choices for the loss function (e.g., squared loss and cross-entropy loss), and for the regularization of the NN’s weights. We experiment on several medium-sized benchmark problems and on a large-scale data set involving simulated physical data. The results show how the algorithm outperforms the state-of-the-art techniques, providing faster convergence to a better minimum. Additionally, we show how the algorithm can be easily parallelized over multiple computational units without hindering its performance. In particular, each computational unit can optimize a tailored surrogate function defined on a randomly assigned subset of the input variables, whose dimension can be selected depending entirely on the available computational power.", "title": "" }, { "docid": "a55b44543510713a7fdc4f7cb8c123b2", "text": "The mechanisms that allow cancer cells to adapt to the typical tumor microenvironment of low oxygen and glucose and high lactate are not well understood. GPR81 is a lactate receptor recently identified in adipose and muscle cells that has not been investigated in cancer. In the current study, we examined GPR81 expression and function in cancer cells.
We found that GPR81 was present in colon, breast, lung, hepatocellular, salivary gland, cervical, and pancreatic carcinoma cell lines. Examination of tumors resected from patients with pancreatic cancer indicated that 94% (148 of 158) expressed high levels of GPR81. Functionally, we observed that the reduction of GPR81 levels using shRNA-mediated silencing had little effect on pancreatic cancer cells cultured in high glucose, but led to the rapid death of cancer cells cultured in conditions of low glucose supplemented with lactate. We also observed that lactate addition to culture media induced the expression of genes involved in lactate metabolism, including monocarboxylase transporters in control, but not in GPR81-silenced cells. In vivo, GPR81 expression levels correlated with the rate of pancreatic cancer tumor growth and metastasis. Cells in which GPR81 was silenced showed a dramatic decrease in growth and metastasis. Implantation of cancer cells in vivo was also observed to lead to greatly elevated levels of GPR81. These data support that GPR81 is important for cancer cell regulation of lactate transport mechanisms. Furthermore, lactate transport is important for the survival of cancer cells in the tumor microenvironment. Cancer Res; 74(18); 5301-10. ©2014 AACR.", "title": "" }, { "docid": "22e3a0e31a70669f311fb51663a76f9c", "text": "A communication infrastructure is an essential part to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grid with communication infrastructures, we expect that the traditional carbon fuel based power plants can cooperate with emerging distributed renewable energy such as wind, solar, etc, to reduce the carbon fuel consumption and consequent green house gas such as carbon dioxide emission. The consumers can minimize their expense on energy by adjusting their intelligent home appliance operations to avoid the peak hours and utilize the renewable energy instead. We further explore the challenges for a communication infrastructure as the part of a complex smart grid system. Since a smart grid system might have over millions of consumers and devices, the demand of its reliability and security is extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackout. Security is a challenging issue since the on-going smart grid systems facing increasing vulnerabilities as more and more automation, remote monitoring/controlling and supervision entities are interconnected.", "title": "" }, { "docid": "eea45eb670d380e722f3148479a0864d", "text": "In this paper, we propose a hybrid Differential Evolution (DE) algorithm based on the fuzzy C-means clustering algorithm, referred to as FCDE. The fuzzy C-means clustering algorithm is incorporated with DE to utilize the information of the population efficiently, and hence it can generate good solutions and enhance the performance of the original DE. In addition, the population-based algorithmgenerator is adopted to efficiently update the population with the clustering offspring. 
In order to test the performance of our approach, 13 high-dimensional benchmark functions of diverse complexities are employed. The results show that our approach is effective and efficient. Compared with other state-of-the-art DE approaches, our approach performs better, or at least comparably, in terms of the quality of the final solutions and the reduction of the number of fitness function evaluations (NFFEs).", "title": "" }, { "docid": "493c45304bd5b7dd1142ace56e94e421", "text": "While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a “causal consistency” condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.", "title": "" }, { "docid": "ad86262394b1633243ae44d1f43c1e68", "text": "OBJECTIVE\nTo study dimensional alterations of the alveolar ridge that occurred following tooth extraction as well as processes of bone modelling and remodelling associated with such change.\n\n\nMATERIAL AND METHODS\nTwelve mongrel dogs were included in the study. In both quadrants of the mandible incisions were made in the crevice region of the 3rd and 4th premolars. Minute buccal and lingual full thickness flaps were elevated. The four premolars were hemi-sected. The distal roots were removed. The extraction sites were covered with the mobilized gingival tissue. The extractions of the roots and the sacrifice of the dogs were staggered in such a manner that all dogs contributed with sockets representing 1, 2, 4 and 8 weeks of healing. The animals were sacrificed and tissue blocks containing the extraction socket were dissected, decalcified in EDTA, embedded in paraffin and cut in the buccal-lingual plane. The sections were stained in haematoxyline-eosine and examined in the microscope.\n\n\nRESULTS\nIt was demonstrated that marked dimensional alterations occurred during the first 8 weeks following the extraction of mandibular premolars. Thus, in this interval there was a marked osteoclastic activity resulting in resorption of the crestal region of both the buccal and the lingual bone wall. The reduction of the height of the walls was more pronounced at the buccal than at the lingual aspect of the extraction socket. The height reduction was accompanied by a \"horizontal\" bone loss that was caused by osteoclasts present in lacunae on the surface of both the buccal and the lingual bone wall.\n\n\nCONCLUSIONS\nThe resorption of the buccal/lingual walls of the extraction site occurred in two overlapping phases. During phase 1, the bundle bone was resorbed and replaced with woven bone. 
Since the crest of the buccal bone wall was comprised solely of bundle bone, this modelling resulted in substantial vertical reduction of the buccal crest. Phase 2 included resorption that occurred from the outer surfaces of both bone walls. The reason for this additional bone loss is presently not understood.", "title": "" }, { "docid": "7853936d58687b143bc135e6e60092ce", "text": "Multilabel learning has become a relevant learning paradigm in the past years due to the increasing number of fields where it can be applied and also to the emerging number of techniques that are being developed. This article presents an up-to-date tutorial about multilabel learning that introduces the paradigm and describes the main contributions developed. Evaluation measures, fields of application, trending topics, and resources are also presented.", "title": "" }, { "docid": "49e786f66641194a22bf488c5e97ed7f", "text": "The non-negative matrix factorization (NMF) determines a lower-rank approximation of an m × n matrix A as a product of two nonnegative factors, A ≈ WH, where an integer k < min(m, n) is given and nonnegativity is imposed on all components of the factors W (of size m × k) and H (of size k × n). The NMF has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In applications where the components of the data are necessarily nonnegative such as chemical concentrations in experimental results or pixels in digital images, the NMF provides a more relevant interpretation of the results since it gives non-subtractive combinations of non-negative basis vectors.
In this paper, we introduce an algorithm for the NMF based on alternating non-negativity constrained least squares (NMF/ANLS) and the active set based fast algorithm for non-negativity constrained least squares with multiple right hand side vectors, and discuss its convergence properties and a rigorous convergence criterion based on the Karush-Kuhn-Tucker (KKT) conditions. In addition, we also describe algorithms for sparse NMFs and regularized NMF. We show how we impose a sparsity constraint on one of the factors by +-, -norm minimization and discuss its convergence properties. Our algorithms are compared to other commonly used NMF algorithms in the literature on several test data sets in terms of their convergence behavior.", "title": "" }, { "docid": "a14afa0d14a0fcfb890c8f2944750230", "text": "RNA turnover is an integral part of cellular RNA homeostasis and gene expression regulation. Whereas the cytoplasmic control of protein-coding mRNA is often the focus of study, we discuss here the less appreciated role of nuclear RNA decay systems in controlling RNA polymerase II (RNAPII)-derived transcripts. Historically, nuclear RNA degradation was found to be essential for the functionalization of transcripts through their proper maturation. Later, it was discovered to also be an important caretaker of nuclear hygiene by removing aberrant and unwanted transcripts. Recent years have now seen a set of new protein complexes handling a variety of new substrates, revealing functions beyond RNA processing and the decay of non-functional transcripts. This includes an active contribution of nuclear RNA metabolism to the overall cellular control of RNA levels, with mechanistic implications during cellular transitions. RNA is controlled at various stages of transcription and processing to achieve appropriate gene regulation. Whereas much research has focused on the cytoplasmic control of RNA levels, this Review discusses our emerging appreciation of the importance of nuclear RNA regulation, including the molecular machinery involved in nuclear RNA decay, how functional RNAs bypass degradation and roles for nuclear RNA decay in physiology and disease.", "title": "" }, { "docid": "c49716c60f96c2454fdb56dc539cf012", "text": "This paper deals with the dynamic modeling and design optimization of a three Degree-of-Freedom spherical parallel manipulator. Using the method of Lagrange multipliers, the equation of motion is derived by considering its motion characteristics, namely, all the components rotating about the center of rotation. Using the derived dynamic model, a multiobjective optimization problem is formulated to optimize the structural and geometric parameters of the spherical parallel manipulator. The proposed approach is illustrated with the design optimization of an unlimited-roll spherical parallel manipulator with a main objective to minimize the mechanism mass in order to enhance both kinematic and dynamic performances.", "title": "" }, { "docid": "c675a2f1fed4ccb5708be895190b02cd", "text": "Decompilation is important for many security applications; it facilitates the tedious task of manual malware reverse engineering and enables the use of source-based security tools on binary code. This includes tools to find vulnerabilities, discover bugs, and perform taint tracking. Recovering high-level control constructs is essential for decompilation in order to produce structured code that is suitable for human analysts and sourcebased program analysis techniques. 
State-of-the-art decompilers rely on structural analysis, a pattern-matching approach over the control flow graph, to recover control constructs from binary code. Whenever no match is found, they generate goto statements and thus produce unstructured decompiled output. Those statements are problematic because they make decompiled code harder to understand and less suitable for program analysis. In this paper, we present DREAM, the first decompiler to offer a goto-free output. DREAM uses a novel patternindependent control-flow structuring algorithm that can recover all control constructs in binary programs and produce structured decompiled code without any goto statement. We also present semantics-preserving transformations that can transform unstructured control flow graphs into structured graphs. We demonstrate the correctness of our algorithms and show that we outperform both the leading industry and academic decompilers: Hex-Rays and Phoenix. We use the GNU coreutils suite of utilities as a benchmark. Apart from reducing the number of goto statements to zero, DREAM also produced more compact code (less lines of code) for 72.7% of decompiled functions compared to Hex-Rays and 98.8% compared to Phoenix. We also present a comparison of Hex-Rays and DREAM when decompiling three samples from Cridex, ZeusP2P, and SpyEye malware families.", "title": "" }, { "docid": "294ac617bbd49afe95c278836fa4c9ec", "text": "We present a practical lock-free shared data structure that efficiently implements the operations of a concurrent deque as well as a general doubly linked list. The implementation supports parallelism for disjoint accesses and uses atomic primitives which are available in modern computer systems. Previously known lock-free algorithms of doubly linked lists are either based on non-available atomic synchronization primitives, only implement a subset of the functionality, or are not designed for disjoint accesses. Our algorithm only requires single-word compare-and-swap atomic primitives, supports fully dynamic list sizes, and allows traversal also through deleted nodes and thus avoids unnecessary operation retries. We have performed an empirical study of our new algorithm on two different multiprocessor platforms. Results of the experiments performed under high contention show that the performance of our implementation scales linearly with increasing number of processors. Considering deque implementations and systems with low concurrency, the algorithm by Michael shows the best performance. However, as our algorithm is designed for disjoint accesses, it performs significantly better on systems with high concurrency and non-uniform memory architecture. © 2008 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "b83ac50f2ad17b53e2413ef368053738", "text": "Case Presentation: E.C. is a 53-year-old postmenopausal female, referred for treatment of hypertension, with a family history of type 2 diabetes, hypertension, and coronary heart disease (CHD). Until learning that her blood pressure was “too high” during a routine physical examination, she felt well, and her postmenopausal symptoms had responded to hormone replacement therapy. She was not overweight (her body mass index [BMI] was 23.7 kg/m), and the only abnormality on physical examination was a blood pressure of 145/95 RAR. 
Laboratory results revealed a normal blood count and urinalysis, with the following fasting plasma concentrations of relevant metabolic variables (in mg/dL): glucose 102, triglycerides (TG) 238, low-density lipoprotein cholesterol (LDL-C) 147, and high-density lipoprotein cholesterol (HDL-C) 52. E.C. is hypertensive and hypertriglyceridemic and at increased risk for CHD. Less obvious is that these metabolic abnormalities are highly likely to be the manifestations of a more fundamental defect—resistance to insulin-mediated glucose disposal and compensatory hyperinsulinemia, changes that greatly increase CHD risk.1,2 The importance of insulin resistance as a CHD risk factor was first explicated in 1998, and the cluster of abnormalities likely to appear as manifestations of the defect in insulin action designated as syndrome X.1 Support for this notion has grown almost as fast as the names used to describe the phenomenon. The Adult Treatment Panel III (ATP III) has recently3 recognized the importance as CHD risk factors of a “constellation of lipid and nonlipid risk factors of metabolic origin,” designated this cluster of abnormalities as “the metabolic syndrome,” and indicated that “this syndrome is closely linked to insulin resistance.” Table 1 lists the criteria the ATP III stipulated be used to diagnose the metabolic syndrome, and a recent report4 has applied these criteria to the database of the Third National Health and Nutrition Examination Survey (NHANES III) and estimated that 1 out of 4 adults living in the United States merits this diagnosis. The goal of this presentation is to put into perspective the importance of insulin resistance and compensatory hyperinsulinemia in the pathogenesis and clinical course of CHD, as well as the implications of both the ATP III guidelines concerning the diagnosis of the metabolic syndrome.", "title": "" }, { "docid": "9c1cd8978d482e05285fb3a9a776ddd0", "text": "BACKGROUND\nLaparoscopic surgery has led to great clinical improvements in many fields of surgery; however, it requires the use of trocars, which may lead to complications as well as postoperative pain. The complications include intra-abdominal vascular and visceral injury, trocar site bleeding, herniation and infection. Many of these are extremely rare, such as vascular and visceral injury, but may be life-threatening; therefore, it is important to determine how these types of complications may be prevented. It is hypothesised that trocar-related complications and pain may be attributable to certain types of trocars. This systematic review was designed to improve patient safety by determining which, if any, specific trocar types are less likely to result in complications and postoperative pain.\n\n\nOBJECTIVES\nTo analyse the rates of trocar-related complications and postoperative pain for different trocar types used in people undergoing laparoscopy, regardless of the condition.\n\n\nSEARCH METHODS\nTwo experienced librarians conducted a comprehensive search for randomised controlled trials (RCTs) in the Menstrual Disorders and Subfertility Group Specialised Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO, CINAHL, CDSR and DARE (up to 26 May 2015). We checked trial registers and reference lists from trial and review articles, and approached content experts.\n\n\nSELECTION CRITERIA\nRCTs that compared rates of trocar-related complications and postoperative pain for different trocar types used in people undergoing laparoscopy. 
The primary outcomes were major trocar-related complications, such as mortality, conversion due to any trocar-related adverse event, visceral injury, vascular injury and other injuries that required intensive care unit (ICU) management or a subsequent surgical, endoscopic or radiological intervention. Secondary outcomes were minor trocar-related complications and postoperative pain. We excluded trials that studied non-conventional laparoscopic incisions.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently conducted the study selection, risk of bias assessment and data extraction. We used GRADE to assess the overall quality of the evidence. We performed sensitivity analyses and investigation of heterogeneity, where possible.\n\n\nMAIN RESULTS\nWe included seven RCTs (654 participants). One RCT studied four different trocar types, while the remaining six RCTs studied two different types. The following trocar types were examined: radially expanding versus cutting (six studies; 604 participants), conical blunt-tipped versus cutting (two studies; 72 participants), radially expanding versus conical blunt-tipped (one study; 28 participants) and single-bladed versus pyramidal-bladed (one study; 28 participants). The evidence was very low quality: limitations were insufficient power, very serious imprecision and incomplete outcome data. Primary outcomesFour of the included studies reported on visceral and vascular injury (571 participants), which are two of our primary outcomes. These RCTs examined 473 participants where radially expanding versus cutting trocars were used. We found no evidence of a difference in the incidence of visceral (Peto odds ratio (OR) 0.95, 95% confidence interval (CI) 0.06 to 15.32) and vascular injury (Peto OR 0.14, 95% CI 0.0 to 7.16), both very low quality evidence. However, the incidence of these types of injuries were extremely low (i.e. two cases of visceral and one case of vascular injury for all of the included studies). There were no cases of either visceral or vascular injury for any of the other trocar type comparisons. No studies reported on any other primary outcomes, such as mortality, conversion to laparotomy, intensive care admission or any re-intervention. Secondary outcomesFor trocar site bleeding, the use of radially expanding trocars was associated with a lower risk of trocar site bleeding compared to cutting trocars (Peto OR 0.28, 95% CI 0.14 to 0.54, five studies, 553 participants, very low quality evidence). This suggests that if the risk of trocar site bleeding with the use of cutting trocars is assumed to be 11.5%, the risk with the use of radially expanding trocars would be 3.5%. There was insufficient evidence to reach a conclusion regarding other trocar types, their related complications and postoperative pain, as no studies reported data suitable for analysis.\n\n\nAUTHORS' CONCLUSIONS\nData were lacking on the incidence of major trocar-related complications, such as visceral or vascular injury, when comparing different trocar types with one another. However, caution is urged when interpreting these results because the incidence of serious complications following the use of a trocar was extremely low. There was very low quality evidence for minor trocar-related complications suggesting that the use of radially expanding trocars compared to cutting trocars leads to reduced incidence of trocar site bleeding. 
These secondary outcomes are viewed to be of less clinical importance.Large, well-conducted observational studies are necessary to answer the questions addressed in this review because serious complications, such as visceral or vascular injury, are extremely rare. However, for other outcomes, such as trocar site herniation, bleeding or infection, large observational studies may be needed as well. In order to answer these questions, it is advisable to establish an international network for recording these types of complications following laparoscopic surgery.", "title": "" }, { "docid": "dc1bd4603d9673fb4cd0fd9d7b0b6952", "text": "We investigate the contribution of option markets to price discovery, using a modification of Hasbrouck’s (1995) “information share” approach. Based on five years of stock and options data for 60 firms, we estimate the option market’s contribution to price discovery to be about 17 percent on average. Option market price discovery is related to trading volume and spreads in both markets, and stock volatility. Price discovery across option strike prices is related to leverage, trading volume, and spreads. Our results are consistent with theoretical arguments that informed investors trade in both stock and option markets, suggesting an important informational role for options. ∗Chakravarty is from Purdue University; Gulen is from the Pamplin College of Business, Virginia Tech; and Mayhew is from the Terry College of Business, University of Georgia and the U.S. Securities and Exchange Commission. We would like to thank the Institute for Quantitative Research in Finance (the Q-Group) for funding this research. Gulen acknowledges funding from a Virginia Tech summer grant and Mayhew acknowledges funding from the TerrySanford Research Grant at the Terry College of Business and from the University of Georgia Research Foundation. We would like to thank the editor, Rick Green; Michael Cliff; Joel Hasbrouck; Raman Kumar; an anonymous referee; and seminar participants at Purdue University, the University of Georgia, Texas Christian University, the University of South Carolina, the Securities and Exchange Commission, the University of Delaware, George Washington University, the Commodity Futures Trading Commission, the Batten Conference at the College of William and Mary, the 2002 Q-Group Conference, and the 2003 INQUIRE conference. The U.S. Securities and Exchange Commission disclaims responsibility for any private publication or statement of any SEC employee or Commissioner. This study expresses the author’s views and does not necessarily reflect those of the Commission, the Commissioners, or other members of the staff.", "title": "" } ]
scidocsrr
c2e9071fad41dd5ddf1d7681e2871ddb
VFH*: Local Obstacle Avoidance with Look-Ahead Verification
[ { "docid": "2c4ee0d42347cf75096caec62dda97f3", "text": "A new real-time obstacle avoidance method for mobile robots has been developed and implemented. This method, named the Vector Field Histogram (VFH), permits the detection of unknown obstacles and avoids collisions while simultaneously steering the mobile robot toward the target. A VFH-controlled mobile robot maneuvers quickly and without stopping among densely cluttered obstacles. The VFH method uses a two-dimensional Cartesian Histogram Grid as a world model. This world model is updated continuously and in real-time with range data sampled by the onboard ultrasonic range sensors. Based on the accumulated environmental data, the VFH method then computes a one-dimensional Polar Histogram that is constructed around the robot's momentary location. Each sector in the Polar Histogram holds the polar obstacle density in that direction. Finally, the algorithm selects the most suitable sector from among all Polar Histogram sectors with low obstacle density, and the steering of the robot is aligned with that direction. Experimental results from a mobile robot traversing a densely cluttered obstacle course at an average speed of 0.7 m/sec demonstrate the power of the VFH method.", "title": "" }, { "docid": "5728682e998b89cb23b12ba9acc3d993", "text": "Potential field methods are rapidly gaining popularity in obstacle avoidance applications for mobile robots and manipulators. While the potential field principle is particularly attractive because of its elegance and simplicity, substantial shortcomings have been identified as problems that are inherent to this principle. Based upon mathematical analysis, this paper presents a systematic criticism of the inherent problems. The heart of this analysis is a differential equation that combines the robot and the environment into a unified system. The identified problems are discussed in qualitative and theoretical terms and documented with experimental results from actual mobile robot runs.", "title": "" } ]
[ { "docid": "916f56b4e63c01ca7156dab615fd7ef1", "text": "Designing the structure of neural networks is considered one of the most challenging tasks in deep learning, especially when there is few prior knowledge about the task domain. In this paper, we propose an Ecologically-Inspired GENetic (EIGEN) approach that uses the concept of succession, extinction, mimicry, and gene duplication to search neural network structure from scratch with poorly initialized simple network and few constraints forced during the evolution, as we assume no prior knowledge about the task domain. Specifically, we first use primary succession to rapidly evolve a population of poorly initialized neural network structures into a more diverse population, followed by a secondary succession stage for fine-grained searching based on the networks from the primary succession. Extinction is applied in both stages to reduce computational cost. Mimicry is employed during the entire evolution process to help the inferior networks imitate the behavior of a superior network and gene duplication is utilized to duplicate the learned blocks of novel structures, both of which help to find better network structures. Experimental results show that our proposed approach can achieve similar or better performance compared to the existing genetic approaches with dramatically reduced computation cost. For example, the network discovered by our approach on CIFAR-100 dataset achieves 78.1% test accuracy under 120 GPU hours, compared to 77.0% test accuracy in more than 65, 536 GPU hours in [36].", "title": "" }, { "docid": "f2e62e761c357c8490f1b53f125f8f28", "text": "The credit crisis and the ongoing European sovereign debt crisis have highlighted the native form of credit risk, namely the counterparty risk. The related Credit Valuation Adjustment (CVA), Debt Valuation Adjustment (DVA), Liquidity Valuation Adjustment (LVA) and Replacement Cost (RC) issues, jointly referred to in this paper as Total Valuation Adjustment (TVA), have been thoroughly investigated in the theoretical papers Crépey (2012a, 2012b). The present work provides an executive summary and numerical companion to these papers, through which the TVA pricing problem can be reduced to Markovian pre-default TVA BSDEs. The first step consists in the counterparty clean valuation of a portfolio of contracts, which is the valuation in a hypothetical situation where the two parties would be risk-free and funded at a risk-free rate. In the second step, the TVA is obtained as the value of an option on the counterparty clean value process called Contingent Credit Default Swap (CCDS). Numerical results are presented for interest rate swaps in the Vasicek, as well as in the inverse Gaussian Hull-White short rate model, also allowing one to assess the related model risk issue.", "title": "" }, { "docid": "e82459841d697a538f3ab77817ed45e7", "text": "A mm-wave digital transmitter based on a 60 GHz all-digital phase-locked loop (ADPLL) with wideband frequency modulation (FM) for FMCW radar applications is proposed. The fractional-N ADPLL employs a high-resolution 60 GHz digitally-controlled oscillator (DCO) and is capable of multi-rate two-point FM. It achieves a measured rms jitter of 590.2 fs, while the loop settles within 3 μs. The measured reference spur is only -74 dBc, the fractional spurs are below -62 dBc, with no other significant spurs. 
A closed-loop DCO gain linearization scheme realizes a GHz-level triangular chirp across multiple DCO tuning banks with a measured frequency error (i.e., nonlinearity) in the FMCW ramp of only 117 kHz rms for a 62 GHz carrier with 1.22 GHz bandwidth. The synthesizer is transformer-coupled to a 3-stage neutralized power amplifier (PA) that delivers +5 dBm to a 50 Ω load. Implemented in 65 nm CMOS, the transmitter prototype (including PA) consumes 89 mW from a 1.2 V supply.", "title": "" }, { "docid": "22b259233ffe842e91347792bd7b48e0", "text": "The increase of the complexity and advancement in ecological and environmental sciences encourages scientists across the world to collect data from multiple places, times, and thematic scales to verify their hypotheses. Accumulated over time, such data not only increases in amount, but also in the diversity of the data sources spread around the world. This poses a huge challenge for scientists who have to manually search for information. To alleviate such problems, ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple repositories and makes it searchable. However, harvested metadata records sometimes are poorly annotated or lacking meaningful keywords, which could affect effective retrieval. Here, we develop algorithms for automatic annotation of metadata. We transform the problem into a tag recommendation problem with a controlled tag library, and propose two variants of an algorithm for recommending tags. Our experiments on four datasets of environmental science metadata records not only show great promises on the performance of our method, but also shed light on the different natures of the datasets.", "title": "" }, { "docid": "a497d0e4de19d5660deb54b6dee42ebc", "text": "The aim of our study was to provide a contribution to the research field of the critical success factors (CSFs) of ERP projects, with specific focus on smaller enterprises (SMEs). Therefore, we conducted a systematic literature review in order to update the existing reviews of CSFs. On the basis of that review, we led several interviews within German SMEs and with ERP consultants experienced with ERP implementations in SMEs. As a result, we showed that all factors found in the literature also affected the success of ERP projects in SMEs. However, within those projects, technological factors gained much more importance compared to the factors which influence the success from larger ERP projects the most. For SMEs, factors like Organizational fit of the ERP system as well as ERP system tests are even more important than Top management support or Project management, which were the most important factors for large-scaled companies.", "title": "" }, { "docid": "eff17ece2368b925f0db8e18ea0fc897", "text": "Blockchain, as the backbone technology of the current popular Bitcoin digital currency, has become a promising decentralized data management framework. Although blockchain has been widely adopted in many applications (e.g., finance, healthcare, and logistics), its application in mobile services is still limited. This is due to the fact that blockchain users need to solve preset proof-of-work puzzles to add new data (i.e., a block) to the blockchain. Solving the proof of work, however, consumes substantial resources in terms of CPU time and energy, which is not suitable for resource-limited mobile devices. 
To facilitate blockchain applications in future mobile Internet of Things systems, multiple access mobile edge computing appears to be an auspicious solution to solve the proof-of-work puzzles for mobile users. We first introduce a novel concept of edge computing for mobile blockchain. Then we introduce an economic approach for edge computing resource management. Moreover, a prototype of mobile edge computing enabled blockchain systems is presented with experimental results to justify the proposed concept.", "title": "" }, { "docid": "becd66e0637b9b6dd07b45e6966227d6", "text": "In real life, when telling a person’s age from his/her face, we tend to look at his/her whole face first and then focus on certain important regions like eyes. After that we will focus on each particular facial feature individually like the nose or the mouth so that we can decide the age of the person. Similarly, in this paper, we propose a new framework for age estimation, which is based on human face sub-regions. Each sub-network in our framework takes the input of two images each from human facial region. One of them is the global face, and the other is a vital sub-region. Then, we combine the predictions from different sub-regions based on a majority voting method. We call our framework Multi-Region Network Prediction Ensemble (MRNPE) and evaluate our approach using two popular public datasets: MORPH Album II and Cross Age Celebrity Dataset (CACD). Experiments show that our method outperforms the existing state-of-the-art age estimation methods by a significant margin. The Mean Absolute Errors (MAE) of age estimation are dropped from 3.03 to 2.73 years on the MORPH Album II and 4.79 to 4.40 years on the CACD.", "title": "" }, { "docid": "df30bd2a221c915f47569afc6205062a", "text": "5G is the next cellular generation and is expected to quench the growing thirst for taxing data rates and to enable the Internet of Things. Focused research and standardization work have been addressing the corresponding challenges from the radio perspective while employing advanced features, such as network densification, massive multiple-input-multiple-output antennae, coordinated multi-point processing, inter-cell interference mitigation techniques, carrier aggregation, and new spectrum exploration. Nevertheless, a new bottleneck has emerged: the backhaul. The ultra-dense and heavy traffic cells should be connected to the core network through the backhaul, often with extreme requirements in terms of capacity, latency, availability, energy, and cost efficiency. This pioneering survey explains the 5G backhaul paradigm, presents a critical analysis of legacy, cutting-edge solutions, and new trends in backhauling, and proposes a novel consolidated 5G backhaul framework. A new joint radio access and backhaul perspective is proposed for the evaluation of backhaul technologies which reinforces the belief that no single solution can solve the holistic 5G backhaul problem. This paper also reveals hidden advantages and shortcomings of backhaul solutions, which are not evident when backhaul technologies are inspected as an independent part of the 5G network. This survey is key in identifying essential catalysts that are believed to jointly pave the way to solving the beyond-2020 backhauling challenge. 
Lessons learned, unsolved challenges, and a new consolidated 5G backhaul vision are thus presented.", "title": "" }, { "docid": "1d9790263cc91a4bd027129094aaf9af", "text": "This paper proposes an approach to recognize English words corresponding to digits Zero to Nine spoken in an isolated way by different male and female speakers. A set of features consisting of a combination of Mel Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding (LPC), Zero Crossing Rate (ZCR), and Short Time Energy (STE) of the audio signal, is used to generate a 63-element feature vector, which is subsequently used for discrimination. Classification is done using artificial neural networks (ANN) with feedforward back-propagation architectures. An accuracy of 85% is obtained by the combination of features, when the proposed approach is tested using a dataset of 280 speech samples, which is more than those obtained by using the features singly.", "title": "" }, { "docid": "78b128de1f20e2d9937414ebd598ab52", "text": "This paper addresses the problem of enabling robots to interactively learn visual and spatial models from multi-modal interactions involving speech, gesture and images. Our approach, called Logical Semantics with Perception (LSP), provides a natural and intuitive interface by significantly reducing the amount of supervision that a human is required to provide. This paper demonstrates LSP in an interactive setting. Given speech and gesture input, LSP is able to learn object and relation classifiers for objects like mugs and relations like left and right. We extend LSP to generate complex natural language descriptions of selected objects using adjectives, nouns and relations, such as “the orange mug to the right of the green book.” Furthermore, we extend LSP to incorporate determiners (e.g., “the”) into its training procedure, enabling the model to generate acceptable relational language 20% more often than the unaugmented model.", "title": "" }, { "docid": "23ff4a40f9a62c8a26f3cc3f8025113d", "text": "In the early ages of implantable devices, radio frequency (RF) technologies were not commonplace due to the challenges stemming from the inherent nature of biological tissue boundaries. As technology improved and our understanding matured, the benefit of RF in biomedical applications surpassed the implementation challenges and is thus becoming more widespread. The fundamental challenge is due to the significant electromagnetic (EM) effects of the body at high frequencies. The EM absorption and impedance boundaries of biological tissue result in significant reduction of power and signal integrity for transcutaneous propagation of RF fields. Furthermore, the dielectric properties of the body tissue surrounding the implant must be accounted for in the design of its RF components, such as antennas and inductors, and the tissue is often heterogeneous and the properties are highly variable. Additional challenges for implantable applications include the need for miniaturization, power minimization, and often accounting for a conductive casing due to biocompatibility and hermeticity requirements [1]?[3]. Today, wireless technologies are essentially a must have in most electrical implants due to the need to communicate with the device and even transfer usable energy to the implant [4], [5]. 
Low-frequency wireless technologies face fewer challenges in this implantable setting than its higher frequency, or RF, counterpart, but are limited to much lower communication speeds and typically have a very limited operating distance. The benefits of high-speed communication and much greater communication distances in biomedical applications have spawned numerous wireless standards committees, and the U.S. Federal Communications Commission (FCC) has allocated numerous frequency bands for medical telemetry as well as those to specifically target implantable applications. The development of analytical models, advanced EM simulation software, and representative RF human phantom recipes has significantly facilitated design and optimization of RF components for implantable applications.", "title": "" }, { "docid": "d51ef75ccf464cc03656210ec500db44", "text": "The choice of a business process modelling (BPM) tool in combination with the selection of a modelling language is one of the crucial steps in BPM project preparation. Different aspects influence the decision: tool functionality, price, modelling language support, etc. In this paper we discuss the aspect of usability, which has already been recognized as an important topic in software engineering and web design. We conduct a literature review to find out the current state of research on the usability in the BPM field. The results of the literature review show, that although a number of research papers mention the importance of usability for BPM tools, real usability evaluation studies have rarely been undertaken. Based on the results of the literature analysis, the possible research directions in the field of usability of BPM tools are suggested.", "title": "" }, { "docid": "80e4d60d0687e44b027074c193fe2083", "text": "Sexual activity involves excitement with high arousal and pleasure as typical features of emotions. Brain activations specifically related to erotic feelings and those related to general emotional processing are therefore hard to disentangle. Using fMRI in 21 healthy subjects (11 males and 10 females), we investigated regions that show activations specifically related to the viewing of sexually intense pictures while controlling for general emotional arousal (GEA) or pleasure. Activations in the ventral striatum and hypothalamus were found to be modulated by the stimulus' specific sexual intensity (SSI) while activations in the anterior cingulate cortex were associated with an interaction between sexual intensity and emotional valence. In contrast, activation in other regions like the dorsomedial prefrontal cortex, the mediodorsal thalamus and the amygdala was associated only with a general emotional component during sexual arousal. No differences were found in these effects when comparing females and males. Our findings demonstrate for the first time neural differentiation between emotional and sexual components in the neural network underlying sexual arousal.", "title": "" }, { "docid": "4a677dae1152d5d69369ac76590f52f3", "text": "This paper presents a digital implementation of power control for induction cooking appliances with domestic low-cost vessels. The proposed control strategy is based on the asymmetrical duty-cycle with automatic switching-frequency tracking control employing a digital phase locked-loop (DPLL) control on high performance microcontroller. 
With the use of a phase locked-loop control, this method ensures the zero voltage switching (ZVS) operation under load parameter variation and power control at any power levels. Experimental results have shown that the proposed control method can reach the minimum output power at 15% of the rated value.", "title": "" }, { "docid": "16924ee2e6f301d962948884eeafc934", "text": "Companies have realized they need to hire data scientists, academic institutions are scrambling to put together data-science programs, and publications are touting data science as a hot-even \"sexy\"-career choice. However, there is confusion about what exactly data science is, and this confusion could lead to disillusionment as the concept diffuses into meaningless buzz. In this article, we argue that there are good reasons why it has been hard to pin down exactly what is data science. One reason is that data science is intricately intertwined with other important concepts also of growing importance, such as big data and data-driven decision making. Another reason is the natural tendency to associate what a practitioner does with the definition of the practitioner's field; this can result in overlooking the fundamentals of the field. We believe that trying to define the boundaries of data science precisely is not of the utmost importance. We can debate the boundaries of the field in an academic setting, but in order for data science to serve business effectively, it is important (i) to understand its relationships to other important related concepts, and (ii) to begin to identify the fundamental principles underlying data science. Once we embrace (ii), we can much better understand and explain exactly what data science has to offer. Furthermore, only once we embrace (ii) should we be comfortable calling it data science. In this article, we present a perspective that addresses all these concepts. We close by offering, as examples, a partial list of fundamental principles underlying data science.", "title": "" }, { "docid": "3f82f5b9f146e38311334bd71ea4588b", "text": "We present a novel algorithm for performing integrated segmentation and 3D pose estimation of a human body from multiple views. Unlike other related state of the art techniques which focus on either segmentation or pose estimation individually, our approach tackles these two tasks together. Normally, when optimizing for pose, it is traditional to use some fixed set of features, e.g. edges or chamfer maps. In contrast, our novel approach consists of optimizing a cost function based on a Markov Random Field (MRF). This has the advantage that we can use all the information in the image: edges, background and foreground appearances, as well as the prior information on the shape and pose of the subject and combine them in a Bayesian framework. Previously, optimizing such a cost function would have been computationally infeasible. However, our recent research in dynamic graph cuts allows this to be done much more efficiently than before. We demonstrate the efficacy of our approach on challenging motion sequences. 
Note that although we target the human pose inference problem in the paper, our method is completely generic and can be used to segment and infer the pose of any specified rigid, deformable or articulated object.", "title": "" }, { "docid": "79ff4bd891538a0d1b5a002d531257f2", "text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.", "title": "" }, { "docid": "785ca963ea1f9715cdea9baede4c6081", "text": "In this paper, factor analysis is applied on a set of data that was collected to study the effectiveness of 58 different agile practices. The analysis extracted 15 factors, each was associated with a list of practices. These factors with the associated practices can be used as a guide for agile process improvement. Correlations between the extracted factors were calculated, and the significant correlation findings suggested that people who applied iterative and incremental development and quality assurance practices had a high success rate, that communication with the customer was not very popular as it had negative correlations with governance and iterative and incremental development. Also, people who applied governance practices also applied quality assurance practices. Interestingly success rate related negatively with traditional analysis methods such as Gantt chart and detailed requirements specification.", "title": "" }, { "docid": "94d66ffd9d9c2ccb08be7059075cd018", "text": "Query expansion is generally a useful technique in improving search performance. However, some expanded query terms obtained by traditional statistical methods (e.g., pseudo-relevance feedback) may not be relevant to the user’s information need, while some relevant terms may not be contained in the feedback documents at all. Recent studies utilize external resources to detect terms that are related to the query, and then adopt these terms in query expansion. In this paper, we present a study in the use of Freebase [6], which is an open source general-purpose ontology, as a source for deriving expansion terms. FreeBase provides a graphbased model of human knowledge, from which a rich and multi-step structure of instances related to the query concept can be extracted, as a complement to the traditional statistical approaches to query expansion. We propose a novel method, based on the well-principled DempsterShafer’s (D-S) evidence theory, to measure the certainty of expansion terms from the Freebase structure. The expanded query model is then combined with a state of the art statistical query expansion model – the Relevance Model (RM3). 
Experiments show that the proposed method achieves significant improvements over RM3.", "title": "" }, { "docid": "e8523816ead27edc299397d2cad68bc4", "text": "This research investigated the link between ethical leadership and performance using data from the People’s Republic of China. Consistent with social exchange, social learning, and social identity theories, we examined leader–member exchange (LMX), self-efficacy, and organizational identification as mediators of the ethical leadership to performance relationship. Results from 72 supervisors and 201 immediate direct reports revealed that ethical leadership was positively and significantly related to employee performance as rated by their immediate supervisors and that this relationship was fully mediated by LMX, self-efficacy, and organizational identification, controlling for procedural fairness. We discuss implications of our findings for theory and practice.", "title": "" } ]
scidocsrr
f753efce12f912b664bee62369f28e8f
Regular Linear Temporal Logic
[ { "docid": "b79b3497ae4987e00129eab9745e1398", "text": "The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus, programs and specifications can be viewed as descriptions of languages over some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages. By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis, use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.", "title": "" } ]
[ { "docid": "b9f774ccd37e0bf0e399dd2d986f258d", "text": "Predicting the final state of a running process, the remaining time to completion or the next activity of a running process are important aspects of runtime process management. Runtime management requires the ability to identify processes that are at risk of not meeting certain criteria in order to offer case managers decision information for timely intervention. This in turn requires accurate prediction models for process outcomes and for the next process event, based on runtime information available at the prediction and decision point. In this paper, we describe an initial application of deep learning with recurrent neural networks to the problem of predicting the next process event. This is both a novel method in process prediction, which has previously relied on explicit process models in the form of Hidden Markov Models (HMM) or annotated transition systems, and also a novel application for deep learning methods.", "title": "" }, { "docid": "8bcb5def2a0b847a5d0800849443e5bc", "text": "BACKGROUND\nMMPs play a crucial role in the process of cancer invasion and metastasis.\n\n\nMETHODS\nThe influence of NAC on invasion and MMP-9 production of human bladder cancer cell line T24 was investigated using an in vitro invasion assay, gelatin zymography, Western and Northern blot analyses and RT-PCR assays.\n\n\nRESULTS\nTPA increased the number of invading T24 cells through reconstituted basement membrane more than 10-fold compared to basal condition. NAC inhibited TPA-enhanced invasion dose-dependently. TPA increased the MMP-9 production by T24 cells without altering expression of TIMP-1 gene, while NAC suppressed TPA-enhanced production of MMP-9. Neither TPA nor NAC altered TIMP-1 mRNA level in T24 cells. In vitro experiments demonstrated that MMP-9 was directly inhibited by NAC but was not influenced by TPA.\n\n\nCONCLUSION\nNAC limits invasion of T24 human bladder cancer cells by inhibiting the MMP-9 production in addition to a direct inhibition of MMP-9 activity.", "title": "" }, { "docid": "03cea891c4a9fdc77832979267f9dca9", "text": "Any multiprocessing facility must include three features: elementary exclusion, data protection, and process saving. While elementary exclusion must rest on some hardware facility (e.g. a test-and-set instruction), the other two requirements are fulfilled by features already present in applicative languages. Data protection may be obtained through the use of procedures (closures or funargs),and process saving may be obtained through the use of the CATCH operator. The use of CATCH, in particular, allows an elegant treatment of process saving.\n We demonstrate these techniques by writing the kernel and some modules for a multiprocessing system. The kernel is very small. Many functions which one would normally expect to find inside the kernel are completely decentralized. We consider the implementation of other schedulers, interrupts, and the implications of these ideas for language design.", "title": "" }, { "docid": "b66be42a294208ec31d44e57ae434060", "text": "Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. 
A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian pdfs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian PDFs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.", "title": "" }, { "docid": "8e0e77e78c33225922b5a45fee9b4242", "text": "In this paper, we address the issues of maintaining sensing coverage and connectivity by keeping a minimum number of sensor nodes in the active mode in wireless sensor networks. We investigate the relationship between coverage and connectivity by solving the following two sub-problems. First, we prove that if the radio range is at least twice the sensing range, complete coverage of a convex area implies connectivity among the working set of nodes. Second, we derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for complete coverage. Based on the optimality conditions, we then devise a decentralized density control algorithm, Optimal Geographical Density Control (OGDC), for density control in large scale sensor networks. The OGDC algorithm is fully localized and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range. 
Ns-2 simulations show that OGDC outperforms existing density control algorithms [25, 26, 29] with respect to the number of working nodes needed and network lifetime (with up to 50% improvement), and achieves almost the same coverage as the algorithm with the best result.", "title": "" }, { "docid": "f5d6bfa66e4996bddc6ca1fbecc6c25d", "text": "Internet-connected consumer electronics marketed as smart devices (also known as Internet-of-Things devices) usually lack essential security protection mechanisms. This puts user privacy and security in great danger. One of the essential steps to compromise vulnerable devices is locating them through horizontal port scans. In this paper, we focus on the problem of detecting horizontal port scans in home networks. We propose a software-defined networking (SDN)-based firewall platform that is capable of detecting horizontal port scans. Current SDN implementations (e.g., OpenFlow) do not provide access to packet-level information, which is essential for network security applications, due to performance limitations. Our platform uses FleXight, our proposed new information channel between SDN controller and data path elements to access packet-level information. FleXight uses per-flow sampling and dynamical sampling rate adjustments to provide the necessary information to the controller while keeping the overhead very low. We evaluate our solution on a large real-world packet trace from an ISP and show that our system can identify all attackers and 99% of susceptible victims with only 0.75% network overhead. We also present a detailed usability analysis of our system.", "title": "" }, { "docid": "405bcd759da950aa0d4b8aeb9d8488bb", "text": "Background/Aim: Using machine learning approaches as non-invasive methods have been used recently as an alternative method in staging chronic liver diseases for avoiding the drawbacks of biopsy. This study aims to evaluate different machine learning techniques in prediction of advanced fibrosis by combining the serum bio-markers and clinical information to develop the classification models. Methods: A prospective cohort of 39,567 patients with chronic hepatitis C was divided into two sets—one categorized as mild to moderate fibrosis F0-F2, and the other categorized as advanced fibrosis F3-F4 according to METAVIR score. Decision tree, genetic algorithm, particle swarm optimization, and multi-linear regression models for advanced fibrosis risk prediction were developed. Receiver operating characteristic curve analysis was performed to evaluate the performance of the proposed models. Results: Age, platelet count, AST, and albumin were found to be statistically significant to advanced fibrosis. The machine learning algorithms under study were able to predict advanced fibrosis in patients with HCC with AUROC ranging between 0.73 and 0.76 and accuracy between 66.3 and 84.4 percent. Conclusions: Machine-learning approaches could be used as alternative methods in prediction of the risk of advanced liver fibrosis due to chronic hepatitis C.", "title": "" }, { "docid": "b79fc7fb12d1ac2fc8d6ad3f7123364a", "text": "We characterize the structural and electronic changes during the photoinduced enol-keto tautomerization of 2-(2'-hydroxyphenyl)-benzothiazole (HBT) in a nonpolar solvent (tetrachloroethene). We quantify the redistribution of electronic charge and intramolecular proton translocation in real time by combining UV-pump/IR-probe spectroscopy and quantum chemical modeling. 
We find that the photophysics of this prototypical molecule involves proton coupled electron transfer (PCET), from the hydroxyphenyl to the benzothiazole rings, resulting from excited state intramolecular proton transfer (ESIPT) coupled to electron transfer through the conjugated double bond linking the two rings. The combination of polarization-resolved mid-infrared spectroscopy of marker modes and time-dependent density functional theory (TD-DFT) provides key insights into the transient structures of the molecular chromophore during ultrafast isomerization dynamics.", "title": "" }, { "docid": "ffb7754f7ecabf639aba0ef257615558", "text": "Novel approaches have taken Augmented Reality (AR) beyond traditional body-worn or hand-held displays, leading to the creation of a new branch of AR: Spatial Augmented Reality (SAR) providing additional application areas. SAR is a rapidly emerging field that uses digital projectors to render virtual objects onto 3D objects in the real space. When mounting digital projectors on robots, this collaboration paves the way for unique Human-Robot Interactions (HRI) that otherwise would not be possible. Adding to robots the capability of projecting interactive Augmented Reality content enables new forms of interactions between humans, robots, and virtual objects, enabling new applications. In this work it is investigated the use of SAR techniques on mobile robots for better enabling this to interact in the future with elderly or injured people during rehabilitation, or with children in the pediatric ward of a hospital.", "title": "" }, { "docid": "dc3495ec93462e68f606246205a8416d", "text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.", "title": "" }, { "docid": "4c12b827ee445ab7633aefb8faf222a2", "text": "Research shows that speech dereverberation (SD) with Deep Neural Network (DNN) achieves the state-of-the-art results by learning spectral mapping, which, simultaneously, lacks the characterization of the local temporal spectral structures (LTSS) of speech signal and calls for a large storage space that is impractical in real applications. Contrarily, the Convolutional Neural Network (CNN) offers a better modeling ability by considering local patterns and has less parameters with its weights sharing property, which motivates us to employ the CNN for SD task. 
In this paper, to our knowledge, a Deep Convolutional Encoder-Decoder (DCED) model is proposed for the first time in dealing with the SD task (DCED-SD), where the advantage of the DCED-SD model lies in its powerful LTSS modeling capability via convolutional encoder-decoder layers with smaller storage requirement. By taking the reverberant and anechoic spectrum as training pairs, the proposed DCED-SD is well-trained in a supervised manner with less convergence time. Additionally, the DCED-SD model size is 23 times smaller than the size of DNN-SD model with better performance achieved. By using the simulated and real-recorded data, extensive experiments have been conducted to demonstrate the superiority of DCED-based SD method over the DNN-based SD method under different unseen reverberant conditions.", "title": "" }, { "docid": "728215fb8bb89c7830768e705e5f1c1c", "text": "Human and automated tutors attempt to choose pedagogical activities that will maximize student learning, informed by their estimates of the student's current knowledge. There has been substantial research on tracking and modeling student learning, but significantly less attention on how to plan teaching actions and how the assumed student model impacts the resulting plans. We frame the problem of optimally selecting teaching actions using a decision-theoretic approach and show how to formulate teaching as a partially observable Markov decision process planning problem. This framework makes it possible to explore how different assumptions about student learning and behavior should affect the selection of teaching actions. We consider how to apply this framework to concept learning problems, and we present approximate methods for finding optimal teaching actions, given the large state and action spaces that arise in teaching. Through simulations and behavioral experiments, we explore the consequences of choosing teacher actions under different assumed student models. In two concept-learning tasks, we show that this technique can accelerate learning relative to baseline performance.", "title": "" }, { "docid": "d780db3ec609d74827a88c0fa0d25f56", "text": "Highly automated test vehicles are rare today, and (independent) researchers have often limited access to them. Also, developing fully functioning system prototypes is time and effort consuming. In this paper, we present three adaptions of the Wizard of Oz technique as a means of gathering data about interactions with highly automated vehicles in early development phases. Two of them address interactions between drivers and highly automated vehicles, while the third one is adapted to address interactions between pedestrians and highly automated vehicles. The focus is on the experimental methodology adaptations and our lessons learned.", "title": "" }, { "docid": "2e0fb1af3cb0fdd620144eb93d55ef3e", "text": "A privacy policy is a legal document, used by websites to communicate how the personal data that they collect will be managed. By accepting it, the user agrees to release his data under the conditions stated by the policy. Privacy policies should provide enough information to enable users to make informed decisions. Privacy regulations support this by specifying what kind of information has to be provided. As privacy policies can be long and difficult to understand, users tend not to read them. Because of this, users generally agree with a policy without knowing what it states and whether aspects important to him are covered at all. 
In this paper we present a solution to assist the user by providing a structured way to browse the policy content and by automatically assessing the completeness of a policy, i.e. the degree of coverage of privacy categories important to the user. The privacy categories are extracted from privacy regulations, while text categorization and machine learning techniques are used to verify which categories are covered by a policy. The results show the feasibility of our approach; an automatic classifier, able to associate the right category to paragraphs of a policy with an accuracy approximating that obtainable by a human judge, can be effectively created.", "title": "" }, { "docid": "1db14c8cb5434bd28a2d4b3e6b928a9a", "text": "Nested virtualization [1] provides an extra layer of virtualization to enhance security with fairly reasonable performance impact. Usercentric vision of cloud computing gives a high-level of control on the whole infrastructure [2], such as untrusted dom0 [3, 4]. This paper introduces RetroVisor, a security architecture to seamlessly run a virtual machine (VM) on multiple hypervisors simultaneously. We argue that this approach delivers high-availability and provides strong guarantees on multi IaaS infrastructures. The user can perform detection and remediation against potential hypervisors weaknesses, unexpected behaviors and exploits.", "title": "" }, { "docid": "dcef528dbd89bc2c26820bdbe52c3d8d", "text": "The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic anld industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user's query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections.", "title": "" }, { "docid": "af6464d1e51cb59da7affc73977eed71", "text": "Recommender systems leverage both content and user interactions to generate recommendations that fit users' preferences. The recent surge of interest in deep learning presents new opportunities for exploiting these two sources of information. To recommend items we propose to first learn a user-independent high-dimensional semantic space in which items are positioned according to their substitutability, and then learn a user-specific transformation function to transform this space into a ranking according to the user's past preferences. An advantage of the proposed architecture is that it can be used to effectively recommend items using either content that describes the items or user-item ratings. We show that this approach significantly outperforms state-of-the-art recommender systems on the MovieLens 1M dataset.", "title": "" }, { "docid": "a46954af087b37ebfc04866dca1552d2", "text": "An exoskeleton has to be lightweight, compliant, yet powerful to fulfill the demanding task of walking. 
This imposes a great challenge for the actuator design. Electric motors, by far the most common actuator in robotic, orthotic, and prosthetic devices, cannot provide sufficiently high peak and average power and force/torque output, and they normally require high-ratio, heavy reducer to produce the speeds and high torques needed for human locomotion. Studies on the human muscle-tendon system have shown that muscles (including tendons and ligaments) function as a spring, and by storing energy and releasing it at a proper moment, locomotion becomes more energy efficient. Inspired by the muscle behavior, we propose a novel actuation strategy for exoskeleton design. In this paper, the collected gait data are analyzed to identify the spring property of the human muscle-tendon system. Theoretical optimization results show that adding parallel springs can reduce the peak torque by 66%, 53%, and 48% for hip flexion/extension (F/E), hip abduction/adduction (A/A), and ankle dorsi/plantar flexion (D/PF), respectively, and the rms power by 50%, 45%, and 61%, respectively. Adding a series spring (forming a Series Elastic Actuator, SEA) reduces the peak power by 79% for ankle D/PF, and by 60% for hip A/A. A SEA does not reduce the peak power demand at other joints. The optimization approach can be used for designing other wearable robots as well.", "title": "" }, { "docid": "26d0809a2c8ab5d5897ca43c19fc2b57", "text": "This study outlines a simple 'Profilometric' method for measuring the size and function of the wrinkles. Wrinkle size was measured in relaxed conditions and the representative parameters were considered to be the mean 'Wrinkle Depth', the mean 'Wrinkle Area', the mean 'Wrinkle Volume', and the mean 'Wrinkle Tissue Reservoir Volume' (WTRV). These parameters were measured in the wrinkle profiles under relaxed conditions. The mean 'Wrinkle to Wrinkle Distance', which measures the distance between two adjacent wrinkles, is an accurate indicator of the muscle relaxation level during replication. This parameter, identified as the 'Muscle Relaxation Level Marker', and its reduction are related to increased muscle tone or contraction and vice versa. The mean Wrinkle to Wrinkle Distance is very important in experiments where the effectiveness of an anti-wrinkle preparation is tested. Thus, the correlative wrinkles' replicas, taken during follow up in different periods, are only those that show the same mean Wrinkle to Wrinkle Distance. The wrinkles' functions were revealed by studying the morphological changes of the wrinkles and their behavior during relaxed conditions, under slight increase of muscle tone and under maximum wrinkling. Facial wrinkles are not a single groove, but comprise an anatomical and functional unit (the 'Wrinkle Unit') along with the surrounding skin. This Wrinkle Unit participates in the functions of a central neuro-muscular system of the face responsible for protection, expression, and communication. Thus, the Wrinkle Unit, the superficial musculoaponeurotic system (superficial fascia of the face), the underlying muscles controlled by the CNS and Psyche, are considered to be a 'Functional Psycho-Neuro-Muscular System of the Face for Protection, Expression and Communication'. 
The three major functions of this system exerted in the central part of the face and around the eyes are: (1) to open and close the orifices (eyes, nose, and mouth), contributing to their functions; (2) to protect the eyes from sun, foreign bodies, etc.; (3) to contribute to facial expression, reflecting emotions (real, pretended, or theatrical) during social communication. These functions are exercised immediately and easily, without any opposition ('Wrinkling Ability') because of the presence of the Wrinkle Unit that gives (a) the site of refolding (the wrinkle is a waiting fold, ready to respond quickly at any moment for any skin mobility need) and (b) the appropriate skin tissue for extension or compression (this reservoir of tissue is measured by the parameter of WTRV). The Wrinkling Ability of a skin area is linked to the wrinkle's functions and can be measured by the parameter of 'Skin Tissue Volume Compressed around the Wrinkle' in mm(3) per 30 mm wrinkle during maximum wrinkling. The presence of wrinkles is a sign that the skin's 'Recovery Ability' has declined progressively with age. The skin's Recovery Ability is linked to undesirable cosmetic effects of ageing and wrinkling. This new Profilometric method can be applied in studies where the effectiveness of anti-wrinkle preparations or the cosmetic results of surgery modalities are tested, as well as in studies focused on the functional physiology of the Wrinkle Unit.", "title": "" }, { "docid": "3e80dc7319f1241e96db42033c16f6b4", "text": "Automatic expert assignment is a common problem encountered in both industry and academia. For example, for conference program chairs and journal editors, in order to collect \"good\" judgments for a paper, it is necessary for them to assign the paper to the most appropriate reviewers. Choosing appropriate reviewers of course includes a number of considerations such as expertise and authority, but also diversity and avoiding conflicts. In this paper, we explore the expert retrieval problem and implement an automatic paper-reviewer recommendation system that considers aspects of expertise, authority, and diversity. In particular, a graph is first constructed on the possible reviewers and the query paper, incorporating expertise and authority information. Then a Random Walk with Restart (RWR) [1] model is employed on the graph with a sparsity constraint, incorporating diversity information. Extensive experiments on two reviewer recommendation benchmark datasets show that the proposed method obtains performance gains over state-of-the-art reviewer recommendation systems in terms of expertise, authority, diversity, and, most importantly, relevance as judged by human experts.", "title": "" } ]
scidocsrr
c7b76e8dedac804f9cc551b6ac58b189
Groupwise Maximin Fair Allocation of Indivisible Goods
[ { "docid": "1a0889892eb87ebd26abb0a295fae51b", "text": "The fairness notion of maximin share (MMS) guarantee underlies a deployed algorithm for allocating indivisible goods under additive valuations. Our goal is to understand when we can expect to be able to give each player his MMS guarantee. Previous work has shown that such an MMS allocation may not exist, but the counterexample requires a number of goods that is exponential in the number of players; we give a new construction that uses only a linear number of goods. On the positive side, we formalize the intuition that these counterexamples are very delicate by designing an algorithm that provably finds an MMS allocation with high probability when valuations are drawn at random.", "title": "" }, { "docid": "30e47a275e7e00f80c8f12061575ee82", "text": "Spliddit is a first-of-its-kind fair division website, which offers provably fair solutions for the division of rent, goods, and credit. In this note, we discuss Spliddit's goals, methods, and implementation.", "title": "" } ]
[ { "docid": "c5ee2a4e38dfa27bc9d77edcd062612f", "text": "We perform transaction-level analyses of entrusted loans – the largest component of shadow banking in China. There are two types – affiliated and non-affiliated. The latter involve a much higher interest rate than the former and official bank loan rates, and largely flow into the real estate industry. Both involve firms with privileged access to cheap capital to channel funds to less privileged firms and increase when credit is tight. The pricing of entrusted loans, especially that of non-affiliated loans, incorporates fundamental and informational risks. Stock market reactions suggest that both affiliated and non-affiliated loans are fairly-compensated investments.", "title": "" }, { "docid": "981a03df711c7c9aabdf163487887824", "text": "We introduce a new paradigm to investigate unsupervised learning, reducing unsupervised learning to supervised learning. Specifically, we mitigate the subjectivity in unsupervised decision-making by leveraging knowledge acquired from prior, possibly heterogeneous, supervised learning tasks. We demonstrate the versatility of our framework via comprehensive expositions and detailed experiments on several unsupervised problems such as (a) clustering, (b) outlier detection, and (c) similarity prediction under a common umbrella of meta-unsupervised-learning. We also provide rigorous PAC-agnostic bounds to establish the theoretical foundations of our framework, and show that our framing of metaclustering circumvents Kleinberg’s impossibility theorem for clustering.", "title": "" }, { "docid": "40d5b2cb10e7b6ca51e8845d68313b93", "text": "With the advance of computer technology and smart device, many applications, such as face recognition and object recognition, have been developed to facilitate human-computer interaction (HCI) efficiently. In this respect, the hand-held object recognition plays an important role in HCI. It can be used not only to help computer understand useros intentions but also to meet useros requirements. In recent years the appearance of convolutional neural networks (CNNs) greatly enhances the performance of object recognition and this technology has been applied to hand-held object recognition in some works. However, these supervised learning models need large number of labelled data and many iterations to train their large number of parameters. This is a huge challenge for HCI, because HCI need to deal with in-time and itos difficult to collect enough labeled data. Especially when a new category need to be learnt, it will spend a lot of time to update the model. In this work, we adopt the one-shot learning method to solve this problem. This method does not need to update the model when a new category need to be learnt. Moreover, depth image is robust to light and color variation. We fuse depth image information to harness the complementary relationship between the two modalities to improve the performance of hand-held object recognition. Experimental results on our handheld object dataset demonstrate that our method for hand-held object recognition achieves an improvement of performance.", "title": "" }, { "docid": "5168f7f952d937460d250c44b43f43c0", "text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency 900 MHz, it comes in handy for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz where the axial ratio of proposed antenna model is less than 3 dB. 
The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and impedance bandwidth of 256 MHz (28.5%).", "title": "" }, { "docid": "c4df97f3db23c91f0ce02411d2e1e999", "text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.", "title": "" }, { "docid": "0cc61499ca4eaba9d23214fc7985f71c", "text": "We review the recent progress of the latest 100G to 1T class coherent PON technology using a simplified DSP suitable for forthcoming 5G era optical access systems. The highlight is the presentation of the first demonstration of 100 Gb/s/λ × 8 (800 Gb/s) based PON.", "title": "" }, { "docid": "016112b04486159e02c7e356b0ff63b9", "text": "The contribution of this paper is two-fold. First, we present indexing by Latent Dirichlet Allocation (LDI), an automatic document indexing method with a probabilistic concept search. The probability distributions in LDI utilizes those in Latent Dirichlet Allocation (LDA), which is a generative topic model that has been previously used in applications for document indexing tasks. However, those ad hoc applications, or their variants with smoothing techniques as prompted by previous studies in LDA-based language modeling, would result in unsatisfactory performance as the terms in documents may not properly reflect concept space. To improve the performances, we introduce a new definition of document probability vectors in the context of LDA and present a novel scheme for automatic document indexing based on it. Second, we propose an ensemble model (EnM) for document indexing. The EnM combines basis indexing models by assigning different weights and tries to uncover the optimal weights with which the mean average precision (MAP) is maximized. To solve the optimization problem, we propose three algorithms, EnM.B, EnM.CD and EnM.PCD. EnM.B is derived based on the boosting method, EnM.CD the coordinate descent method, and EnM.PCD the parallel property of the EnM.CD.
The results of our computational experiment on a benchmark data set indicate that both the proposed approaches are viable options in the document indexing tasks. © 2013 Published by Elsevier Ltd.", "title": "" }, { "docid": "893e1e17570e5daa83827d91b1503185", "text": "We introduce a similarity-based machine learning approach for detecting non-market, adversarial, malicious Android apps. By adversarial, we mean those apps designed to avoid detection. Our approach relies on identifying the Android applications that are similar to an adversarial known Android malware. In our approach, similarity is detected statically by computing the similarity score between two apps based on their methods similarity. The similarity between methods is computed using the normalized compression distance (NCD) in dependence of either zlib or bz2 compressors. The NCD calculates the semantic similarity between pair of methods in two compared apps. The first app is one of the sample apps in the input dataset, while the second app is one of malicious apps stored in a malware database. Later all the computed similarity scores are used as features for training a supervised learning classifier to detect suspicious apps with high similarity score to the malicious ones in the database.", "title": "" }, { "docid": "3b0cebf20d6b71c7b28232fa95117572", "text": "FS chmod mkdir open read stat unlink write BetrFS 4913 ± 0.27 67072 ± 25.68 1697 ± 0.12 561 ± 0.01 1076 ± 0.01 47873 ± 7.7 32142 ± 4.35 btrfs 4574 ± 0.27 24805 ± 13.92 1812 ± 0.12 561 ± 0.01 1258 ± 0.01 26131 ± 0.73 3891 ± 0.08 ext4 4970 ± 0.14 41478 ± 18.99 1886 ± 0.13 556 ± 0.01 1167 ± 0.05 16209 ± 0.2 3359 ± 0.04 XFS 5342 ± 0.21 73782 ± 19.27 1757 ± 0.12 1384 ± 0.07 1134 ± 0.02 19124 ± 0.32 9192 ± 0.28 zfs 36449 ± 118.37 171080 ± 307.73 2681 ± 0.08 6467 ± 0.06 1913 ± 0.04 78946 ± 7.37 18382 ± 0.42", "title": "" }, { "docid": "7ea3acae058dc6067214eced603ffff0", "text": "We address the problem of learning an efficient and adaptive physical layer encoding to communicate binary information over an impaired channel. In contrast to traditional work, we treat the problem as an unsupervised machine learning problem focusing on optimizing reconstruction loss through artificial impairment layers in an autoencoder (we term this a channel autoencoder) and introduce several new regularizing layers which emulate common wireless channel impairments. We also discuss the role of attention models in the form of the radio transformer network for helping to recover canonical signal representations before decoding. We demonstrate some promising initial capacity results from this approach and address remaining challenges before such a system could become practical.", "title": "" }, { "docid": "1f18297ddf254636ad0a3117abff45f3", "text": "Squared planar markers are a popular tool for fast, accurate and robust camera localization, but its use is frequently limited to a single marker, or at most, to a small set of them for which their relative pose is known beforehand. Mapping and localization from a large set of planar markers is yet a scarcely treated problem in favour of keypoint-based approaches. However, while keypoint detectors are not robust to rapid motion, large changes in viewpoint, or significant changes in appearance, fiducial markers can be robustly detected under a wider range of conditions. This paper proposes a novel method to simultaneously solve the problems of mapping and localization from a set of squared planar markers.
First, a quiver of pairwise relative marker poses is created, from which an initial pose graph is obtained. The pose graph may contain small pairwise pose errors, that when propagated, leads to large errors. Thus, we distribute the rotational and translational error along the basis cycles of the graph so as to obtain a corrected pose graph. Finally, we perform a global pose optimization by minimizing the reprojection errors of the planar markers in all observed frames. The experiments conducted show that our method performs better than Structure from Motion and visual SLAM techniques.", "title": "" }, { "docid": "89322e0d2b3566aeb85eeee9f505d5b2", "text": "Parkinson's disease is a neurological disorder with evolving layers of complexity. It has long been characterised by the classical motor features of parkinsonism associated with Lewy bodies and loss of dopaminergic neurons in the substantia nigra. However, the symptomatology of Parkinson's disease is now recognised as heterogeneous, with clinically significant non-motor features. Similarly, its pathology involves extensive regions of the nervous system, various neurotransmitters, and protein aggregates other than just Lewy bodies. The cause of Parkinson's disease remains unknown, but risk of developing Parkinson's disease is no longer viewed as primarily due to environmental factors. Instead, Parkinson's disease seems to result from a complicated interplay of genetic and environmental factors affecting numerous fundamental cellular processes. The complexity of Parkinson's disease is accompanied by clinical challenges, including an inability to make a definitive diagnosis at the earliest stages of the disease and difficulties in the management of symptoms at later stages. Furthermore, there are no treatments that slow the neurodegenerative process. In this Seminar, we review these complexities and challenges of Parkinson's disease.", "title": "" }, { "docid": "5158832e811a52ba1ff150d6335578cb", "text": "Many smart home devices provide home automation technology, but the smart home security system offers many benefits that can ensure the safety of the homeowner. Thus, Security has been an important issue in the smart home applications. Home security has two aspects, inside and outside. Inside security covers the concept of securing home from threats like fire etc. whereas, outside security is meant to secure home against any burglar/intruder etc. This study is aimed to provide an intelligent solution for home security that takes decision dynamically using the pervasive devices. In particular, smart home security can be regarded as a process with multiple outputs. In this study, to deal with nonlinear outputs, the system is modeled by multiple ANFIS, and the optimization of multiple outputs is formulated as a multiple objective decision making.", "title": "" }, { "docid": "a29bfaa53d0802bb16972539d1d878bd", "text": "This paper systematically advocates a robust and efficient unsupervised multi-class co-segmentation approach by leveraging underlying subspace manifold propagation to exploit the cross-image coherency. It can combat certain image co-segmentation difficulties due to viewpoint change, partial occlusion, complex background, transient illumination, and cluttering texture patterns. Our key idea is to construct a powerful hyper-graph joint-cut framework, which incorporates mid-level image regions-based intra-image feature representation and $L_{1}$ -manifold graph-based inter-image coherency exploration. 
For local image region generation, we propose a bi-harmonic distance distribution difference metric to govern the super-pixel clustering in a bottom-up way. It not only affords drastic data reduction but also gives rise to discriminative and structure meaningful feature representation. As for the inter-image coherency, we leverage multi-type features involved $L_{1}$ -graph to detect the underlying local manifold from cross-image regions. As a result, the implicit supervising information could be encoded into the unsupervised hyper-graph joint-cut framework. We conduct extensive experiments and make comprehensive evaluations with other state-of-the-art methods over various benchmarks, including iCoseg, MSRC, and Oxford flower. All the results demonstrate the superiorities of our method in terms of accuracy, robustness, efficiency, and versatility.", "title": "" }, { "docid": "aa50aeb6c1c4b52ff677a313d49fd8df", "text": "Monocular depth estimation, which plays a key role in understanding 3D scene geometry, is fundamentally an ill-posed problem. Existing methods based on deep convolutional neural networks (DCNNs) have examined this problem by learning convolutional networks to estimate continuous depth maps from monocular images. However, we find that training a network to predict a high spatial resolution continuous depth map often suffers from poor local solutions. In this paper, we hypothesize that achieving a compromise between spatial and depth resolutions can improve network training. Based on this “compromise principle”, we propose a regression-classification cascaded network (RCCN), which consists of a regression branch predicting a low spatial resolution continuous depth map and a classification branch predicting a high spatial resolution discrete depth map. The two branches form a cascaded structure allowing the main classification branch to benefit from the auxiliary regression branch. By leveraging large-scale raw training datasets and some data augmentation strategies, our network achieves competitive or state-of-the-art results on three challenging benchmarks, including NYU Depth V2 [1], KITTI [2], and Make3D [3].", "title": "" }, { "docid": "a5fae52eeb8ca38d99091d72c91e1153", "text": "Machine learning is a popular approach to signatureless malware detection because it can generalize to never-before-seen malware families and polymorphic strains. This has resulted in its practical use for either primary detection engines or supplementary heuristic detections by anti-malware vendors. Recent work in adversarial machine learning has shown that models are susceptible to gradient-based and other attacks. In this whitepaper, we summarize the various attacks that have been proposed for machine learning models in information security, each of which requires the adversary to have some degree of knowledge about the model under attack. Importantly, even when applied to attacking machine learning malware classifier based on static features for Windows portable executable (PE) files, these previous attack methodologies may break the format or functionality of the malware. We investigate a more general framework for attacking static PE anti-malware engines based on reinforcement learning, which models more realistic attacker conditions, and subsequently provides much more modest evasion rates. A reinforcement learning (RL) agent is equipped with a set of functionality-preserving operations that it may perform on the PE file.
It learns through a series of games played against the anti-malware engine which sequence of operations is most likely to result in evasion for a given malware sample. Given the general framework, it is not surprising that the evasion rates are modest. However, the resulting RL agent can succinctly summarize blind spots of the anti-malware model. Additionally, evasive variants generated by the agent may be used to harden machine learning anti-malware engine via adversarial training.", "title": "" }, { "docid": "ab97caed9c596430c3d76ebda55d5e6e", "text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.", "title": "" }, { "docid": "7100b0adb93419a50bbaeb1b7e32edf5", "text": "Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility - are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns.", "title": "" }, { "docid": "648a6b999afd246cf2b1882ff9f4faf0", "text": "Vehicular Ad-hoc Network (VANET) is a key technology in the domain of transportation which serves as a platform for vehicles to communicate with each other and intelligently exchange critical information, such as collision avoidance messages. Given that potentially life critical information could be exchanged in real time among network entities, it is paramount that this information is authentic (from a legitimate source), reliable and accurate. Moreover, mobility of vehicles creates different contexts in VANET resulting in a diversity of requirements for trust management. This work focuses on the effective modelling and management of trust both as a prerequisite to inter-vehicle communication and as a security measure. To this end, we propose a comprehensive set of criteria which work towards the effective modelling and management of trust in VANET to ensure that the unique characteristics of the network (such as high mobility, dispersion of vehicles, and lack of central architecture) are considered. The contribution of this paper is twofold. (1) We propose 16 criteria for effective trust management in VANET, and (2) evaluate various available trust models based on these proposed criteria.", "title": "" }, { "docid": "ced0dfa1447b86cc5af2952012960511", "text": "OBJECTIVE\nThe pathophysiology of peptic ulcer disease (PUD) in liver cirrhosis (LC) and chronic hepatitis has not been established. 
The aim of this study was to assess the role of portal hypertension from PUD in patients with LC and chronic hepatitis.\n\n\nMATERIALS AND METHODS\nWe analyzed the medical records of 455 hepatic vein pressure gradient (HVPG) and esophagogastroduodenoscopy patients who had LC or chronic hepatitis in a single tertiary hospital. The association of PUD with LC and chronic hepatitis was assessed by univariate and multivariate analysis.\n\n\nRESULTS\nA total of 72 PUD cases were detected. PUD was associated with LC more than with chronic hepatitis (odds ratio [OR]: 4.13, p = 0.03). In the univariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 4.34, p = 0.04) and smoking was associated with PUD in patients with chronic hepatitis (OR: 3.61, p = 0.04). In the multivariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 2.93, p = 0.04). However, HVPG was not related to PUD in patients with LC or chronic hepatitis.\n\n\nCONCLUSION\nAccording to the present study, patients with LC have a higher risk of PUD than those with chronic hepatitis. The risk factor was taking ulcerogenic medication. However, HVPG reflecting portal hypertension was not associated with PUD in LC or chronic hepatitis (Clinicaltrial number NCT01944878).", "title": "" } ]
scidocsrr
f31cfb337c6776f7a3cce94b2958b738
Dynamic Humanoid Locomotion: A Scalable Formulation for HZD Gait Optimization
[ { "docid": "ae780733af9737eb5c007c8bc7b68551", "text": "Contact constraints, such as those between a foot and the ground or a hand and an object, are inherent in many robotic tasks. These constraints define a manifold of feasible states; while well understood mathematically, they pose numerical challenges to many algorithms for planning and controlling whole-body dynamic motions. In this paper, we present an approach to the synthesis and stabilization of complex trajectories for both fully-actuated and underactuated robots subject to contact constraints. We introduce a trajectory optimization algorithm (DIRCON) that extends the direct collocation method, naturally incorporating manifold constraints to produce a nominal trajectory with third-order integration accuracy-a critical feature for achieving reliable tracking control. We adapt the classical time-varying linear quadratic regulator to produce a local cost-to-go in the manifold tangent plane. Finally, we descend the cost-to-go using a quadratic program that incorporates unilateral friction and torque constraints. This approach is demonstrated on three complex walking and climbing locomotion examples in simulation.", "title": "" } ]
[ { "docid": "a9768bced10c55345f116d7d07d2bc5a", "text": "In this paper, we propose a variety of distance measures for hesitant fuzzy sets, based on which the corresponding similarity measures can be obtained. We investigate the connections of the aforementioned distance measures and further develop a number of hesitant ordered weighted distance measures and hesitant ordered weighted similarity measures. They can alleviate the influence of unduly large (or small) deviations on the aggregation results by assigning them low (or high) weights. Several numerical examples are provided to illustrate these distance and similarity measures. 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "0afe679d5b022cc31a3ce69b967f8d77", "text": "Cyber-crime has reached unprecedented proportions in this day and age. In addition, the internet has created a world with seemingly no barriers while making a countless number of tools available to the cyber-criminal. In light of this, Computer Forensic Specialists employ state-of-the-art tools and methodologies in the extraction and analysis of data from storage devices used at the digital crime scene. The focus of this paper is to conduct an investigation into some of these Forensic tools eg.Encase®. This investigation will address commonalities across the Forensic tools, their essential differences and ultimately point out what features need to be improved in these tools to allow for effective autopsies of storage devices.", "title": "" }, { "docid": "febc387da7c4ee2c576393d54a0c142e", "text": "Sensors measure physical quantities of the environment for sensing and actuation systems, and are widely used in many commercial embedded systems such as smart devices, drones, and medical devices because they offer convenience and accuracy. As many sensing and actuation systems depend entirely on data from sensors, these systems are naturally vulnerable to sensor spoofing attacks that use fabricated physical stimuli. As a result, the systems become entirely insecure and unsafe. In this paper, we propose a new type of sensor spoofing attack based on saturation. A sensor shows a linear characteristic between its input physical stimuli and output sensor values in a typical operating region. However, if the input exceeds the upper bound of the operating region, the output is saturated and does not change as much as the corresponding changes of the input. Using saturation, our attack can make a sensor to ignore legitimate inputs. To demonstrate our sensor spoofing attack, we target two medical infusion pumps equipped with infrared (IR) drop sensors to control precisely the amount of medicine injected into a patients’ body. Our experiments based on analyses of the drop sensors show that the output of them could be manipulated by saturating the sensors using an additional IR source. In addition, by analyzing the infusion pumps’ firmware, we figure out the vulnerability in the mechanism handling the output of the drop sensors, and implement a sensor spoofing attack that can bypass the alarm systems of the targets. As a result, we show that both over-infusion and under-infusion are possible: our spoofing attack can inject up to 3.33 times the intended amount of fluid or 0.65 times of it for a 10 minute period.", "title": "" }, { "docid": "c4bd2667b2e105219e6a117838dd870d", "text": "Written contracts are a fundamental framework for commercial and cooperative transactions and relationships. 
Limited research has been published on the application of machine learning and natural language processing (NLP) to contracts. In this paper we report the classification of components of contract texts using machine learning and hand-coded methods. Authors studying a range of domains have found that combining machine learning and rule based approaches increases accuracy of machine learning. We find similar results which suggest the utility of considering leveraging hand coded classification rules for machine learning. We attained an average accuracy of 83.48% on a multiclass labelling task on 20 contracts combining machine learning and rule based approaches, increasing performance over machine learning alone.", "title": "" }, { "docid": "ac86e950866646a0b86d76bb3c087d0a", "text": "In this paper, an SVM-based approach is proposed for stock market trend prediction. The proposed approach consists of two parts: feature selection and prediction model. In the feature selection part, a correlation-based SVM filter is applied to rank and select a good subset of financial indexes. And the stock indicators are evaluated based on the ranking. In the prediction model part, a so called quasi-linear SVM is applied to predict stock market movement direction in term of historical data series by using the selected subset of financial indexes as the weighted inputs. The quasi-linear SVM is an SVM with a composite quasi-linear kernel function, which approximates a nonlinear separating boundary by multi-local linear classifiers with interpolation. Experimental results on Taiwan stock market datasets demonstrate that the proposed SVM-based stock market trend prediction method produces better generalization performance over the conventional methods in terms of the hit ratio. Moreover, the experimental results also show that the proposed SVM-based stock market trend prediction system can find out a good subset and evaluate stock indicators which provide useful information for investors.", "title": "" }, { "docid": "c14c575eed397c522a3bc0d2b766a836", "text": "Being highly unsaturated, carotenoids are susceptible to isomerization and oxidation during processing and storage of foods. Isomerization of trans-carotenoids to cis-carotenoids, promoted by contact with acids, heat treatment and exposure to light, diminishes the color and the vitamin A activity of carotenoids. The major cause of carotenoid loss, however, is enzymatic and non-enzymatic oxidation, which depends on the availability of oxygen and the carotenoid structure. It is stimulated by light, heat, some metals, enzymes and peroxides and is inhibited by antioxidants. Data on percentage losses of carotenoids during food processing and storage are somewhat conflicting, but carotenoid degradation is known to increase with the destruction of the food cellular structure, increase of surface area or porosity, length and severity of the processing conditions, storage time and temperature, transmission of light and permeability to O2 of the packaging. Contrary to lipid oxidation, for which the mechanism is well established, the oxidation of carotenoids is not well understood. It involves initially epoxidation, formation of apocarotenoids and hydroxylation. Subsequent fragmentations presumably result in a series of compounds of low molecular masses. Completely losing its color and biological activities, the carotenoids give rise to volatile compounds which contribute to the aroma/flavor, desirable in tea and wine and undesirable in dehydrated carrot. 
Processing can also influence the bioavailability of carotenoids, a topic that is currently of great interest.", "title": "" }, { "docid": "0802735955b52c1dae64cf34a97a33fb", "text": "Cutaneous facial aging is responsible for the increasingly wrinkled and blotchy appearance of the skin, whereas aging of the facial structures is attributed primarily to gravity. This article purports to show, however, that the primary etiology of structural facial aging relates instead to repeated contractions of certain facial mimetic muscles, the age marker fascicules, whereas gravity only secondarily abets an aging process begun by these muscle contractions. Magnetic resonance imaging (MRI) has allowed us to study the contrasts in the contour of the facial mimetic muscles and their associated deep and superficial fat pads in patients of different ages. The MRI model shows that the facial mimetic muscles in youth have a curvilinear contour presenting an anterior surface convexity. This curve reflects an underlying fat pad lying deep to these muscles, which acts as an effective mechanical sliding plane. The muscle’s anterior surface convexity constitutes the key evidence supporting the authors’ new aging theory. It is this youthful convexity that dictates a specific characteristic to the muscle contractions conveyed outwardly as youthful facial expression, a specificity of both direction and amplitude of facial mimetic movement. With age, the facial mimetic muscles (specifically, the age marker fascicules), as seen on MRI, gradually straighten and shorten. The authors relate this radiologic end point to multiple repeated muscle contractions over years that both expel underlying deep fat from beneath the muscle plane and increase the muscle resting tone. Hence, over time, structural aging becomes more evident as the facial appearance becomes more rigid.", "title": "" }, { "docid": "5033cc81abffc2b5a10635e87b025991", "text": "We describe the computing tasks involved in autonomous driving, examine existing autonomous driving computing platform implementations. To enable autonomous driving, the computing stack needs to simultaneously provide high performance, low power consumption, and low thermal dissipation, at low cost. We discuss possible approaches to design computing platforms that will meet these needs.", "title": "" }, { "docid": "33e41cf93ec8bb99c215dbce4afc34f8", "text": "This paper presents a general, trainable system for object detection in unconstrained, cluttered scenes. The system derives much of its power from a representation that describes an object class in terms of an overcomplete dictionary of local, oriented, multiscale intensity differences between adjacent regions, efficiently computable as a Haar wavelet transform. This example-based learning approach implicitly derives a model of an object class by training a support vector machine classifier using a large set of positive and negative examples. We present results on face, people, and car detection tasks using the same architecture. In addition, we quantify how the representation affects detection performance by considering several alternate representations including pixels and principal components. 
We also describe a real-time application of our person detection system as part of a driver assistance system.", "title": "" }, { "docid": "ceaa36ef5884f7fadd111744dc85f0c1", "text": "One-shot learning – the human ability to learn a new concept from just one or a few examples – poses a challenge to traditional learning algorithms, although approaches based on Hierarchical Bayesian models and compositional representations have been making headway. This paper investigates how children and adults readily learn the spoken form of new words from one example – recognizing arbitrary instances of a novel phonological sequence, and excluding non-instances, regardless of speaker identity and acoustic variability. This is an essential step on the way to learning a word’s meaning and learning to use it, and we develop a Hierarchical Bayesian acoustic model that can learn spoken words from one example, utilizing compositions of phoneme-like units that are the product of unsupervised learning. We compare people and computational models on one-shot classification and generation tasks with novel Japanese words, finding that the learned units play an important role in achieving good performance.", "title": "" }, { "docid": "3ff55193d10980cbb8da5ec757b9161c", "text": "The growth of social web contributes vast amount of user generated content such as customer reviews, comments and opinions. This user generated content can be about products, people, events, etc. This information is very useful for businesses, governments and individuals. While this content meant to be helpful analyzing this bulk of user generated content is difficult and time consuming. So there is a need to develop an intelligent system which automatically mine such huge content and classify them into positive, negative and neutral category. Sentiment analysis is the automated mining of attitudes, opinions, and emotions from text, speech, and database sources through Natural Language Processing (NLP). The objective of this paper is to discover the concept of Sentiment Analysis in the field of Natural Language Processing, and presents a comparative study of its techniques in this field. Keywords— Natural Language Processing, Sentiment Analysis, Sentiment Lexicon, Sentiment Score.", "title": "" }, { "docid": "1a9fc19eb416eebdbfe1110c37e0852b", "text": "Two important aspects of switched-mode (Class-D) amplifiers providing a high signal to noise ratio (SNR) for mechatronic applications are investigated. Signal jitter is common in digital systems and introduces noise, leading to a deterioration of the SNR. Hence, a jitter elimination technique for the transistor gate signals in power electronic converters is presented and verified. Jitter is reduced tenfold as compared to traditional approaches to values of 25 ps at the output of the power stage. Additionally, digital modulators used for the generation of the switch control signals can only achieve a limited resolution (and hence, limited SNR) due to timing constraints in digital circuits. Consequently, a specialized modulator structure based on noise shaping is presented and optimized which enables the creation of high-resolution switch control signals. 
This, together with the jitter reduction circuit, enables half-bridge output voltage SNR values of more than 100dB in an open-loop system.", "title": "" }, { "docid": "594a0aea6aeb7def20711c5f030fd2ae", "text": "Recent work in network quantization has substantially reduced the time and space complexity of neural network inference, enabling their deployment on embedded and mobile devices with limited computational and memory resources. However, existing quantization methods often represent all weights and activations with the same precision (bit-width). In this paper, we explore a new dimension of the design space: quantizing different layers with different bit-widths. We formulate this problem as a neural architecture search problem and propose a novel differentiable neural architecture search (DNAS) framework to efficiently explore its exponential search space with gradient-based optimization. Experiments show we surpass the state-of-the-art compression of ResNet on CIFAR-10 and ImageNet. Our quantized models with 21.1x smaller model size or 103.9x lower computational cost can still outperform baseline quantized or even full precision models.", "title": "" }, { "docid": "400be1fdbd0f1aebfb0da220fd62e522", "text": "Understanding users' interactions with highly subjective content---like artistic images---is challenging due to the complex semantics that guide our preferences. On the one hand one has to overcome `standard' recommender systems challenges, such as dealing with large, sparse, and long-tailed datasets. On the other, several new challenges present themselves, such as the need to model content in terms of its visual appearance, or even social dynamics, such as a preference toward a particular artist that is independent of the art they create. In this paper we build large-scale recommender systems to model the dynamics of a vibrant digital art community, Behance, consisting of tens of millions of interactions (clicks and 'appreciates') of users toward digital art. Methodologically, our main contributions are to model (a) rich content, especially in terms of its visual appearance; (b) temporal dynamics, in terms of how users prefer 'visually consistent' content within and across sessions; and (c) social dynamics, in terms of how users exhibit preferences both towards certain art styles, as well as the artists themselves.", "title": "" }, { "docid": "a497cb84141c7db35cd9a835b11f33d2", "text": "Ubiquitous nature of online social media and ever expending usage of short text messages becomes a potential source of crowd wisdom extraction especially in terms of sentiments therefore sentiment classification and analysis is a significant task of current research purview. Major challenge in this area is to tame the data in terms of noise, relevance, emoticons, folksonomies and slangs. This works is an effort to see the effect of pre-processing on twitter data for the fortification of sentiment classification especially in terms of slang word. The proposed method of pre-processing relies on the bindings of slang words on other coexisting words to check the significance and sentiment translation of the slang word. We have used n-gram to find the bindings and conditional random fields to check the significance of slang word. Experiments were carried out to observe the effect of proposed method on sentiment classification which clearly indicates the improvements in accuracy of classification. © 2016 The Authors. Published by Elsevier B.V. 
Peer-review under responsibility of organizing committee of the Twelfth International Multi-Conference on Information Processing-2016 (IMCIP-2016).", "title": "" }, { "docid": "07e93064b1971a32b5c85b251f207348", "text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.", "title": "" }, { "docid": "4307dd62177d67881a51efaccd29957d", "text": "Data mining techniques and information personalization have made significant growth in the past decade. Enormous volume of data is generated every day. Recommender systems can help users to find their specific information in the extensive volume of information. Several techniques have been presented for development of Recommender System (RS). One of these techniques is the Evolutionary Computing (EC), which can optimize and improve RS in the various applications. This study investigates the number of publications, focusing on some aspects such as the recommendation techniques, the evaluation methods and the datasets which are used.", "title": "" }, { "docid": "553dc62182acef2b7ef226d6c951229b", "text": "The key intent of this work is to present a comprehensive comparative literature survey of the state-of-art in software agent-based computing technology and its incorporationwithin themodelling and simulation domain. The original contribution of this survey is two-fold: (1) Present a concise characterization of almost the entire spectrum of agent-based modelling and simulation tools, thereby highlighting the salient features, merits, and shortcomings of such multi-faceted application software; this article covers eighty five agent-based toolkits that may assist the system designers and developers with common tasks, such as constructing agent-based models and portraying the real-time simulation outputs in tabular/graphical formats and visual recordings. (2) Provide a usable reference that aids engineers, researchers, learners and academicians in readily selecting an appropriate agent-based modelling and simulation toolkit for designing and developing their system models and prototypes, cognizant of both their expertise and those requirements of their application domain. In a nutshell, a significant synthesis of Agent Based Modelling and Simulation (ABMS) resources has been performed in this review that stimulates further investigation into this topic. © 2017 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "6e4f71c411a57e3f705dbd0979c118b1", "text": "BACKGROUND\nStress perception is highly subjective, and so the complexity of nursing practice may result in variation between nurses in their identification of sources of stress, especially when the workplace and roles of nurses are changing, as is currently occurring in the United Kingdom health service. 
This could have implications for measures being introduced to address problems of stress in nursing.\n\n\nAIMS\nTo identify nurses' perceptions of workplace stress, consider the potential effectiveness of initiatives to reduce distress, and identify directions for future research.\n\n\nMETHOD\nA literature search from January 1985 to April 2003 was conducted using the key words nursing, stress, distress, stress management, job satisfaction, staff turnover and coping to identify research on sources of stress in adult and child care nursing. Recent (post-1997) United Kingdom Department of Health documents and literature about the views of practitioners was also consulted.\n\n\nFINDINGS\nWorkload, leadership/management style, professional conflict and emotional cost of caring have been the main sources of distress for nurses for many years, but there is disagreement as to the magnitude of their impact. Lack of reward and shiftworking may also now be displacing some of the other issues in order of ranking. Organizational interventions are targeted at most but not all of these sources, and their effectiveness is likely to be limited, at least in the short to medium term. Individuals must be supported better, but this is hindered by lack of understanding of how sources of stress vary between different practice areas, lack of predictive power of assessment tools, and a lack of understanding of how personal and workplace factors interact.\n\n\nCONCLUSIONS\nStress intervention measures should focus on stress prevention for individuals as well as tackling organizational issues. Achieving this will require further comparative studies, and new tools to evaluate the intensity of individual distress.", "title": "" } ]
scidocsrr
2a38e3c505cd31b21cbef4f793fcedf6
An event-triggered finite-time control scheme for unicycle robots
[ { "docid": "da43bbd689050e493dd9b67ea60ad691", "text": "Finite-time stability is defined for equilibria of continuous but non-Lipschitzian autonomous systems. Continuity, Lipschitz continuity, and Hölder continuity of the settling-time function are studied and illustrated with several examples. Lyapunov and converse Lyapunov results involving scalar differential inequalities are given for finite-time stability. It is shown that the regularity properties of the Lyapunov function and those of the settling-time function are related. Consequently, converse Lyapunov results can only assure the existence of continuous Lyapunov functions. Finally, the sensitivity of finite-time-stable systems to perturbations is investigated.", "title": "" } ]
[ { "docid": "5967c7705173ee346b4d47eb7422df20", "text": "A novel learnable dictionary encoding layer is proposed in this paper for end-to-end language identification. It is inline with the conventional GMM i-vector approach both theoretically and practically. We imitate the mechanism of traditional GMM training and Supervector encoding procedure on the top of CNN. The proposed layer can accumulate high-order statistics from variable-length input sequence and generate an utterance level fixed-dimensional vector representation. Unlike the conventional methods, our new approach provides an end-to-end learning framework, where the inherent dictionary are learned directly from the loss function. The dictionaries and the encoding representation for the classifier are learned jointly. The representation is orderless and therefore appropriate for language identification. We conducted a preliminary experiment on NIST LRE07 closed-set task, and the results reveal that our proposed dictionary encoding layer achieves significant error reduction comparing with the simple average pooling.", "title": "" }, { "docid": "faf000b318151222807ac69f2a557afd", "text": "Sentiment analysis or opinion mining is the computational study of people’s opinions, appraisals, and emotions toward entities, events and their attributes. In the past few years, it attracted a great deal of attentions from both academia and industry due to many challenging research problems and a wide range of applications [1]. Opinions are important because whenever we need to make a decision we want to hear others’ opinions. This is not only true for individuals but also true for organizations. However, there was almost no computational study on opinions before the Web because there was little opinionated text available. In the past, when an individual needed to make a decision, he/she typically asked for opinions from friends and families. When an organization wanted to find opinions of the general public about its products and services, it conducted surveys and focus groups. However, with the explosive growth of the social media content on the Web in the past few years, the world has been transformed. People can now post reviews of products at merchant sites and express their views on almost anything in discussion forums and blogs, and at social network sites. Now if one wants to buy a product, one is no longer limited to asking one’s friends and families because there are many user reviews on the Web. For a company, it may no longer need to conduct surveys or focus groups in order to gather consumer opinions about its products and those of its competitors because there is a plenty of such information publicly available.", "title": "" }, { "docid": "96f2e93e188046fa1d97cedc51b07808", "text": "The development of next-generation electrical link technology to support 400Gb/s standards is underway [1-5]. Physical constraints paired to the small area available to dissipate heat, impose limits to the maximum number of serial interfaces and therefore their minimum speed. As such, aggregation of currently available 25Gb/s systems is not an option, and the migration path requires serial interfaces to operate at increased rates. According to CEI-56G and IEEE P802.3bs emerging standards, PAM-4 signaling paired to forward error correction (FEC) schemes is enabling several interconnect applications and low-loss profiles [1]. 
Since the amplitude of each eye is reduced by a factor of 3, while noise power is only halved, a high transmitter (TX) output amplitude is key to preserve high SNR. However, compared to NRZ, the design of a PAM-4 TX is challenged by tight linearity constraints, required to minimize the amplitude distortion among the 4 levels [1]. In principle, current-mode (CM) drivers can deliver a differential peak-to-peak swing up to 4/3(VDD-VOV), but they struggle to generate high-swing PAM-4 levels with the required linearity. This is confirmed by recently published CM PAM-4 drivers, showing limited output swings even with VDD raised to 1.5V [2-4]. Source-series terminated (SST) drivers naturally feature better linearity and represent a valid alternative, but the maximum differential peak-to-peak swing is bounded to VDD only. In [5], a dual-mode SST driver supporting NRZ/PAM-4 was presented, but without FFE for PAM-4 mode. In this paper, we present a PAM-4 transmitter leveraging a hybrid combination of SST and CM driver. The CM part enhances the output swing by 30% beyond the theoretical limit of a conventional SST implementation, while being calibrated to maintain the desired linearity level. A 5b 4-tap FIR filter, where equalization tuning can be controlled independently from output matching, is also embedded. The transmitter, implemented in 28nm CMOS FDSOI, incorporates a half-rate serializer, duty-cycle correction (DCC), ≫2kV HBM ESD diodes, and delivers a full swing of 1.3Vppd at 45Gb/s while drawing 120mA from a 1V supply. The power efficiency is ~2 times better than those compared in this paper.", "title": "" }, { "docid": "5e25a133af30d08844eca800d82379a3", "text": "This study evaluates the effects of ketamine on healthy and schizophrenic volunteers (SVs) in an effort to define the detailed behavioral effects of the drug in a psychosis model. We compared the effects of ketamine on normal and SVs to establish the comparability of their responses and the extent to which normal subjects might be used experimentally as a model. Eighteen normal volunteers (NVs) and 17 SVs participated in ketamine interviews. Some (n = 7 NVs; n = 9 SVs) had four sessions with a 0.1–0.5 mg/kg of ketamine and a placebo; others (n = 11 NVs; n = 8 SVs) had two sessions with one dose of ketamine (0.3 mg/kg) and a placebo. Experienced research clinicians used the BPRS to assess any change in mental status over time and documented the specifics in a timely way. In both volunteer groups, ketamine induced a dose-related, short (<30 min) increase in psychotic symptoms. The scores of NVs increased on both the Brief Psychiatric Rating Scale (BPRS) psychosis subscale (p = .0001) and the BPRS withdrawal subscale (p = .0001), whereas SVs experienced an increase only in positive symptoms (p = .0001). Seventy percent of the patients reported an increase (i.e., exacerbation) of previously experienced positive symptoms. Normal and schizophrenic groups differed only on the BPRS withdrawal score. The magnitude of ketamine-induced changes in positive symptoms was similar, although the psychosis baseline differed, and the dose-response profiles over time were superimposable across the two populations. The similarity between ketamine-induced symptoms in SVs and their own positive symptoms suggests that ketamine provides a unique model of psychosis in human volunteers. 
The data suggest that the phencyclidine (PCP) model of schizophrenia may be a more valid human psychosis/schizophrenia drug model than the amphetamine model, with a broader range of psychotic symptoms. This study indicates that NVs could be used for many informative experimental psychosis studies involving ketamine interviews.", "title": "" }, { "docid": "9fa635dbefeb2d2f49ba56d193ba185d", "text": "Agriculture in developing countries must undergo a significant transformation in order to meet the related challenges of achieving food security and responding to climate change. Projections based on population growth and food consumption patterns indicate that agricultural production will need to increase by at least 70 percent to meet demands by 2050. Most estimates also indicate that climate change is likely to reduce agricultural productivity, production stability and incomes in some areas that already have high levels of food insecurity. Developing climate-smart agriculture is thus crucial to achieving future food security and climate change goals. This paper examines some of the key technical, institutional, policy and financial responses required to achieve this transformation. Building on case studies from the field, the paper outlines a range of practices, approaches and tools aimed at increasing the resilience and productivity of agricultural production systems, while also reducing and removing emissions. The second part of the paper surveys institutional and policy options available to promote the transition to climate-smart agriculture at the smallholder level. Finally, the paper considers current financing gaps and makes innovative suggestions regarding the combined use of different sources, financing mechanisms and delivery systems.", "title": "" }, { "docid": "6ccfe86f2a07dc01f87907855f6cb337", "text": "Historically, retention of distance learners has been problematic, with dropout rates disproportionately high compared to traditional course settings (Richards & Ridley, 1997; Wetzel, Radtke, & Stern, 1994). Dropout rates of 30 to 50% have been common (Moore & Kearsley, 1996).
Students may experience feelings of isolation in distance courses compared to prior faceto-face educational experiences (Shaw & Polovina, 1999). If the distance courses feature limited contact with instructors and fellow students, the result of this isolation can be unfinished courses or degrees (Keegan, 1990). Student satisfaction in traditional learning environments has been overlooked in the past (Astin, 1993; DeBourgh, 1999; Navarro & Shoemaker, 2000). Student satisfaction has also not been given the proper attention in distance learning environments (Biner, Dean, & Mellinger, 1994). Richards and Ridley (1997) suggested further research is necessary to study factors affecting student enrollment and satisfaction. Prior studies in classroom-based courses have shown there is a high correlation between student satisfaction and retention (Astin, 1993; Edwards & Waters, 1982). This high correlation has also been found in studies in which distance learners were the target population (Bailey, Bauman, & Lata, 1998). The purpose of this study was to identify factors influencing student satisfaction in online courses, and to create and validate an instrument to measure student satisfaction in online courses.", "title": "" }, { "docid": "45be2fbf427a3ea954a61cfd5150db90", "text": "Linguistic style conveys the social context in which communication occurs and defines particular ways of using language to engage with the audiences to which the text is accessible. In this work, we are interested in the task of stylistic transfer in natural language generation (NLG) systems, which could have applications in the dissemination of knowledge across styles, automatic summarization and author obfuscation. The main challenges in this task involve the lack of parallel training data and the difficulty in using stylistic features to control generation. To address these challenges, we plan to investigate neural network approaches to NLG to automatically learn and incorporate stylistic features in the process of language generation. We identify several evaluation criteria, and propose manual and automatic evaluation approaches.", "title": "" }, { "docid": "6d3a9c8df2ef344f0ba80980158f4ff3", "text": "Die Begriffe Digitalisierung und Industrie 4.0 werden gegenwärtig mit unterschiedlichen Inhalten besetzt. Das mag ein Indiz sein, dass ein v. a. im deutschsprachigen Raum sehr begrüßenswerter Trend durch Modewellen überlagert wird. Es könnte wie oft in der Informatik und verwandten Disziplinen das Verlaufsmuster Gartner Hype Cycle mit den Phasen ,,zögernder Beginn“, ,,steiler Anstieg mit übertriebenen Versprechungen“, ,,hoher Gipfel“, ,,Abfall in ein Tal“, ,,öffentlich wenig beachtete Weiterarbeit am Detail“, ,,allmählicher Wiederaufstieg“, ,,Einmündung in einen langfristigen Trend“ beobachtet werden. Dieser Verlauf impliziert Ressourcenvergeudung. Wir erörtern retardierende Momente, wie z. B. unterschiedliche Kulturen zwischen der Entwicklung von Software und Maschinen oder Probleme mit dem Lebenszyklus von individualisierten Erzeugnissen, die dazu führen könnten, dass man nach dem Gipfel in ein ,,Tal der Enttäuschungen“ stürzt.", "title": "" }, { "docid": "ac0e5d2b50462a15928556bee7f8548e", "text": "The concept of “truth,” as a public good is the production of a collective understanding, which emerges from a complex network of social interactions. 
The recent impact of social networks on shaping the perception of truth in political arena shows how such perception is corroborated and established by the online users, collectively. However, investigative journalism for discovering truth is a costly option, given the vast spectrum of online information. In some cases, both journalist and online users choose not to investigate the authenticity of the news they receive, because they assume other actors of the network had carried the cost of validation. Therefore, the new phenomenon of “fake news” has emerged within the context of social networks. The online social networks, similarly to System of Systems, cause emergent properties, which makes authentication processes difficult, given availability of multiple sources. In this study, we show how this conflict can be modeled as a volunteer's dilemma. We also show how the public contribution through news subscription (shared rewards) can impact the dominance of truth over fake news in the network.", "title": "" }, { "docid": "1f6bf9c06b7ee774bc08848293b5c94a", "text": "The success of a virtual learning environment (VLE) depends to a considerable extent on student acceptance and use of such an e-learning system. After critically assessing models of technology adoption, including the Technology Acceptance Model (TAM), TAM2, and the Unified Theory of Acceptance and Usage of Technology (UTAUT), we build a conceptual model to explain the differences between individual students in the level of acceptance and use of a VLE. This model extends TAM2 and includes subjective norm, personal innovativeness in the domain of information technology, and computer anxiety. Data were collected from 45 Chinese participants in an Executive MBA program. After performing satisfactory reliability and validity checks, the structural model was tested with the use of PLS. Results indicate that perceived usefulness has a direct effect on VLE use. Perceived ease of use and subjective norm have only indirect effects via perceived usefulness. Both personal innovativeness and computer anxiety have direct effects on perceived ease of use only. Implications are that program managers in education should not only concern themselves with basic system design but also explicitly address individual differences between VLE users. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "979e01842f7572a0ec45dadb6f1c2f86", "text": "With the rapid development of the credit industry, credit scoring models has become a very important issue in the credit industry. Many credit scoring models based on machine learning have been widely used. Such as artificial neural network (ANN), rough set, support vector machine (SVM), and other innovative credit scoring models. However, in practical applications, a large amount of irrelevant and redundant features in the credit data, which leads to higher computational complexity and lower prediction accuracy. So, the face of a large number of credit data, effective feature selection method is necessary. In this paper, we propose a novel credit scoring model, called NCSM, based on feature selection and grid search to optimize random forest algorithm. The model reduces the influence of the irrelevant and redundant features and to get the higher prediction accuracy. In NCSM, the information entropy is regarded as the heuristic to select the optimal feature. Two credit data sets in UCI database are used as experimental data to demonstrate the accuracy of the NCSM. 
Compared with linear SVM, CART, MLP, H2O RF models, the experimental result shows that NCSM has a superior performance in improving the prediction accuracy.", "title": "" }, { "docid": "ee639ccb2ac943783a9404cf4ec3583c", "text": "DOI reference number: 10.18293/SEKE2016-040 Abstract—Activity recognition has been widely studied in ubiquitous computing since it can be used in several application domains, such as fall detection and gesture recognition. Initially, works in this area were based on research-only devices (bodyworn sensors). However, with advances in mobile computing, current research focuses on mobile devices, mainly, smartphones. These devices provide Internet access, processing, and various sensors, such as accelerometer and gyroscope, which are useful resources for activity recognition. Therefore, many studies use smartphones as data source. Additionally, some works have already considered the use of wristbands and specially-designed watches, but fewer investigate the latest marketable wearable devices, such as smartwatches, which are less intrusive and can provide new opportunities to complement smartphone data. Moreover, for the best of our knowledge, no previous work experimentally evaluates the impact caused by the combination of sensor data from smartwatches and smartphones on the accuracy of activity recognition approaches. Therefore, the main goal of this experimental evaluation is to compare the use of data from smartphones as well as the combination of data from smartphones and smartwatches for activity recognition. We evidenced that the use of smartphone and smartwatch data combined can increase the accuracy of activity recognition.", "title": "" }, { "docid": "fde101a0604eaa703979c56aa3ab8e93", "text": "Community Question Answering (cQA) forums have become a popular medium for soliciting direct answers to specific questions of users from experts or other experienced users on a given topic. However, for a given question, users sometimes have to sift through a large number of low-quality or irrelevant answers to find out the answer which satisfies their information need. To alleviate this, the problem of Answer Quality Prediction (AQP) aims to predict the quality of an answer posted in response to a forum question. Current AQP systems either learn models using a) various hand-crafted features (HCF) or b) use deep learning (DL) techniques which automatically learn the required feature representations. In this paper, we propose a novel approach for AQP known as -“Deep Feature Fusion Network (DFFN)”which leverages the advantages of both hand-crafted features and deep learning based systems. Given a question-answer pair along with its metadata, DFFN independently a) learns deep features using a Convolutional Neural Network (CNN) and b) computes hand-crafted features using various external resources and then combines them using a deep neural network trained to predict the final answer quality. DFFN achieves stateof-the-art performance on the standard SemEval-2015 and SemEval-2016 benchmark datasets and outperforms baseline approaches which individually employ either HCF or DL based techniques alone.", "title": "" }, { "docid": "072b36d53de6a1a1419b97a1503f8ecd", "text": "In classical control of brushless dc (BLDC) motors, flux distribution is assumed trapezoidal and fed current is controlled rectangular to obtain a desired constant torque. However, in reality, this assumption may not always be correct, due to nonuniformity of magnetic material and design trade-offs. 
These factors, together with current controller limitation, can lead to an undesirable torque ripple. This paper proposes a new torque control method to attenuate torque ripple of BLDC motors with un-ideal back electromotive force (EMF) waveforms. In this method, the action time of pulses, which are used to control the corresponding switches, are calculated in the torque controller regarding actual back EMF waveforms in both normal conduction period and commutation period. Moreover, the influence of finite dc bus supply voltage is considered in the commutation period. Simulation and experimental results are shown that, compared with conventional rectangular current control, the proposed torque control method results in apparent reduction of the torque ripple.", "title": "" }, { "docid": "561490cad4ecb94956221958fffcf00d", "text": "In this paper, we present a hidden Markov model (HMM) approach to segment meeting transcripts into topics. To learn the model, we use unsupervised learning to cluster the text segments obtained from topic boundary information. Using modified WinDiff and Pk metrics, we demonstrate that an HMM outperforms LCSeg, a state-of-the-art lexical chain based method for topic segmentation using the ICSI meeting corpus. We evaluate the effect of language model order, the number of hidden states, and the use of stop words. Our experimental results show that a unigram LM is better than a trigram LM, using too many hidden states degrades topic segmentation performance, and that removing the stop words from the transcripts does not improve segmentation performance.", "title": "" }, { "docid": "7e1df3fd563009c356c8a1620b96a232", "text": "This research investigates the large hype surrounding big data (BD) and Analytics (BDA) in both academia and the business world. Initial insights pointed to large and complex amalgamations of different fields, techniques and tools. Above all, BD as a research field and as a business tool found to be under developing and is fraught with many challenges. The intention here in this research is to develop an adoption model of BD that could detect key success predictors. The research finds a great interest and optimism about BD value that fueled this current buzz behind this novel phenomenon. Like any disruptive innovation, its assimilation in organizations oppressed with many challenges at various contextual levels. BD would provide different advantages to organizations that would seriously consider all its perspectives alongside its lifecycle in the pre-adoption or adoption or implementation phases. The research attempts to delineate the different facets of BD as a technology and as a management tool highlighting different contributions, implications and recommendations. This is of great interest to researchers, professional and policy makers.", "title": "" }, { "docid": "bcbbc8913330378af7c986549ab4bb30", "text": "Anomaly detection involves identifying the events which do not conform to an expected pattern in data. A common approach to anomaly detection is to identify outliers in a latent space learned from data. For instance, PCA has been successfully used for anomaly detection. Variational autoencoder (VAE) is a recently-developed deep generative model which has established itself as a powerful method for learning representation from data in a nonlinear way. However, the VAE does not take the temporal dependence in data into account, so it limits its applicability to time series. 
In this paper we combine the echo-state network, which is a simple training method for recurrent networks, with the VAE, in order to learn representation from multivariate time series data. We present an echo-state conditional variational autoencoder (ES-CVAE) and demonstrate its useful behavior in the task of anomaly detection in multivariate time series data.", "title": "" }, { "docid": "66acaa4909502a8d7213366e0667c3c2", "text": "Facial rejuvenation, particularly lip augmentation, has gained widespread popularity. An appreciation of perioral anatomy as well as the structural characteristics that define the aging face is critical to achieve optimal patient outcomes. Although techniques and technology evolve continuously, hyaluronic acid (HA) dermal fillers continue to dominate aesthetic practice. A combination approach including neurotoxin and volume restoration demonstrates superior results in select settings.", "title": "" }, { "docid": "225204d66c371372debb3bb2a37c795b", "text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.", "title": "" }, { "docid": "e2ca275e334d1eca7bd901851aba0ac5", "text": "Males of the desert beetle Parastizopus armaticeps (Pér.) (Coleoptera: Tenebrionidae) exhibit a characteristic calling behavior that attracts females by raising the tip of the abdomen, exposing the aedeagus, and remaining in this posture for a few seconds while emitting a pheromone. We collected the pheromone by holding a solid phase microextraction fiber (100 μm polydimethylsiloxane) close to the aedeagus for 5 s and analyzed the volatiles collected by gas chromatography/mass spectrometry. The volatiles consisted of 3-methylphenol (52%), ethyl-1,4-benzoquinone (48%), and 3-ethylphenol (2%). The pheromone originated from the aedeagal glands. In the gland reservoirs, these compounds (2.1%) were mixed with ethyl, isopropyl, and propyl esters of fatty acids (24.2%), and a mixture of hydrocarbons (69.1%). The mean amount of volatiles extracted from gland reservoirs was 0.92 ± 0.83 μg. Chemo-orientation experiments with a servosphere show that females responded only to the ternary volatile mixture. Females stopped walking, elevated the front parts of their bodies with erected antennae, turned slowly on their own axis, and walked upwind toward the odor source. Single components or binary mixtures did not elicit responses from females. 
Males did not respond to the pheromone. Evolutionary aspects of this pheromone system are discussed.", "title": "" } ]
scidocsrr
8cdd82960583680238e38a077796af3a
SilhoNet: An RGB Method for 3D Object Pose Estimation and Grasp Planning
[ { "docid": "5c4a81dd06b5c80ba7c32a9ac1673a4f", "text": "We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors. We build upon, and add several new contributions to the significant prior work in these areas. A salient feature of our work is the tight coupling between perception (both visual and tactile) and manipulation, aiming to address the uncertainty due to sensor and execution errors. This integration effort has revealed new challenges, some of which can be addressed through system and software engineering, and some of which present opportunities for future research. Our approach is aimed at typical indoor environments, and is validated by long running experiments where the PR2 robotic platform was able to consistently grasp a large variety of known and unknown objects. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS).", "title": "" }, { "docid": "29d2a613f7da6b99e35eb890d590f4ca", "text": "Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physicallybased augmentation pipeline to vary sensor effects – specifically, chromatic aberration, blur, exposure, noise, and color cast – across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.", "title": "" }, { "docid": "7fe82f7231235ce6d4b16ec103130156", "text": "Autonomous grasping of household objects is one of the major skills that an intelligent service robot necessarily has to provide in order to interact with the environment. In this paper, we propose a grasping strategy for known objects, comprising an off-line, box-based grasp generation technique on 3D shape representations. The complete system is able to robustly detect an object and estimate its pose, flexibly generate grasp hypotheses from the assigned model and perform such hypotheses using visual servoing. We will present experiments implemented on the humanoid platform ARMAR-III.", "title": "" }, { "docid": "9dd245f75092adc8d8bb2b151275789b", "text": "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. 
In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.", "title": "" } ]
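To make the "18-way binary classification over image patches" formulation described above concrete, the sketch below shows one possible way to set it up in PyTorch. The architecture, patch size, the assumption of 18 angle bins of 10 degrees each, and all names are illustrative placeholders rather than the authors' actual network; what it demonstrates is that each training sample supervises only the logit of the gripper angle that was actually tried on the robot.

import torch
import torch.nn as nn

class GraspAngleNet(nn.Module):
    """Toy CNN mapping an image patch to 18 graspability logits,
    one per discretized gripper angle (assumed 10-degree bins)."""
    def __init__(self, num_angles: int = 18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_angles)

    def forward(self, patch):
        return self.classifier(self.features(patch).flatten(1))

# One illustrative training step: each sample is (patch, tried_angle_bin, success),
# and the binary loss is applied only to the logit of the angle that was executed.
model = GraspAngleNet()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

patches = torch.randn(8, 3, 64, 64)          # stand-ins for real image crops
angle_bins = torch.randint(0, 18, (8,))      # which of the 18 angles was tried
success = torch.randint(0, 2, (8,)).float()  # did that grasp attempt succeed?

optimizer.zero_grad()
logits = model(patches)                       # shape (8, 18)
tried_logits = logits[torch.arange(8), angle_bins]
loss = criterion(tried_logits, success)
loss.backward()
optimizer.step()

The multi-stage hard-negative mining the passage mentions would change only which (patch, angle, label) triples are fed into this step, not the step itself.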
[ { "docid": "82c0292aa7717aaef617927eb83e07bd", "text": "Deutsch, Feynman, and Manin viewed quantum computing as a kind of universal physical simulation procedure. Much of the writing about quantum Turing machines has shown how these machines can simulate an arbitrary unitary transformation on a finite number of qubits. This interesting problem has been addressed most famously in a paper by Deutsch, and later by Bernstein and Vazirani. Quantum Turing machines form a class closely related to deterministic and probabilistic Turing machines and one might hope to find a universal machine in this class. A universal machine is the basis of a notion of programmability. The extent to which universality has in fact been established by the pioneers in the field is examined and a key notion in theoretical computer science (universality) is scrutinised. In a forthcoming paper, the authors will also consider universality in the quantum gate model.", "title": "" }, { "docid": "35eb5c51ff22ae0c350e5fc4eb8faa43", "text": "We propose gradient adversarial training, an auxiliary deep learning framework applicable to different machine learning problems. In gradient adversarial training, we leverage a prior belief that in many contexts, simultaneous gradient updates should be statistically indistinguishable from each other. We enforce this consistency using an auxiliary network that classifies the origin of the gradient tensor, and the main network serves as an adversary to the auxiliary network in addition to performing standard task-based training. We demonstrate gradient adversarial training for three different scenarios: (1) as a defense to adversarial examples we classify gradient tensors and tune them to be agnostic to the class of their corresponding example, (2) for knowledge distillation, we do binary classification of gradient tensors derived from the student or teacher network and tune the student gradient tensor to mimic the teacher’s gradient tensor; and (3) for multi-task learning we classify the gradient tensors derived from different task loss functions and tune them to be statistically indistinguishable. For each of the three scenarios we show the potential of gradient adversarial training procedure. Specifically, gradient adversarial training increases the robustness of a network to adversarial attacks, is able to better distill the knowledge from a teacher network to a student network compared to soft targets, and boosts multi-task learning by aligning the gradient tensors derived from the task specific loss functions. Overall, our experiments demonstrate that gradient tensors contain latent information about whatever tasks are being trained, and can support diverse machine learning problems when intelligently guided through adversarialization using a auxiliary network.", "title": "" }, { "docid": "c608e8eca5f584f9da999b7d39de1fea", "text": "In this paper, we propose a novel approach to discriminate malignant melanomas and benign atypical nevi, since both types of melanocytic skin lesions have very similar characteristics. Recent studies involving the non-invasive diagnosis of melanoma indicate that the concentrations of the two main classes of melanin present in the human skin, eumelanin and pheomelanin, can potentially be used in the computation of relevant features to differentiate these lesions. So, we describe how these features can be estimated using only standard camera images. 
Moreover, we demonstrate that using these features in conjunction with features based on the well known ABCD rule, it is possible to achieve 100% of sensitivity and more than 99% accuracy in melanocytic skin lesion discrimination, which is a highly desirable characteristic in a prescreening system. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6e76496dbe78bd7ffa9359a41dc91e69", "text": "US Supreme Court rulings concerning sanctions for juvenile offenders have drawn on the science of brain development and concluded that adolescents are inherently less mature than adults in ways that render them less culpable. This conclusion departs from arguments made in cases involving the mature minor doctrine, in which teenagers have been portrayed as comparable to adults in their capacity to make medical decisions. I attempt to reconcile these apparently incompatible views of adolescents' decision-making competence. Adolescents are indeed less mature than adults when making decisions under conditions that are characterized by emotional arousal and peer pressure, but adolescents aged 15 and older are just as mature as adults when emotional arousal is minimized and when they are not under the influence of peers, conditions that typically characterize medical decision-making. The mature minor doctrine, as applied to individuals 15 and older, is thus consistent with recent research on adolescent development.", "title": "" }, { "docid": "35670547246a3cf3f41c03a5d78db5eb", "text": "Distillation has remained an important separation technology for the chemical process industries. In 1997 it was reported in the journal Chemical Engineering that about 95% of all worldwide separation processes use this technology. In the USA alone, some 40 000 distillation columns represent a capital investment of about US $8 billion. They consume the energy equivalent of approximately 1 billion barrels of crude oil per day. Such columns are used in reRneries, petrochemical plants, gas processing plants and organic chemical plants to purify natural gas, improve gasoline, produce petrochemicals and organic products, recover pollulant species, etc. Distillation can be carried out in a tray or a packed column. The major considerations involved in the choice of the column type are operating pressure and design reliability. As pressure increases, tray coulmns become more efRcient for mass transfer and can often tolerate the pressure drop across the trays. The design procedure for the large diameter tray column is also more reliable than that for the packed column. Thus, trays are usually selected for large pressurized column applications. Distillation trays can be classiRed as:", "title": "" }, { "docid": "14e2371584228563eec36d83d712d14d", "text": "In this paper, we present a novel two-layer video representation for human action recognition employing hierarchical group sparse encoding technique and spatio-temporal structure. In the first layer, a new sparse encoding method named locally consistent group sparse coding (LCGSC) is proposed to make full use of motion and appearance information of local features. LCGSC method not only encodes global layouts of features within the same video-level groups, but also captures local correlations between them, which obtains expressive sparse representations of video sequences. 
Meanwhile, two kinds of efficient location estimation models, namely an absolute location model and a relative location model, are developed to incorporate spatio-temporal structure into LCGSC representations. In the second layer, action-level group is established, where a hierarchical LCGSC encoding scheme is applied to describe videos at different levels of abstractions. On the one hand, the new layer captures higher order dependency between video sequences; on the other hand, it takes label information into consideration to improve discrimination of videos’ representations. The superiorities of our hierarchical framework are demonstrated on several challenging datasets.", "title": "" }, { "docid": "e2867713be67291ee8c25afa3e2d1319", "text": "In recent years the <i>l</i><sub>1</sub>, <sub>∞</sub> norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the <i>l</i><sub>1</sub> framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of <i>l</i><sub>1</sub>, <sub>∞</sub> regularized problems. The main challenge in developing such a method resides on being able to compute efficient projections to the <i>l</i><sub>1</sub>, <sub>∞</sub> ball. We present an algorithm that works in <i>O</i>(<i>n</i> log <i>n</i>) time and <i>O</i>(<i>n</i>) memory where <i>n</i> is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that <i>l</i><sub>1</sub>, <sub>∞</sub> leads to better performance than both <i>l</i><sub>2</sub> and <i>l</i><sub>1</sub> regularization and that it is is effective in discovering jointly sparse solutions.", "title": "" }, { "docid": "476e612f4124fc5e9f391e2fa4a49a3b", "text": "Debugging data processing logic in Data-Intensive Scalable Computing (DISC) systems is a difficult and time consuming effort. Today's DISC systems offer very little tooling for debugging programs, and as a result programmers spend countless hours collecting evidence (e.g., from log files) and performing trial and error debugging. To aid this effort, we built Titian, a library that enables data provenance-tracking data through transformations-in Apache Spark. Data scientists using the Titian Spark extension will be able to quickly identify the input data at the root cause of a potential bug or outlier result. Titian is built directly into the Spark platform and offers data provenance support at interactive speeds-orders-of-magnitude faster than alternative solutions-while minimally impacting Spark job performance; observed overheads for capturing data lineage rarely exceed 30% above the baseline job execution time.", "title": "" }, { "docid": "68c3b039e9b05eef878de3cdc2e992ef", "text": "Genitourinary rhabdomyosarcoma in females usually originates in the vagina or uterus, but rarely the vulva. The authors present a case of rhabdomyosarcoma originating in the clitoris. A 4-year-old with an alveolar rhabdomyosarcoma of the clitoris was treated with radical clitorectomy, radiation, and chemotherapy. Follow-up at 3 years showed no active disease.", "title": "" }, { "docid": "3f40c24a8098fd0a06ef772f2d7d9e2f", "text": "Knowing how hands move and what object is being manipulated are two key sub-tasks for analyzing first-person (egocentric) action. However, lack of fully annotated hand data as well as imprecise foreground segmentation make either sub-task challenging. 
This work aims to explicitly ad dress these two issues via introducing a cascaded interactional targeting (i.e., infer both hand and active object regions) deep neural network. Firstly, a novel EM-like learning framework is proposed to train the pixel-level deep convolutional neural network (DCNN) by seamlessly integrating weakly supervised data (i.e., massive bounding box annotations) with a small set of strongly supervised data (i.e., fully annotated hand segmentation maps) to achieve state-of-the-art hand segmentation performance. Secondly, the resulting high-quality hand segmentation maps are further paired with the corresponding motion maps and object feature maps, in order to explore the contextual information among object, motion and hand to generate interactional foreground regions (operated objects). The resulting interactional target maps (hand + active object) from our cascaded DCNN are further utilized to form discriminative action representation. Experiments show that our framework has achieved the state-of-the-art egocentric action recognition performance on the benchmark dataset Activities of Daily Living (ADL).", "title": "" }, { "docid": "5c9124859874e20cd8f6f7b79aeecf4d", "text": "Earl Stevick has always been interested in improving language teaching methodology, and he has never been afraid of innovation. His seminal work, Teaching Languages: A Way and Ways (Stevick 1980), introduced many of us to Counselling-Learning and Suggestopedia for the first time, and in Memory, Meaning and Method: A View of Language Teaching. (Stevick 1996) he discussed a wide range of theoretical and practical considerations to help us better understand the intricate cognitive and interpersonal processes whereby a language is acquired and then used for meaningful communication. The proposal in this chapter to revitalize Communicative Language Teaching (CLT) in the light of contemporary scholarly advances is fully within the spirit of Earl's approach.' By the turn of the new millennium, CLT had become a real buzzword in language teaching methodology, but the extent to which the term covers a well-defined and uniform teaching method is highly questionable. In fact, since the genesis of CLT in the early 1970s, its proponents have developed a very wide range of variants that were only loosely related to each other (for overviews, see Savignon 2005; Spada 2007). In this chapter I first look at the core characteristics of CLT to explore the roots of the diverse interpretations and then argue that in order for CLT to fulfil all the expectations attached to it in the twenty-first century, the method needs to be revised according to the latest findings of psycholinguistic research. I will conclude the chapter by outlining the main principles of a proposed revised approach that I have termed the `Principled Communicative Approach' (PCA).", "title": "" }, { "docid": "7ff1f129b7cdfd32a159cc426c59a2d1", "text": "In numerous applications, forecasting relies on numerical solvers for partial differential equations (PDEs). Although the use of deep-learning techniques has been proposed, the uses have been restricted by the fact the training data are obtained using PDE solvers. Thereby, the uses were limited to domains, where the PDE solver was applicable, but no further. We present methods for training on small domains, while applying the trained models on larger domains, with consistency constraints ensuring the solutions are physically meaningful even at the boundary of the small domains. 
We demonstrate the results on an air-pollution forecasting model for Dublin, Ireland.", "title": "" }, { "docid": "28d8cad6fda1f1345b9905e71495e745", "text": "To provide COSMOS, a dynamic model based manipulator control system, with an improved dynamic model, a PUMA 560 arm was disassembled; the inertial properties of the individual links were measured; and an explicit model incorporating all of the non-zero measured parameters was derived. The explicit model of the PUMA arm has been obtained with a derivation procedure comprised of several heuristic rules for simplification. A simplified model, abbreviated from the full explicit model with a 1% significance criterion, can be evaluated with 305 calculations, one fifth the number required by the recursive Newton-Euler method. The procedure used to derive the model is laid out; the measured inertial parameters are presented, and the model is included in an appendix.", "title": "" }, { "docid": "8bbff097ecdf6ede66bf13c985501fd4", "text": "In this paper, we present a practical algorithm for calibrating a magnetometer for the presence of magnetic disturbances and for magnetometer sensor errors. To allow for combining the magnetometer measurements with inertial measurements for orientation estimation, the algorithm also corrects for misalignment between the magnetometer and the inertial sensor axes. The calibration algorithm is formulated as the solution to a maximum likelihood problem, and the computations are performed offline. The algorithm is shown to give good results using data from two different commercially available sensor units. Using the calibrated magnetometer measurements in combination with the inertial sensors to determine the sensor’s orientation is shown to lead to significantly improved heading estimates.", "title": "" }, { "docid": "1bc285b8bd63e701a55cf956179abbac", "text": "A new anode/cathode design and process concept for thin wafer based silicon devices is proposed to achieve the goal of providing improved control for activating the injecting layer and forming a good ohmic contact. The concept is based on laser annealing in a melting regime of a p-type anode layer covered with a thin titanium layer with high melting temperature and high laser light absorption. The improved activation control of a boron anode layer is demonstrated on the Soft Punch Through IGBT with a nominal breakdown voltage of 1700 V. Furthermore, the silicidation of the titanium absorbing layer, which is necessary for achieving a low VCE ON, is discussed in terms of optimization of the device electrical parameters.", "title": "" }, { "docid": "722a2b6f773473d032d202ce7aded43c", "text": "Detection of skin cancer at an early stage is very important and critical. Skin cancer is now seen as one of the most hazardous forms of cancer found in humans. It occurs in various types, such as melanoma, basal cell and squamous cell carcinoma, among which melanoma is the most unpredictable. Detecting melanoma at an early stage is helpful for curing it. Computer vision can play an important role in medical image diagnosis, as many existing systems have proved. In this paper, we present a computer-aided method for the detection of melanoma skin cancer using image processing tools. The input to the system is a skin lesion image, which is analysed with novel image processing techniques to conclude whether skin cancer is present.
The lesion image analysis tool checks for the various melanoma parameters, such as Asymmetry, Border, Colour and Diameter (ABCD), using texture, size and shape analysis in the image segmentation and feature extraction stages. The extracted feature parameters are used to classify the image as normal skin or a melanoma cancer lesion.", "title": "" }, { "docid": "2502fc02f09be72d138275a7ac41d8bc", "text": "This manual describes the competition software for the Simulated Car Racing Championship, an international competition held at major conferences in the field of Evolutionary Computation and in the field of Computational Intelligence and Games. It provides an overview of the architecture, the instructions to install the software and to run the simple drivers provided in the package, the description of the sensors and the actuators.", "title": "" }, { "docid": "e9af5e2bfc36dd709ae6feefc4c38976", "text": "Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy, and optimization function. In this paper, we provide a review of deep learning-based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely, the convolutional neural network. Then, we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network-based learning systems.", "title": "" }, { "docid": "e099acc0e4666f04f6a0405e305b7adb", "text": "Organisations in all sectors are operating in a highly competitive environment which requires that these institutions retain their core employees in order to gain and retain competitive advantage. Because of globalisation and new methods of management, different organisations have experienced competition both locally and globally in terms of market and staff. The role of leaders in employee retention is critical since their leadership styles impact directly on the employees’ feelings about the organization. The paper sought to find out the influence of leadership style on staff retention in organisations. The study was purely based on a literature review.
From the review of several empirical studies, it was established that leadership style significantly influences staff members' intention to leave, and hence there is a need to embrace a leadership style that promotes staff retention.", "title": "" }, { "docid": "c74bd8c04af6a63b73a30bf9637b5a2a", "text": "Complex regional pain syndrome (CRPS) is a debilitating condition affecting the limbs that can be induced by surgery or trauma. This condition can complicate recovery and impair one's functional and psychological well-being. The wide variety of terminology loosely used to describe CRPS in the past has led to misdiagnosis of this condition, resulting in a poor evidence base regarding the treatment modalities available and their impact. The aim of this review is to report on the recent progress in the understanding of the epidemiology, pathophysiology and treatment of CRPS and to discuss novel approaches in treating this condition.", "title": "" } ]
scidocsrr
ee024c6bad265264a90b327e8f41191b
Twevent: segment-based event detection from tweets
[ { "docid": "289a8d4cc1535b9ec07d85127f6096cd", "text": "Automated tracking of events from chronologically ordered document streams is a new challenge for statistical text classification. Existing learning techniques must be adapted or improved in order to effectively handle difficult situations where the number of positive training instances per event is extremely small, the majority of training documents are unlabelled, and most of the events have a short duration in time. We adapted several supervised text categorization methods, specifically several new variants of the k-Nearest Neighbor (kNN) algorithm and a Rocchio approach, to track events. All of these methods showed significant improvement (up to 71% reduction in weighted error rates) over the performance of the original kNN algorithm on TDT benchmark collections, making kNN among the top-performing systems in the recent TDT3 official evaluation. Furthermore, by combining these methods, we significantly reduced the variance in performance of our event tracking system over different data collections, suggesting a robust solution for parameter optimization.", "title": "" } ]
[ { "docid": "3776b7fdcd1460b60a18c87cd60b639e", "text": "A sketch is a probabilistic data structure that is used to record frequencies of items in a multi-set. Various types of sketches have been proposed in literature and applied in a variety of fields, such as data stream processing, natural language processing, distributed data sets etc. While several variants of sketches have been proposed in the past, existing sketches still have a significant room for improvement in terms of accuracy. In this paper, we propose a new sketch, called Slim-Fat (SF) sketch, which has a significantly higher accuracy compared to prior art, a much smaller memory footprint, and at the same time achieves the same speed as the best prior sketch. The key idea behind our proposed SF-sketch is to maintain two separate sketches: a small sketch called Slim-subsketch and a large sketch called Fat-subsketch. The Slim-subsketch, stored in the fast memory (SRAM), enables fast and accurate querying. The Fat-subsketch, stored in the relatively slow memory (DRAM), is used to assist the insertion and deletion from Slim-subsketch. We implemented and extensively evaluated SF-sketch along with several prior sketches and compared them side by side. Our experimental results show that SF-sketch outperforms the most commonly used CM-sketch by up to 33.1 times in terms of accuracy.", "title": "" }, { "docid": "f6647e82741dfe023ee5159bd6ac5be9", "text": "3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of a scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on real world and synthetic RGB-D videos demonstrate the superior performance of our method.", "title": "" }, { "docid": "491e12f503c982c4beef081d82c4575b", "text": "Article history: This study investigates hows Received 25 December 2007 Received in revised form 28 January 2009 Accepted 9 February 2009 Available online 21 February 2009", "title": "" }, { "docid": "6ad8da8198b1f61dfe0dc337781322d9", "text": "A model of human speech quality perception has been developed to provide an objective measure for predicting subjective quality assessments. The Virtual Speech Quality Objective Listener (ViSQOL) model is a signal based full reference metric that uses a spectro-temporal measure of similarity between a reference and a test speech signal. This paper describes the algorithm and compares the results with PESQ for common problems in VoIP: clock drift, associated time warping and jitter. The results indicate that ViSQOL is less prone to underestimation of speech quality in both scenarios than the ITU standard.", "title": "" }, { "docid": "46baa51f8c36c9d913bc9ece46aa1919", "text": "Radio frequency identification (RFID) has been identified as a crucial technology for the modern 21 st century knowledge-based economy. Many businesses started realising RFID to be able to improve their operational efficiency, achieve additional cost savings, and generate opportunities for higher revenues. 
To investigate how RFID technology has brought an impact to warehousing, a comprehensive analysis of research findings available through leading scientific article databases was conducted. Articles from years 1995 to 2010 were reviewed and analysed according to warehouse operations, RFID application domains, and benefits achieved. This paper presents four discussion topics covering RFID innovation, including its applications, perceived benefits, obstacles to its adoption and future trends. This is aimed at elucidating the current state of RFID in the warehouse and giving insights for the academics to establish new research scope and for the practitioners to evaluate their assessment of adopting RFID in the warehouse.", "title": "" }, { "docid": "7f70eb577d9f76b95222377e2ad0bf4c", "text": "Designing high-performance and scalable applications on GPU clusters requires tackling several challenges. The key challenge is the separate host memory and device memory, which requires programmers to use multiple programming models, such as CUDA and MPI, to operate on data in different memory spaces. This challenge becomes more difficult to tackle when non-contiguous data in multidimensional structures is used by real-world applications. These challenges limit the programming productivity and the application performance. We propose the GPU-Aware MPI to support data communication from GPU to GPU using standard MPI. It unifies the separate memory spaces, and avoids explicit CPU-GPU data movement and CPU/GPU buffer management. It supports all MPI datatypes on device memory with two algorithms: a GPU datatype vectorization algorithm and a vector based GPU kernel data pack and unpack algorithm. A pipeline is designed to overlap the non-contiguous data packing and unpacking on GPUs, the data movement on the PCIe, and the RDMA data transfer on the network. We incorporate our design with the open-source MPI library MVAPICH2 and optimize a production application: the multiphase 3D LBM. Besides the increase of programming productivity, we observe up to 19.9 percent improvement in application-level performance on 64 GPUs of the Oakley supercomputer.", "title": "" }, { "docid": "c7d6e273065ce5ca82cd55f0ba5937cd", "text": "Many environmental and socioeconomic time–series data can be adequately modeled using Auto-Regressive Integrated Moving Average (ARIMA) models. We call such time–series ARIMA time–series. We consider the problem of clustering ARIMA time–series. We propose the use of the Linear Predictive Coding (LPC) cepstrum of time–series for clustering ARIMA time–series, by using the Euclidean distance between the LPC cepstra of two time–series as their dissimilarity measure. We demonstrate that LPC cepstral coefficients have the desired features for accurate clustering and efficient indexing of ARIMA time–series. For example, few LPC cepstral coefficients are sufficient in order to discriminate between time–series that are modeled by different ARIMA models. In fact this approach requires fewer coefficients than traditional approaches, such as DFT and DWT. The proposed distance measure can be used for measuring the similarity between different ARIMA models as well. We cluster ARIMA time–series using the Partition Around Medoids method with various similarity measures. 
We present experimental results demonstrating that using the proposed measure we achieve significantly better clusterings of ARIMA time–series data as compared to clusterings obtained by using other traditional similarity measures, such as DFT, DWT, PCA, etc. Experiments were performed both on simulated as well as real data.", "title": "" }, { "docid": "8b4fbc7fd8f41200731562a92a0c80ce", "text": "The problem of recognizing mathematical expressions differs significantly from the recognition of standard prose. While in prose significant constraints can be put on the interpretation of a character by the characters immediately preceding and following it, few such simple constraints are present in a mathematical expression. In order to make the problem tractable, effective methods of recognizing mathematical expressions will need to put intelligent constraints on the possible interpretations. The authors present preliminary results on a system for the recognition of both handwritten and typeset mathematical expressions. While previous systems perform character recognition out of context, the current system maintains ambiguity of the characters until context can be used to disambiguate the interpretatiom In addition, the system limits the number of potentially valid interpretations by decomposing the expressions into a sequence of compatible convex regions. The system uses A-star to search for the best possible interpretation of an expression. We provide a new lower bound estimate on the cost to goal that improves performance significantly.", "title": "" }, { "docid": "0b3e5df3c317b748280e6253965e59e5", "text": "The explicitly observed social relations from online social platforms have been widely incorporated into recommender systems to mitigate the data sparsity issue. However, the direct usage of explicit social relations may lead to an inferior performance due to the unreliability (e.g., noises) of observed links. To this end, the discovery of reliable relations among users plays a central role in advancing social recommendation. In this paper, we propose a novel approach to adaptively identify implicit friends toward discovering more credible user relations. Particularly, implicit friends are those who share similar tastes but could be distant from each other on the network topology of social relations. Methodologically, to find the implicit friends for each user, we first model the whole system as a heterogeneous information network, and then capture the similarity of users through the meta-path based embedding representation learning. Finally, based on the intuition that social relations have varying degrees of impact on different users, our approach adaptively incorporates different numbers of similar users as implicit friends for each user to alleviate the adverse impact of unreliable social relations for a more effective recommendation. Experimental analysis on three real-world datasets demonstrates the superiority of our method and explain why implicit friends are helpful in improving social recommendation.", "title": "" }, { "docid": "cece842f05a59c824a2272106ff2e3a9", "text": "Recent developments in sensor technology [1], [2] have resulted in the deployment of mobile robots equipped with multiple sensors, in specific real-world applications [3]–[6]. A robot equipped with multiple sensors, however, obtains information about different regions of the scene, in different formats and with varying levels of uncertainty. 
In addition, the bits of information obtained from different sensors may contradict or complement each other. One open challenge to the widespread deployment of robots is the ability to fully utilize the information obtained from each sensor, in order to operate robustly in dynamic environments. This paper presents a probabilistic framework to address autonomous multisensor information fusion on a humanoid robot. The robot exploits the known structure of the environment to autonomously model the expected performance of the individual information processing schemes. The learned models are used to effectively merge the available information. As a result, the robot is able to robustly detect and localize mobile obstacles in its environment. The algorithm is fully implemented and tested on a humanoid robot platform (Aldebaran Naos [7]) in the robot soccer scenario.", "title": "" }, { "docid": "76669901070ecccb0ce45ef861955f3a", "text": "We describe RDF123, a highly flexible open-source tool for translating spreadsheet data to RDF. Existing spreadsheet-to-RDF tools typically map only to star-shaped RDF graphs, i.e. each spreadsheet row is an instance, with each column representing a property. RDF123, on the other hand, allows users to define mappings to arbitrary graphs, thus allowing much richer spreadsheet semantics to be expressed. Further, each row in the spreadsheet can be mapped with a fairly different RDF scheme. Two interfaces are available. The first is a graphical application that allows users to create their mapping in an intuitive manner. The second is a Web service that takes as input a URL to a Google spreadsheet or CSV file and an RDF123 map, and provides RDF as output.", "title": "" }, { "docid": "83b79fc95e90a303f29a44ef8730a93f", "text": "The Internet of Things (IoT) is a concept that envisions all objects around us as part of the internet. IoT coverage is very wide and includes a variety of objects such as smart phones, tablets, digital cameras and sensors. Once all these devices are connected to each other, they enable more and more smart processes and services that support our basic needs, environment and health. Such an enormous number of devices connected to the internet provides many kinds of services and also produces huge amounts of data and information. Cloud computing is one such model for on-demand access to a shared pool of configurable resources (computers, networks, servers, storage, applications, services, and software) that can be provisioned as infrastructure, software and applications. Cloud-based platforms help to connect to the things around us so that we can access anything at any time and any place in a user-friendly manner using customized portals and built-in applications. Hence, the cloud acts as a front end to access the IoT. Applications that interact with devices like sensors have special requirements for massive storage to store big data, huge computation power to enable real-time processing of the data and information, and a high-speed network to stream audio or video. Here we describe how the Internet of Things and cloud computing can work together to address the Big Data problems. We also illustrate sensing as a service on the cloud using a few applications such as augmented reality, agriculture and environment monitoring. Finally, we propose a prototype model for providing sensing as a service on the cloud.", "title": "" }, { "docid": "b4284204ae7d9ef39091a651583b3450", "text": "Embedding learning, a.k.a.
representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs. In recent publications the embedding models were extended to also consider temporal evolutions, temporal patterns and subsymbolic representations. In this paper we map embedding models, which were developed purely as solutions to technical problems for modelling temporal knowledge graphs, to various cognitive memory functions, in particular to semantic and concept memory, episodic memory, sensory memory, short-term memory, and working memory. We discuss learning, query answering, the path from sensory input to semantic decoding, and relationships between episodic memory and semantic memory. We introduce a number of hypotheses on human memory that can be derived from the developed mathematical models. There are three main hypotheses. The first one is that semantic memory is described as triples and that episodic memory is described as triples in time. A second main hypothesis is that generalized entities have unique latent representations which are shared across memory functions and that are the basis for prediction, decision support and other functionalities executed by working memory. A third main hypothesis is that the latent representation for a time t, which summarizes all sensory information available at time t, is the basis for episodic memory. The proposed model includes both a recall of previous memories and the mental imagery of future events and sensory impressions.", "title": "" }, { "docid": "05feddebdddc575a1a29af390cea6196", "text": "Most previous works on saliency detection are dedicated to 2D images. Recently it has been shown that 3D visual information supplies a powerful cue for saliency analysis. In this paper, we propose a novel saliency method that works on depth images based on anisotropic center-surround difference. Instead of depending on absolute depth, we measure the saliency of a point by how much it outstands from surroundings, which takes the global depth structure into consideration. Besides, two common priors based on depth and location are used for refinement. The proposed method works within a complexity of O(N) and the evaluation on a dataset of over 1000 stereo images shows that our method outperforms state-of-the-art.", "title": "" }, { "docid": "298b65526920c7a094f009884439f3e4", "text": "Big Data concerns massive, heterogeneous, autonomous sources with distributed and decentralized control. These characteristics make it an extreme challenge for organizations using traditional data management mechanism to store and process these huge datasets. It is required to define a new paradigm and re-evaluate current system to manage and process Big Data. In this paper, the important characteristics, issues and challenges related to Big Data management has been explored. Various open source Big Data analytics frameworks that deal with Big Data analytics workloads have been discussed. Comparative study between the given frameworks and suitability of the same has been proposed.", "title": "" }, { "docid": "75f43dc0731d442e0d293c6e2f360f85", "text": "A novel polarization reconfigurable array antenna is proposed and fabricated. 
Measured results validate the performance improvement in port isolation and cross polarization level. The antenna can operate with vertical and horizontal polarizations at the same time and hence realize polarization diversity. By adding a polarization switch, the antenna can provide either RHCP or LHCP radiations. A total of four polarization modes can be facilitated with this antenna array and may find applications in new and up-coming wireless communication standards that require polarization diversity.", "title": "" }, { "docid": "e11e30bb83ada1b255c45392f41af6ff", "text": "We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. Our results suggest that while intrinsic tasks tend to exhibit a clear preference to particular types of contexts and higher dimensionality, more careful tuning is required for finding the optimal settings for most of the extrinsic tasks that we considered. Furthermore, for these extrinsic tasks, we find that once the benefit from increasing the embedding dimensionality is mostly exhausted, simple concatenation of word embeddings, learned with different context types, can yield further performance gains. As an additional contribution, we propose a new variant of the skip-gram model that learns word embeddings from weighted contexts of substitute words.", "title": "" }, { "docid": "720b7ede75f47e9ce4dc13b1876dbf33", "text": "The organization of lateral septal connections has been re-examined with respect to its newly defined subdivisions, using anterograde (PHAL) and retrograde (fluorogold) axonal tracer methods. The results confirm that progressively more ventral transverse bands in the hippocampus (defined by the orientation of the trisynaptic circuit) innervate progressively more ventral, transversely oriented sheets in the lateral septum. In addition, hippocampal field CA3 projects selectively to the caudal part of the lateral septal nucleus, which occupies topologically lateral regions of the transverse sheets, whereas field CA1 and the subiculum project selectively to the rostral and ventral parts of the lateral septal nucleus, which occupy topologically medial regions of the transverse sheets. Finally, the evidence suggests that progressively more ventral hippocampal bands innervate progressively thicker lateral septal sheets. In contrast, ascending inputs to the lateral septum appear to define at least 20 vertically oriented bands or subdivisions arranged orthogonal to the hippocampal input (Risold, P.Y. and Swanson, L.W., Chemoarchitecture of the rat lateral septal nucleus, Brain Res. Rev., 24 (1997) 91-113). Hypothalamic nuclei forming parts of behavior-specific subsystems share bidirectional connections with specific subdivisions of the lateral septal nucleus (especially the rostral part), suggesting that specific domains in the hippocampus may influence specific hypothalamic behavioral systems. In contrast, the caudal part of the lateral septal nucleus projects to the lateral hypothalamus and to the supramammillary nucleus, which projects back to the hippocampus and receives its major inputs from brainstem cell groups thought to regulate behavioral state. 
The neural system mediating defensive behavior shows these features rather clearly, and what is known about its organization is discussed in some detail.", "title": "" }, { "docid": "c47af26f11bc1a3aa978c0b3f7052126", "text": "We present our UWB system for the Semantic Textual Similarity (STS) task at SemEval 2016. Given two sentences, the system estimates the degree of their semantic similarity. We use state-of-the-art algorithms for the meaning representation and combine them with the best performing approaches to STS from previous years. These methods benefit from various sources of information, such as lexical, syntactic, and semantic. In the monolingual task, our system achieves a mean Pearson correlation of 75.7% compared with human annotators. In the cross-lingual task, our system achieves a correlation of 86.3% and is ranked first among 26 systems.", "title": "" }, { "docid": "09d7bb1b4b976e6d398f20dc34fc7678", "text": "A compact wideband quarter-wave transformer using microstrip lines is presented. The design relies on replacing a uniform microstrip line with a multi-stage equivalent circuit. The equivalent circuit is a cascade of either T or π networks. Design equations for both types of equivalent circuits have been derived. A quarter-wave transformer operating at 1 GHz is implemented. Simulation results indicate a −15 dB impedance bandwidth exceeding 64% for a 3-stage network with less than 0.25 dB of attenuation within the bandwidth. Both types of equivalent circuits provide more than 40% compaction with proper selection of components. Measured results for the fabricated unit deviate within acceptable limits. The designed quarter-wave transformer may be used to replace 90° transmission lines in various passive microwave components.", "title": "" } ]
scidocsrr
878038d4e352cf18e675e1c9908ee0d3
Ethnic Classification Based on Iris Images
[ { "docid": "a0e68c731cdb46d1bdf708997a871695", "text": "Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.", "title": "" }, { "docid": "4164774428ce68c4c61039eafeae03ea", "text": "Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.", "title": "" }, { "docid": "d82e41bcf0d25a728ddbad1dd875bd16", "text": "With an increasing emphasis on security, automated personal identification based on biometrics has been receiving extensive attention over the past decade. Iris recognition, as an emerging biometric recognition approach, is becoming a very active topic in both research and practical applications. In general, a typical iris recognition system includes iris imaging, iris liveness detection, and recognition. This paper focuses on the last issue and describes a new scheme for iris recognition from an image sequence. 
We first assess the quality of each image in the input sequence and select a clear iris image from such a sequence for subsequent recognition. A bank of spatial filters, whose kernels are suitable for iris recognition, is then used to capture local characteristics of the iris so as to produce discriminating texture features. Experimental results show that the proposed method has an encouraging performance. In particular, a comparative study of existing methods for iris recognition is conducted on an iris image database including 2,255 sequences from 213 subjects. Conclusions based on such a comparison using a nonparametric statistical method (the bootstrap) provide useful information for further research.", "title": "" } ]
[ { "docid": "2ea6addaae9187d69166ab2694f9e633", "text": "Convolutional neural networks are increasingly used outside the domain of image analysis, in particular in various areas of the natural sciences concerned with spatial data. Such networks often work out-of-the box, and in some cases entire model architectures from image analysis can be carried over to other problem domains almost unaltered. Unfortunately, this convenience does not trivially extend to data in non-euclidean spaces, such as spherical data. In this paper, we introduce two strategies for conducting convolutions on the sphere, using either a spherical-polar grid or a grid based on the cubed-sphere representation. We investigate the challenges that arise in this setting, and extend our discussion to include scenarios of spherical volumes, with several strategies for parameterizing the radial dimension. As a proof of concept, we conclude with an assessment of the performance of spherical convolutions in the context of molecular modelling, by considering structural environments within proteins. We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions in this setting. In particular, despite the lack of any domain specific feature-engineering, we demonstrate performance comparable to state-of-the-art methods in the field, which build on decades of domain-specific knowledge.", "title": "" }, { "docid": "910baa3349f1090e9c22814bff382b0b", "text": "Researchers in information science and related areas have developed various methods for analyzing textual data, such as survey responses. This article describes the application of analysis methods from two distinct fields, one method from interpretive social science and one method from statistical machine learning, to the same survey data. The results show that the two analyses produce some similar and some complementary insights about the phenomenon of interest, in this case, nonuse of social media. We compare both the processes of conducting these analyses and the results they produce to derive insights about each method's unique advantages and drawbacks, as well as the broader roles that these methods play in the respective fields where they are often used. These insights allow us to make more informed decisions about the tradeoffs in choosing different methods for analyzing textual data. Furthermore, this comparison suggests ways that such methods might be combined in novel and compelling ways. A Tale of Two Methods Text plays an important role in research, but analyzing textual data poses unique challenges. In the case of surveys, free‐text responses allow participants to report their experiences in detailed ways that might not be anticipated by researchers. However, this flexibility also creates challenges in synthesizing insights from the specificity of individual responses. This article compares two approaches to this problem, a qualitative approach from interpretive social science (Glaser & Strauss, 1967) and a quantitative approach from natural language processing (Blei, Ng, & Jordan, 2003). We demonstrate, through a running example relating to social media use and nonuse, that these methods involve surprisingly similar processes and produce surprisingly similar results. Surveys that include open‐ended, free‐text responses are often analyzed using qualitative methods (e.g., Baumer et al., 2013; Rader, Wash, & Brooks, 2012; Wang et al., 2011). 
Analytic methods from, e.g., grounded theory (Charmaz, 2006; Glaser & Strauss, 1967) can generate rich, thick descriptions (Geertz, 1973). Coming from interpretivist (cf. Ma, 2012; Orlikowski & Baroudi, 1991) analysis of empirical social phenomena, these approaches emphasize how a social group co‐constructs both their reality and its meaning. However, such methods are time‐ consuming and difficult to apply to massive datasets. Furthermore, concerns can emerge about the particular subject position(s), and concomitant bias(es), of the researcher (Clifford & Marcus, 1986). Methods from machine learning provide an alternative approach. Roberts, Stewart, and Tingley (2014a) analyzed open‐ended survey responses using topic models (Blei et al., 2003), which find statistical regularities in word co‐occurrence that often correspond to recognizable themes, events, or discourses (Jockers & Mimno, 2013). This method scales to billion‐word datasets and can arguably provide an analysis driven more by the documents than by human preconceptions. However, algorithmically defined “topics” run the risk of misleading researchers. Not only does properly making sense of the results require detailed technical knowledge, but these techniques forgo a certain degree of human contextual interpretive ability (Rost, Barkhuus, Cramer, & Brown, 2013). Moreover, an appeal to computational objectivity may embody fundamentally different epistemic commitments than those at work in interpretivist approaches (Ma, 2012; Orlikowski & Baroudi, 1991). Each method thus carries its own strengths and weaknesses. We can algorithmically identify latent patterns in complex datasets with relative speed and ease, but in a way that potentially forfeits an appreciation for context, subtlety, and an interpretive approach to social reality. Alternatively, we can conduct detailed analyses of particular sociotechnical practices, but the effort involved precludes analyzing data collected from larger pools of respondents. These tensions arise not only in the analysis of surveys but also for a wide variety of data sources, from usage logs and scraped data (Backstrom, Boldi, Rosa, Ugander, & Vigna, 2012; Schoenebeck, 2014) to popular media coverage (Harmon & Mazmanian, 2013; Lind & Salo, 2002; Portwood‐Stacer, 2013) to policy documents (Epstein, Roth, & Baumer, 2014). However, we have relatively little understanding of how these differences play out in practice. Mixed methods research combines different methods (e.g., Baumer et al., 2013; Schoenebeck, 2014), but such work rarely examines similarities and differences in the methods themselves. To the authors' knowledge, no empirical comparative analysis has examined how approaches from the interpretive social sciences and computational analysis techniques either converge, diverge, or both when applied to the same data. To conduct such a comparison, we consider a case study in social media reversion (Baumer, Guha, Quan, Mimno, & Gay, 2015), that is, when a social media user leaves a site to become a nonuser (Satchell & Dourish, 2009; Wyatt, 2003) but then subsequently returns to become a user again. Social media reversion represents a phenomenon that has been identified as important in prior work on technology nonuse (Baumer et al., 2013; Brubaker, Ananny, & Crawford, 2014; Schoenebeck, 2014) but has not yet received significant attention (Wyatt, 2003). 
To study this nascent phenomenon, we leverage data from an online campaign by the Dutch advertising firm Just that encouraged users to stay off of the social networking site Facebook for 99 days (http://99daysoffreedom.com/). Participants were then surveyed by Just after 33, 66, and 99 days. Of the more than 5,000 survey responses collected, 1,095 reported returning to Facebook before 99 days had passed. We analyzed these participants' descriptions of their experiences in returning to the site. These data provide a prime opportunity to compare qualitative and computational methods. The dataset is large enough that a computational text analysis will provide meaningful results, but it is small enough to ensure that an interpretive social scientific approach remains tractable. These data are analyzed using two separate approaches—grounded theory (Glaser & Strauss, 1967) and statistical topic modeling (Blei et al., 2003)—which were conducted independently by separate authors. We chose these particular methods due to their respective popularity among interpretive and computational approaches, their roughly analogous goals of identifying thematic patterns in unstructured text data, and their popularity among information science researchers. However, conducting a comparison of these methods purely as abstract analytic techniques would be nearly impossible. Neither topic modeling nor grounded theory are applied purely by rote. Rather, each requires nonnegligible amounts of researchers' subjective judgment, both in the application process and in interpreting results. Indeed, if two researchers conducted independent analyses following the principles of grounded theory, one would expect different results from each. Comparing these results would highlight where each analysis confirms, or perhaps discredits, the other's findings. This article takes a similar approach, but instead it compares results from a grounded theory analysis and from a topic modeling analysis. This comparison provides a better understanding of both where and how these methods for textual analysis might either converge or diverge. Our findings show both numerous areas of resonance and some key divergences. First, the authors identified several correspondences between the grounded theory themes and the algorithmically generated topics. These correspondences did not strictly map either set of results onto the other but suggest a many‐to‐many, or in most cases a “two‐to‐two,” mapping. Second, the authors found several alignments in analysis processes. Both methods begin with a provisional model or theory that is iteratively refined based on the data. Thus, the theory or model arises primarily from the data themselves. However, what is meant in each approach by the terms “model,” “theory,” “iteration,” and even “data” differ, perhaps dramatically. Third, grounded theory and topic modeling each serve important discursive functions in terms of legitimating particular analytic practices to a broader disciplinary audience. Both approaches attempt to shift the perception of particular methods on a spectrum from impressionistic to computational (Ramsay, 2003, 2011), each moving toward a potential center but from opposite directions. Thus, this article makes a primarily methodological contribution. It advances our understanding of how approaches from the interpretive social sciences and from computer science both resemble and differ from one another, both in terms of substantive results and in terms of analytic process. 
It also argues that these methods should be considered in the context of the broader theoretical and methodological shifts they represent in their respective fields. The article concludes by suggesting directions to orient future work that explores mixed computational-interpretive methods.", "title": "" }, { "docid": "2706e8ed981478ad4cb2db060b3d9844", "text": "We develop a technique for transfer learning in machine comprehension (MC) using a novel two-stage synthesis network (SynNet). Given a high-performing MC model in one domain, our technique aims to answer questions about documents in another domain, where we use no labeled data of question-answer pairs. Using the proposed SynNet with a pretrained model on the SQuAD dataset, we achieve an F1 measure of 46.6% on the challenging NewsQA dataset, approaching the performance of in-domain models (F1 measure of 50.0%) and outperforming the out-of-domain baseline by 7.6%, without use of provided annotations.", "title": "" }, { "docid": "96344ccc2aac1a7e7fbab96c1355fa10", "text": "A highly sensitive field-effect sensor immune to environmental potential fluctuation is proposed. The sensor circuit consists of two sensors, each with a charge sensing field effect transistor (FET) and an extended sensing gate (SG). By enlarging the sensing gate of an extended gate ISFET, a remarkable sensitivity of 130 mV/pH is achieved, exceeding the conventional Nernst limit of 59 mV/pH. The proposed differential sensing circuit consists of a pair of matching n-channel and p-channel ion sensitive sensors connected in parallel and biased at a matched transconductance bias point. Potential fluctuations in the electrolyte appear as a common-mode signal to the differential pair and are cancelled by the matched transistors. This novel differential measurement technique eliminates the need for a true reference electrode such as the bulky Ag/AgCl reference electrode and enables the use of the sensor for autonomous and implantable applications.", "title": "" }, { "docid": "be90932dfddcf02b33fc2ef573b8c910", "text": "Style-based Text Categorization: What Newspaper Am I Reading?", "title": "" }, { "docid": "874876e2ed9e4a2ba044cf62d408da55", "text": "It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26523 revisions of evolution.\n The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before. Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring's true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together.", "title": "" }, { "docid": "05edb3594eab114a5015557d4260e3db", "text": "In CMOS circuits, the reduction of the threshold voltage due to voltage scaling leads to an increase in subthreshold leakage current and hence static power dissipation.
We propose a novel technique called LECTOR for designing CMOS gates which significantly cuts down the leakage current without increasing the dynamic power dissipation. In the proposed technique, we introduce two leakage control transistors (a p-type and a n-type) within the logic gate for which the gate terminal of each leakage control transistor (LCT) is controlled by the source of the other. In this arrangement, one of the LCTs is always \"near its cutoff voltage\" for any input combination. This increases the resistance of the path from V/sub dd/ to ground, leading to significant decrease in leakage currents. The gate-level netlist of the given circuit is first converted into a static CMOS complex gate implementation and then LCTs are introduced to obtain a leakage-controlled circuit. The significant feature of LECTOR is that it works effectively in both active and idle states of the circuit, resulting in better leakage reduction compared to other techniques. Further, the proposed technique overcomes the limitations posed by other existing methods for leakage reduction. Experimental results indicate an average leakage reduction of 79.4% for MCNC'91 benchmark circuits.", "title": "" }, { "docid": "0a63a875b57b963372640f8fb527bd5c", "text": "KEMI-TORNIO UNIVERSITY OF APPLIED SCIENCES Degree programme: Business Information Technology Writer: Guo, Shuhang Thesis title: Analysis and evaluation of similarity metrics in collaborative filtering recommender system Pages (of which appendix): 62 (1) Date: May 15, 2014 Thesis instructor: Ryabov, Vladimir This research is focused on the field of recommender systems. The general aims of this thesis are to summary the state-of-the-art in recommendation systems, evaluate the efficiency of the traditional similarity metrics with varies of data sets, and propose an ideology to model new similarity metrics. The literatures on recommender systems were studied for summarizing the current development in this filed. The implementation of the recommendation and evaluation was achieved by Apache Mahout which provides an open source platform of recommender engine. By importing data information into the project, a customized recommender engine was built. Since the recommending results of collaborative filtering recommender significantly rely on the choice of similarity metrics and the types of the data, several traditional similarity metrics provided in Apache Mahout were examined by the evaluator offered in the project with five data sets collected by some academy groups. From the evaluation, I found out that the best performance of each similarity metric was achieved by optimizing the adjustable parameters. The features of each similarity metric were obtained and analyzed with practical data sets. In addition, an ideology by combining two traditional metrics was proposed in the thesis and it was proven applicable and efficient by the metrics combination of Pearson correlation and Euclidean distance. The observation and evaluation of traditional similarity metrics with practical data is helpful to understand their features and suitability, from which new models can be created. 
Besides, the ideology proposed for modeling new similarity metrics can be found useful both theoretically and practically.", "title": "" }, { "docid": "d86eb92d0d9b35b68f42b03c6587cfe3", "text": "Introduction The badminton smash is an essential component of a player’s repertoire and a significant stroke in gaining success as it is the most common winning shot, accounting for 53.9% of winning shots (Tsai and Chang, 1998; Tong and Hong, 2000; Rambely et al., 2005). The speed of the shuttlecock exceeds that of any other racket sport projectile with a maximum shuttle speed of 493 km/h (306 mph) reported in 2013 by Tan Boon Heong. If a player is able to cause the shuttle to travel at a higher velocity and give the opponent less reaction time to the shot, it would be expected that the smash would be a more effective weapon (Kollath, 1996; Sakurai and Ohtsuki, 2000).", "title": "" }, { "docid": "a2376c57c3c1c51f57f84788f4c6669f", "text": "Text categorization is a significant tool to manage and organize the surging text data. Many text categorization algorithms have been explored in previous literatures, such as KNN, Naïve Bayes and Support Vector Machine. KNN text categorization is an effective but less efficient classification method. In this paper, we propose an improved KNN algorithm for text categorization, which builds the classification model by combining constrained one pass clustering algorithm and KNN text categorization. Empirical results on three benchmark corpuses show that our algorithm can reduce the text similarity computation substantially and outperform the-state-of-the-art KNN, Naïve Bayes and Support Vector Machine classifiers. In addition, the classification model constructed by the proposed algorithm can be updated incrementally, and it is valuable in practical application.", "title": "" }, { "docid": "4a1a9504603177613cbc51c427de39d0", "text": "A novel and low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed layout design synthesizes refocusing slices directly from micro images by omitting the process for the commonly used sub-aperture extraction. Therefore, intellectual property cores, containing switch controlled Finite Impulse Response (FIR) filters, are developed and applied to the Field Programmable Gate Array (FPGA) XC6SLX45 from Xilinx. Enabling the hardware design to work economically, the FIR filters are composed of stored product as well as upsampling and interpolation techniques in order to achieve an ideal relation between image resolution, delay time, power consumption and the demand of logic gates. The video output is transmitted via High-Definition Multimedia Interface (HDMI) with a resolution of 720p at a frame rate of 60 fps conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.", "title": "" }, { "docid": "bee01b9bd3beb41b0ca963c05378a93f", "text": "Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. 
Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.", "title": "" }, { "docid": "1a0889892eb87ebd26abb0a295fae51b", "text": "The fairness notion of maximin share (MMS) guarantee underlies a deployed algorithm for allocating indivisible goods under additive valuations. Our goal is to understand when we can expect to be able to give each player his MMS guarantee. Previous work has shown that such an MMS allocation may not exist, but the counterexample requires a number of goods that is exponential in the number of players; we give a new construction that uses only a linear number of goods. On the positive side, we formalize the intuition that these counterexamples are very delicate by designing an algorithm that provably finds an MMS allocation with high probability when valuations are drawn at random.", "title": "" }, { "docid": "1d56775bf9e993e0577c4d03131e7dc4", "text": "Canonical correlation analysis (CCA) is a classical method for seeking correlations between two multivariate data sets. During the last ten years, it has received more and more attention in the machine learning community in the form of novel computational formulations and a plethora of applications. We review recent developments in Bayesian models and inference methods for CCA which are attractive for their potential in hierarchical extensions and for coping with the combination of large dimensionalities and small sample sizes. The existing methods have not been particularly successful in fulfilling the promise yet; we introduce a novel efficient solution that imposes group-wise sparsity to estimate the posterior of an extended model which not only extracts the statistical dependencies (correlations) between data sets but also decomposes the data into shared and data set-specific components. In statistics literature the model is known as inter-battery factor analysis (IBFA), for which we now provide a Bayesian treatment.", "title": "" }, { "docid": "c9bc670fae6dd0f2274bb18492260372", "text": "We present an efficient GPU-based parallel LSH algorithm to perform approximate k-nearest neighbor computation in high-dimensional spaces. We use the Bi-level LSH algorithm, which can compute k-nearest neighbors with higher accuracy and is amenable to parallelization. During the first level, we use the parallel RP-tree algorithm to partition datasets into several groups so that items similar to each other are clustered together. The second level involves computing the Bi-Level LSH code for each item and constructing a hierarchical hash table. The hash table is based on parallel cuckoo hashing and Morton curves. In the query step, we use GPU-based work queues to accelerate short-list search, which is one of the main bottlenecks in LSH-based algorithms. We demonstrate the results on large image datasets with 200,000 images which are represented as 512 dimensional vectors. 
In practice, our GPU implementation can obtain more than 40X acceleration over a single-core CPU-based LSH implementation.", "title": "" }, { "docid": "6176a2fd4e07d0c72a53c6207af305ca", "text": "At present, Bluetooth Low Energy (BLE) is dominantly used in commercially available Internet of Things (IoT) devices -- such as smart watches, fitness trackers, and smart appliances. Compared to classic Bluetooth, BLE has been simplified in many ways that include its connection establishment, data exchange, and encryption processes. Unfortunately, this simplification comes at a cost. For example, only a star topology is supported in BLE environments and a peripheral (an IoT device) can communicate with only one gateway (e.g. a smartphone, or a BLE hub) at a set time. When a peripheral goes out of range, it loses connectivity to a gateway, and cannot connect and seamlessly communicate with another gateway without user interventions. In other words, BLE connections do not get automatically migrated or handed-off to another gateway. In this paper, we propose a system which brings seamless connectivity to BLE-capable mobile IoT devices in an environment that consists of a network of gateways. Our framework ensures that unmodified, commercial off-the-shelf BLE devices seamlessly and securely connect to a nearby gateway without any user intervention.", "title": "" }, { "docid": "5deaf3ef06be439ad0715355d3592cff", "text": "Hybrid reconfigurable logic circuits were fabricated by integrating memristor-based crossbars onto a foundry-built CMOS (complementary metal-oxide-semiconductor) platform using nanoimprint lithography, as well as materials and processes that were compatible with the CMOS. Titanium dioxide thin-film memristors served as the configuration bits and switches in a data routing network and were connected to gate-level CMOS components that acted as logic elements, in a manner similar to a field programmable gate array. We analyzed the chips using a purpose-built testing system, and demonstrated the ability to configure individual devices, use them to wire up various logic gates and a flip-flop, and then reconfigure devices.", "title": "" }, { "docid": "7b170913f315cf5f240958ffbde6697e", "text": "We show that single-digit “Nishio” subproblems in n×n Sudoku puzzles may be solved in time o(2n), faster than previous solutions such as the pattern overlay method. We also show that single-digit deduction in Sudoku is NP-hard.", "title": "" }, { "docid": "c22c34214e0f3c4d80be81d706233f96", "text": "An alternating-current light-emitting diode (AC-LED) driver is implemented between the grid and lamp to eliminate the disadvantages of a directly grid-tied AC-LED lamp. In order to highlight the benefits of AC-LED technology, a single-stage converter with few components is adopted. A high power-factor (PF) single-stage bridgeless AC/AC converter is proposed with higher efficiency, greater power factor, less harmonics to pass IEC 61000-3-2 class C, and better regulation of output current. The brightness and flicker frequency issues caused by a low-frequency sinusoidal input are surpassed by the implementation of a high-frequency square-wave output current. In addition, the characteristics of the proposed circuit are discussed and analyzed in order to design the AC-LED driver. 
Finally, some simulation and experimental results are shown to verify this proposed scheme.", "title": "" }, { "docid": "e7cf2e5d05818eaded8a5565a9bf42e4", "text": "We design and implement the first private and anonymous decentralized crowdsourcing system ZebraLancer, and overcome two fundamental challenges of decentralizing crowdsourcing, i.e. data leakage and identity breach. First, our outsource-then-prove methodology resolves the tension between blockchain transparency and data confidentiality, which is critical in crowdsourcing use-case. ZebraLancer ensures: (i) a requester will not pay more than what data deserve, according to a policy announced when her task is published via the blockchain; (ii) each worker indeed gets a payment based on the policy, if he submits data to the blockchain; (iii) the above properties are realized not only without a central arbiter, but also without leaking the data to the open blockchain. Furthermore, the transparency of blockchain allows one to infer private information about workers and requesters through their participation history. On the other hand, allowing anonymity will enable a malicious worker to submit multiple times to reap rewards. ZebraLancer overcomes this problem by allowing anonymous requests/submissions without sacrificing the accountability. The idea behind is a subtle linkability: if a worker submits twice to a task, anyone can link the submissions, or else he stays anonymous and unlinkable across tasks. To realize this delicate linkability, we put forward a novel cryptographic concept, i.e. the common-prefix-linkable anonymous authentication. We remark the new anonymous authentication scheme might be of independent interest. Finally, we implement our protocol for a common image annotation task and deploy it in a test net of Ethereum. The experiment results show the applicability of our protocol with the existing real-world blockchain.", "title": "" } ]
scidocsrr
37a0ca2307d20db33bfe8ffb7cec20b6
On Passive Wireless Device Fingerprinting using Infinite Hidden Markov Random Field
[ { "docid": "9db9902c0e9d5fc24714554625a04c7a", "text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.", "title": "" } ]
[ { "docid": "b93b24d9a0f025c0dd0a6eedb24424bf", "text": "Comic books are a kind of storytelling graphic publication mainly expressed through abstract line drawings. As a key to the story lines, comic characters play an important role in the story, and their detection is an essential part of comic book analysis. For this purpose, the task includes (1) locating characters in comic pages and (2) identifying them, which is called specific character detection. Corresponding to different scenes of comic books, one specific character can be represented by various expressions coupled with rotations, occlusions, and other perspective drawing effects, which challenge the detection. In this paper, we focus on stable features regarding the possible transformations and propose a framework to detect them. Specifically, some discriminative features are selected as detectors for characterizing characters, on the basis of a training dataset. Based on the detectors, the drawings of the same characters in different scenes can be detected. The methodology has been tested and validated on 6 comic titles. Despite the drastic changes across different scenes, the proposed method achieved detection of 70% of the comic characters.", "title": "" }, { "docid": "35f13368398debb4b45a550fb5e9514d", "text": "This paper reviews the Bayesian approach to model selection and model averaging. In this review, I emphasize objective Bayesian methods based on noninformative priors. I will also discuss implementation details, approximations, and relationships to other methods. Copyright 2000 Academic Press.", "title": "" }, { "docid": "07a1fca1b738cb550a7f384bd3e8de23", "text": "American Library Association /ALA/ American Library Directory bibliographic record bibliography binding blanket order", "title": "" }, { "docid": "2e5d6c99ac0d02711d9586176e9f176f", "text": "Every year billions of Euros are lost worldwide due to credit card fraud, forcing financial institutions to continuously improve their fraud detection systems. In recent years, several studies have proposed the use of machine learning and data mining techniques to address this problem. However, most studies used some sort of misclassification measure to evaluate the different solutions, and do not take into account the actual financial costs associated with the fraud detection process. Moreover, when constructing a credit card fraud detection model, it is very important to extract the right features from the transactional data. This is usually done by aggregating the transactions in order to observe the spending behavioral patterns of the customers. In this paper we expand the transaction aggregation strategy, and propose to create a new set of features based on analyzing the periodic behavior of the time of a transaction using the von Mises distribution. Then, using a real credit card fraud dataset provided by a large European card processing company, we compare state-of-the-art credit card fraud detection models, and evaluate how the different sets of features have an impact on the results. By including the proposed periodic features into the methods, the results show an average increase in savings of 13%. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "71759cdcf18dabecf1d002727eb9d8b8", "text": "A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears.
Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.", "title": "" }, { "docid": "861de9cbe8b6571ac5c723420fbd1ea5", "text": "Utilizing a standard 0.18-mum CMOS process, a receiver frontend and a local oscillator (LO) module are implemented for RF applications at the 24-GHz industrial, scientific, medical band. The proposed frontend is composed of a three-stage low-noise amplifier, a down-conversion mixer, and IF amplifiers. With an IF frequency of 4.82 GHz, the fabricated circuit demonstrates a conversion gain of 28.4 dB and a noise figure of 6.0 dB while maintaining an input return loss better than 14 dB. The measured P in - 1dB and IIP3 of the receiver frontend are -23.2 and -13.0 dBm, respectively. In addition, a circuit module, which generates the required dual down-conversion LO signals, is also included in this study. The proposed LO generator consists of a 19-GHz low-phase-noise voltage-controlled oscillator (VCO), a 4 : 1 frequency divider, and a quadrature phase-tuning circuit. From the measurement results, the VCO exhibits a tuning range of 850 MHz and a phase noise of -110 dBc/Hz at 1-MHz offset frequency. Operated at a supply voltage of 1.8 V, the current consumptions for the receiver frontend and the LO generator are both 30 mA.", "title": "" }, { "docid": "d39843f342646e4d338ab92bb7391d76", "text": "In this paper, a double-axis planar micro-fluxgate magnetic sensor and its front-end circuitry are presented. The ferromagnetic core material, i.e., the Vitrovac 6025 X, has been deposited on top of the coils with the dc-magnetron sputtering technique, which is a new type of procedure with respect to the existing solutions in the field of fluxgate sensors. This procedure allows us to obtain a core with the good magnetic properties of an amorphous ferromagnetic material, which is typical of a core with 25-mum thickness, but with a thickness of only 1 mum, which is typical of an electrodeposited core. The micro-Fluxgate has been realized in a 0.5- mum CMOS process using copper metal lines to realize the excitation coil and aluminum metal lines for the sensing coil, whereas the integrated interface circuitry for exciting and reading out the sensor has been realized in a 0.35-mum CMOS technology. Applying a triangular excitation current of 18 mA peak at 100 kHz, the magnetic sensitivity achieved is about 10 LSB/muT [using a 13-bit analog-to-digital converter (ADC)], which is suitable for detecting the Earth's magnetic field (plusmn60 muT), whereas the linearity error is 3% of the full scale. The maximum angle error of the sensor evaluating the Earth magnetic field is 2deg. The power consumption of the sensor is about 13.7 mW. 
The total power consumption of the system is about 90 mW.", "title": "" }, { "docid": "1f8b3933dc49d87204ba934f82f2f84f", "text": "While journalism is evolving toward a rather open-minded participatory paradigm, social media presents overwhelming streams of data that make it difficult to identify the information of a journalist's interest. Given the increasing interest of journalists in broadening and democratizing news by incorporating social media sources, we have developed TweetGathering, a prototype tool that provides curated and contextualized access to news stories on Twitter. This tool was built with the aim of assisting journalists both with gathering and with researching news stories as users comment on them. Five journalism professionals who tested the tool found helpful characteristics that could assist them with gathering additional facts on breaking news, as well as facilitating discovery of potential information sources such as witnesses in the geographical locations of news.", "title": "" }, { "docid": "7699f4fa25a47fca0de320b8bbe6ff00", "text": "Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other topics related to the broad field. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and are a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as a technical course of study, these curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are under represented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and that are consistent with the broad suite of professional needs in this discipline.", "title": "" }, { "docid": "30260d1a4a936c79e6911e1e91c3a84a", "text": "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-ofthe-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.", "title": "" }, { "docid": "fd14b9e25affb05fd9b05036f3ce350b", "text": "Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. 
Occlusion handling is one of the most important problems in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect a pedestrian by observing only a part of a proposal. Extensive experiments on the Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11.89%, outperforming the second best method by 10%.", "title": "" }, { "docid": "bf272aa2413f1bc186149e814604fb03", "text": "Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories.", "title": "" }, { "docid": "fae9db6e3522ec00793613abc3617dcc", "text": "The size, accessibility, and rate of growth of Online Social Media (OSM) have attracted cyber crime. One form of cyber crime that has been increasing steadily is phishing, where the goal (for the phishers) is to steal personal information from users which can be used for fraudulent purposes. Although the research community and industry have been developing techniques to identify phishing attacks through emails and instant messaging (IM), there is very little research that provides a deeper understanding of phishing in online social media. Due to constraints of limited text space in social systems like Twitter, phishers have begun to use URL shortener services. In this study, we provide an overview of phishing attacks for this new scenario. One of our main conclusions is that phishers are using URL shorteners not only for reducing space but also to hide their identity. We observe that social media websites like Facebook, Habbo, Orkut are competing with e-commerce services like PayPal, eBay in terms of traffic and focus of phishers. Orkut, Habbo, and Facebook are amongst the top 5 brands targeted by phishers. We study the referrals from Twitter to understand the evolving phishing strategy. A staggering 89% of references from Twitter (users) are inorganic accounts which are sparsely connected amongst themselves, but have a large number of followers and followees. We observe that most of the phishing tweets spread by extensive use of attractive words and multiple hashtags. 
To the best of our knowledge, this is the first study to connect the phishing landscape using blacklisted phishing URLs from PhishTank, URL statistics from bit.ly and cues from Twitter to track the impact of phishing in online social media.", "title": "" }, { "docid": "e3913c904630d23b7133978a1116bc57", "text": "A novel self-substrate-triggered (SST) technique is proposed to solve the nonuniform turn-on issue of the multi-finger GGNMOS for ESD protection. The first turned-on center finger is used to trigger on all fingers in the GGNMOS structure with the self-substrate-triggered technique. So, the turn-on uniformity and ESD robustness of GGNMOS can be greatly improved by the newly proposed self-substrate-triggered technique.", "title": "" }, { "docid": "8952cc1f9df1799bec6bcf5b5a5af8a0", "text": "Despite recent progress in understanding the cancer genome, there is still a relative delay in understanding the full aspects of the glycome and glycoproteome of cancer. Glycobiology has been instrumental in relevant discoveries in various biological and medical fields, and has contributed to the deciphering of several human diseases. Glycans are involved in fundamental molecular and cell biology processes occurring in cancer, such as cell signalling and communication, tumour cell dissociation and invasion, cell–matrix interactions, tumour angiogenesis, immune modulation and metastasis formation. The roles of glycans in cancer have been highlighted by the fact that alterations in glycosylation regulate the development and progression of cancer, serving as important biomarkers and providing a set of specific targets for therapeutic intervention. This Review discusses the role of glycans in fundamental mechanisms controlling cancer development and progression, and their applications in oncology.", "title": "" }, { "docid": "86af81e39bce547a3f29b4851d033356", "text": "Empirical studies largely support the continuity hypothesis of dreaming. Despite previous research efforts, the exact formulation of the continuity hypothesis remains vague. The present paper focuses on two aspects: (1) the differential incorporation rate of different waking-life activities and (2) the magnitude to which interindividual differences in waking-life activities are reflected in corresponding differences in dream content. Using a correlational design, a positive, non-zero correlation coefficient will support the continuity hypothesis. Although many researchers stress the importance of emotional involvement for the incorporation rate of waking-life experiences into dreams, it has been hypothesized that highly focused cognitive processes such as reading, writing, etc. are rarely found in dreams due to the cholinergic activation of the brain during dreaming. The present findings based on dream diaries and the exact measurement of waking activities replicated two recent questionnaire studies. These findings indicate that it will be necessary to specify the continuity hypothesis more fully and include factors (e.g., type of waking-life experience, emotional involvement) which modulate the incorporation rate of waking-life experiences into dreams. Whether the cholinergic state of the brain during REM sleep or other alterations of brain physiology (e.g., down-regulation of the dorsolateral prefrontal cortex) are the underlying factors of the rare occurrence of highly focused cognitive processes in dreaming remains an open question. 
Although continuity between waking life and dreaming has been demonstrated, i.e., interindividual differences in the amount of time spent with specific waking-life activities are reflected in dream content, methodological issues (averaging over a two-week period, small number of dreams) have limited the capacity for detecting substantial relationships in all areas. Nevertheless, it might be concluded that the continuity hypothesis in its present general form is not valid and should be elaborated and tested in a more specific way.", "title": "" }, { "docid": "0ff96a055763aa3af122c42723b7c140", "text": "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-ofthe-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.", "title": "" }, { "docid": "45e5227a5b156806a3bdc560ce895651", "text": "This paper presents reconfigurable RF integrated circuits (ICs) for a compact implementation of an intelligent RF front-end for multiband and multistandard applications. Reconfigurability has been addressed at each level starting from the basic elements to the RF blocks and the overall front-end architecture. An active resistor tunable from 400 to 1600 /spl Omega/ up to 10 GHz has been designed and an equivalent model has been extracted. A fully tunable active inductor using a tunable feedback resistor has been proposed that provides inductances between 0.1-15 nH with Q>50 in the C-band. To demonstrate reconfigurability at the block level, voltage-controlled oscillators with very wide tuning ranges have been implemented in the C-band using the proposed active inductor, as well as using a switched-spiral resonator with capacitive tuning. The ICs have been implemented using 0.18-/spl mu/m Si-CMOS and 0.18-/spl mu/m SiGe-BiCMOS technologies.", "title": "" }, { "docid": "c3b099d2499346314657257ec35e8d78", "text": "In the fuzzy clustering literature, two main types of membership are usually considered: A relative type, termed probabilistic, and an absolute or possibilistic type, indicating the strength of the attribution to any cluster independent from the rest. There are works addressing the unification of the two schemes. Here, we focus on providing a model for the transition from one schema to the other, to exploit the dual information given by the two schemes, and to add flexibility for the interpretation of results. We apply an uncertainty model based on interval values to memberships in the clustering framework, obtaining a framework that we term graded possibility. 
We outline a basic example of graded possibilistic clustering algorithm and add some practical remarks about its implementation. The experimental demonstrations presented highlight the different properties attainable through appropriate implementation of a suitable graded possibilistic model. An interesting application is found in automated segmentation of diagnostic medical images, where the model provides an interactive visualization tool for this task", "title": "" }, { "docid": "736fa570042ced702eedb985416dd3df", "text": "BACKGROUND\nThe aggressive and heterogeneous nature of lung cancer has thwarted efforts to reduce mortality from this cancer through the use of screening. The advent of low-dose helical computed tomography (CT) altered the landscape of lung-cancer screening, with studies indicating that low-dose CT detects many tumors at early stages. The National Lung Screening Trial (NLST) was conducted to determine whether screening with low-dose CT could reduce mortality from lung cancer.\n\n\nMETHODS\nFrom August 2002 through April 2004, we enrolled 53,454 persons at high risk for lung cancer at 33 U.S. medical centers. Participants were randomly assigned to undergo three annual screenings with either low-dose CT (26,722 participants) or single-view posteroanterior chest radiography (26,732). Data were collected on cases of lung cancer and deaths from lung cancer that occurred through December 31, 2009.\n\n\nRESULTS\nThe rate of adherence to screening was more than 90%. The rate of positive screening tests was 24.2% with low-dose CT and 6.9% with radiography over all three rounds. A total of 96.4% of the positive screening results in the low-dose CT group and 94.5% in the radiography group were false positive results. The incidence of lung cancer was 645 cases per 100,000 person-years (1060 cancers) in the low-dose CT group, as compared with 572 cases per 100,000 person-years (941 cancers) in the radiography group (rate ratio, 1.13; 95% confidence interval [CI], 1.03 to 1.23). There were 247 deaths from lung cancer per 100,000 person-years in the low-dose CT group and 309 deaths per 100,000 person-years in the radiography group, representing a relative reduction in mortality from lung cancer with low-dose CT screening of 20.0% (95% CI, 6.8 to 26.7; P=0.004). The rate of death from any cause was reduced in the low-dose CT group, as compared with the radiography group, by 6.7% (95% CI, 1.2 to 13.6; P=0.02).\n\n\nCONCLUSIONS\nScreening with the use of low-dose CT reduces mortality from lung cancer. (Funded by the National Cancer Institute; National Lung Screening Trial ClinicalTrials.gov number, NCT00047385.).", "title": "" } ]
scidocsrr
e39b08f10862b0d670b6047728f333a9
Deep recurrent neural networks for predicting intraoperative and postoperative outcomes and trends
[ { "docid": "386cd963cf70c198b245a3251c732180", "text": "Support vector machines (SVMs) are promising methods for the prediction of -nancial timeseries because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in -nancial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction. c © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c247cca9f592cbc274c989bff1586ab9", "text": "We demonstrate a simple strategy to cope with missing data in sequential inputs, addressing the task of multilabel classification of diagnoses given clinical time series. Collected from the intensive care unit (ICU) of a major urban medical center, our data consists of multivariate time series of observations. The data is irregularly sampled, leading to missingness patterns in re-sampled sequences. In this work, we show the remarkable ability of RNNs to make effective use of binary indicators to directly model missing data, improving AUC and F1 significantly. However, while RNNs can learn arbitrary functions of the missing data and observations, linear models can only learn substitution values. For linear models and MLPs, we show an alternative strategy to capture this signal. Additionally, we evaluate LSTMs, MLPs, and linear models trained on missingness patterns only, showing that for several diseases, what tests are run can be more predictive than the results themselves.", "title": "" } ]
[ { "docid": "dbc28fb8fe14ac5fcfe5a1c52df5b8f0", "text": "Wireless Local Area Networks frequently referred to as WLANs or Wi-Fi networks are all the vehemence in recent times. People are installing these in houses, institutions, offices and hotels etc, without any vain. In search of fulfilling the wireless demands, Wi-Fi product vendors and service contributors are exploding up as quickly as possible. Wireless networks offer handiness, mobility, and can even be less expensive to put into practice than wired networks in many cases. With the consumer demand, vendor solutions and industry standards, wireless network technology is factual and is here to stay. But how far this technology is going provide a protected environment in terms of privacy is again an anonymous issue. Realizing the miscellaneous threats and vulnerabilities associated with 802.11-based wireless networks and ethically hacking them to make them more secure is what this paper is all about. On this segment, we'll seize a look at common threats, vulnerabilities related with wireless networks. And also we have discussed the entire process of cracking WEP (Wired Equivalent Privacy) encryption of WiFi, focusing the necessity to become familiar with scanning tools like Cain, NetStumbler, Kismet and MiniStumbler to help survey the area and tests we should run so as to strengthen our air signals.", "title": "" }, { "docid": "13211210ca0a3fda62fd44383eca6b52", "text": "Cancer is the most important cause of death for both men and women. The early detection of cancer can be helpful in curing the disease completely. So the requirement of techniques to detect the occurrence of cancer nodule in early stage is increasing. A disease that is commonly misdiagnosed is lung cancer. Earlier diagnosis of Lung Cancer saves enormous lives, failing which may lead to other severe problems causing sudden fatal end. Its cure rate and prediction depends mainly on the early detection and diagnosis of the disease. One of the most common forms of medical malpractices globally is an error in diagnosis. Knowledge discovery and data mining have found numerous applications in business and scientific domain. Valuable knowledge can be discovered from application of data mining techniques in healthcare system. In this study, we briefly examine the potential use of classification based data mining techniques such as Rule based, Decision tree, Naïve Bayes and Artificial Neural Network to massive volume of healthcare data. The healthcare industry collects huge amounts of healthcare data which, unfortunately, are not “mined” to discover hidden information. For data preprocessing and effective decision making One Dependency Augmented Naïve Bayes classifier (ODANB) and naive creedal classifier 2 (NCC2) are used. This is an extension of naïve Bayes to imprecise probabilities that aims at delivering robust classifications also when dealing with small or incomplete data sets. Discovery of hidden patterns and relationships often goes unexploited. Diagnosis of Lung Cancer Disease can answer complex “what if” queries which traditional decision support systems cannot. Using generic lung cancer symptoms such as age, sex, Wheezing, Shortness of breath, Pain in shoulder, chest, arm, it can predict the likelihood of patients getting a lung cancer disease. Aim of the paper is to propose a model for early detection and correct diagnosis of the disease which will help the doctor in saving the life of the patient. 
Keywords—Lung cancer, Naive Bayes, ODANB, NCC2, Data Mining, Classification.", "title": "" }, { "docid": "1298ddbeea84f6299e865708fd9549a6", "text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.", "title": "" }, { "docid": "2445b8d7618c051acd743f65ef6f588a", "text": "Recent developments in analysis methods on the non-linear and non-stationary data have received large attention by the image analysts. In 1998, Huang introduced the empirical mode decomposition (EMD) in signal processing. The EMD approach, fully unsupervised, proved reliable monodimensional (seismic and biomedical) signals. The main contribution of our approach is to apply the EMD to texture extraction and image filtering, which are widely recognized as a difficult and challenging computer vision problem. We developed an algorithm based on bidimensional empirical mode decomposition (BEMD) to extract features at multiple scales or spatial frequencies. These features, called intrinsic mode functions, are extracted by a sifting process. The bidimensional sifting process is realized using morphological operators to detect regional maxima and thanks to radial basis function for surface interpolation. The performance of the texture extraction algorithms, using BEMD method, is demonstrated in the experiment with both synthetic and natural images. q 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6c893b6c72f932978a996b6d6283bc02", "text": "Deep metric learning aims to learn an embedding function, modeled as deep neural network. This embedding function usually puts semantically similar images close while dissimilar images far from each other in the learned embedding space. Recently, ensemble has been applied to deep metric learning to yield state-of-the-art results. As one important aspect of ensemble, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks, so that each learner can attend to different parts of the object. We also propose a divergence loss, which encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning and experimental results show that it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.", "title": "" }, { "docid": "9e0ded0d1f913dce7d0ea6aab115678c", "text": "DevOps is changing the way organizations develop and deploy applications and service customers. Many organizations want to apply DevOps, but they are concerned by the security aspects of the produced software. 
This has triggered the creation of the terms SecDevOps and DevSecOps. These terms refer to incorporating security practices in a DevOps environment by promoting the collaboration between the development teams, the operations teams, and the security teams. This paper surveys the literature from academia and industry to identify the main aspects of this trend. The main aspects that we found are: definition, security best practices, compliance, process automation, tools for SecDevOps, software configuration, team collaboration, availability of activity data and information secrecy. Although the number of relevant publications is low, we believe that the terms are not buzzwords, they imply important challenges that the security and software communities shall address to help organizations develop secure software while applying DevOps processes.", "title": "" }, { "docid": "dc817bc11276d76f8d97f67e4b1b2155", "text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.", "title": "" }, { "docid": "bfd7c204dec258679e15ce477df04cad", "text": "Clarification is needed regarding the definitions and classification of groove and hollowness of the infraorbital region depending on the cause, anatomical characteristics, and appearance. Grooves in the infraorbital region can be classified as nasojugal grooves (or folds), tear trough deformities, and palpebromalar grooves; these can be differentiated based on anatomical characteristics. They are caused by the herniation of intraorbital fat, atrophy of the skin and subcutaneous fat, contraction of the orbital part of the orbicularis oculi muscle or squinting, and malar bone resorption. Safe and successful treatment requires an optimal choice of filler and treatment method. The choice between a cannula and needle depends on various factors; a needle is better for injections into a subdermal area in a relatively safe plane, while a cannula is recommended for avoiding vascular compromise when injecting filler into a deep fat layer and releasing fibrotic ligamentous structures. The injection of a soft-tissue filler into the subcutaneous fat tissue is recommended for treating mild indentations around the orbital rim and nasojugal region. Reducing the tethering effect of ligamentous structures by undermining using a cannula prior to the filler injection is recommended for treating relatively deep and fine indentations. 
The treatment of mild prolapse of the intraorbital septal fat or broad flattening of the infraorbital region can be improved by restoring the volume deficiency using a relatively firm filler.", "title": "" }, { "docid": "78e8f84224549b75584c59591a8febef", "text": "Our goal is to design architectures that retain the groundbreaking performance of Convolutional Neural Networks (CNNs) for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. (e) We further provide additional results for the problem of facial part segmentation. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks.", "title": "" }, { "docid": "424239765383edd8079d90f63b3fde1d", "text": "The availability of huge amounts of medical data leads to the need for powerful data analysis tools to extract useful knowledge. Researchers have long been concerned with applying statistical and data mining tools to improve data analysis on large data sets. Disease diagnosis is one of the applications where data mining tools are proving successful results. Heart disease is the leading cause of death all over the world in the past ten years. Several researchers are using statistical and data mining tools to help health care professionals in the diagnosis of heart disease. Using single data mining technique in the diagnosis of heart disease has been comprehensively investigated showing acceptable levels of accuracy. Recently, researchers have been investigating the effect of hybridizing more than one technique showing enhanced results in the diagnosis of heart disease. However, using data mining techniques to identify a suitable treatment for heart disease patients has received less attention. This paper identifies gaps in the research on heart disease diagnosis and treatment and proposes a model to systematically close those gaps to discover if applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.", "title": "" }, { "docid": "665f109e8263b687764de476befcbab9", "text": "In this work we analyze the behavior on a company-internal social network site to determine which interaction patterns signal closeness between colleagues. Regression analysis suggests that employee behavior on social network sites (SNSs) reveals information about both professional and personal closeness. While some factors are predictive of general closeness (e.g. 
content recommendations), other factors signal that employees feel personal closeness towards their colleagues, but not professional closeness (e.g. mutual profile commenting). This analysis contributes to our understanding of how SNS behavior reflects relationship multiplexity: the multiple facets of our relationships with SNS connections.", "title": "" }, { "docid": "982df058d920dbb8b2c9d012b50b62a3", "text": "A recommendation system tracks past purchases of a group of users to make product recommendations to individual members of the group. In this paper we present a notion of competitive recommendation systems, building on recent theoretical work on this subject. We reduce the problem of achieving competitiveness to a problem in matrix reconstruction. We then present a matrix reconstruction scheme that is competitive: it requires a small overhead in the number of users and products to be sampled, delivering in the process a net utility that closely approximates the best possible with full knowledge of all user-product preferences.", "title": "" }, { "docid": "99c1ad04419fa0028724a26e757b1b90", "text": "Contrary to popular belief, despite decades of research in fingerprints, reliable fingerprint recognition is still an open problem. Extracting features out of poor quality prints is the most challenging problem faced in this area. This paper introduces a new approach for fingerprint enhancement based on Short Time Fourier Transform(STFT) Analysis. STFT is a well known technique in signal processing to analyze non-stationary signals. Here we extend its application to 2D fingerprint images. The algorithm simultaneously estimates all the intrinsic properties of the fingerprints such as the foreground region mask, local ridge orientation and local ridge frequency. Furthermore we propose a probabilistic approach of robustly estimating these parameters. We experimentally compare the proposed approach to other filtering approaches in literature and show that our technique performs favorably.", "title": "" }, { "docid": "9d55947637b358c4dc30d7ba49885472", "text": "Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models. CCS Concepts •Information systems→ Retrieval models and ranking;", "title": "" }, { "docid": "8996068836559be2b253cd04aeaa285b", "text": "We present AutonoVi-Sim, a novel high-fidelity simulation platform for autonomous driving data generation and driving strategy testing. AutonoVi-Sim is a collection of high-level extensible modules which allows the rapid development and testing of vehicle configurations and facilitates construction of complex traffic scenarios. 
Autonovi-Sim supports multiple vehicles with unique steering or acceleration limits, as well as unique tire parameters and dynamics profiles. Engineers can specify the specific vehicle sensor systems and vary time of day and weather conditions to generate robust data and gain insight into how conditions affect the performance of a particular algorithm. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians, allowing engineers to specify routes for these actors, or to create scripted scenarios which place the vehicle in dangerous reactive situations. Autonovi-Sim facilitates training of deep-learning algorithms by enabling data export from the vehicle's sensors, including camera data, LIDAR, relative positions of traffic participants, and detection and classification results. Thus, AutonoVi-Sim allows for the rapid prototyping, development and testing of autonomous driving algorithms under varying vehicle, road, traffic, and weather conditions. In this paper, we detail the simulator and provide specific performance and data benchmarks.", "title": "" }, { "docid": "2560535c3ad41b46e08b8b39f89f555b", "text": "Crises are unpredictable events that can impact on an organisation’s viability, credibility, and reputation, and few topics have generated greater interest in communication over the past 15 years. This paper builds on early theory such as Fink (1986), and extends the crisis life-cycle theoretical model to enable a better understanding and prediction of the changes and trends of mass media coverage during crises. This expanded model provides a framework to identify and understand the dynamic and multi-dimensional set of relationships that occurs during the crisis life cycle in a rapidly changing and challenging operational environment. Using the 2001 Ansett Airlines’ Easter groundings as a case study, this paper monitors mass media coverage during this organisational crisis. The analysis reinforces the view that, by using proactive strategies, public relations practitioners can better manage mass media crisis coverage. Further, the understanding gained by extending the crisis life cycle to track when and how mass media content changes may help public relations practitioners craft messages and supply information at the outset of each stage of the crisis, thereby maintaining control of the message.", "title": "" }, { "docid": "21df2b20c9ecd6831788e00970b3ca79", "text": "Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as, the ability to ensure security, performance guarantees or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called CloudNaaS. Customers can leverage CloudNaaS to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making CloudNaaS highly efficient. 
We evaluate an OpenFlow-based prototype of CloudNaaS and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.", "title": "" }, { "docid": "48a0e75b97fdaa734f033c6b7791e81f", "text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.", "title": "" }, { "docid": "50afcbdf0482c75ae41afd8525274933", "text": "Adhesive devices of digital pads of gecko lizards are formed by microscopic hair-like structures termed setae that derive from the interaction between the oberhautchen and the clear layer of the epidermis. The two layers form the shedding complex and permit skin shedding in lizards. Setae consist of a resistant but flexible corneous material largely made of keratin-associated beta-proteins (KA beta Ps, formerly called beta-keratins) of 8-22 kDa and of alpha-keratins of 45-60 kDa. 
In Gekko gecko, 19 sauropsid keratin-associated beta-proteins (sKAbetaPs) and at least two larger alpha-keratins are expressed in the setae. Some sKA beta Ps are rich in cysteine (111-114 amino acids), while others are rich in glycine (169-219 amino acids). In the entire genome of Anolis carolinensis 40 Ka beta Ps are present and participate in the formation of all types of scales, pad lamellae and claws. Nineteen sKA beta Ps comprise cysteine-rich 9.2-14.4 kDa proteins of 89-142 amino acids, and 19 are glycine-rich 16.5-22.0 kDa proteins containing 162-225 amino acids, and only two types of sKA beta Ps are cysteine- and glycine-poor proteins. Genes coding for these proteins contain an intron in the 5'-non-coding region, a typical characteristic of most sauropsid Ka beta Ps. Gecko KA beta Ps show a central amino acid region of high homology and a beta-pleated conformation that is likely responsible for the polymerization of Ka beta Ps into long and resistant filaments. The association of numerous filaments, probably over a framework of alpha-keratins, permits the formation of bundles of corneous material for the elongation of setae, which may be over 100 microm long. The terminals branching off each seta may derive from the organization of the cytoskeleton and from the mechanical separation of keratin bundles located at the terminal apex of setae.", "title": "" }, { "docid": "be7f0079a3462e9cf81d44002b8a340e", "text": "Long-term participation in creative activities has benefits for middle-aged and older people that may improve their adaptation to later life. We first investigated the factor structure of the Creative Benefits Scale and then used it to construct a model to help explain the connection between generativity and life satisfaction in adults who participated in creative hobbies. Participants included 546 adults between the ages of 40 and 88 (Mean = 58.30 years) who completed measures of life satisfaction, generativity, and the Creative Benefits Scale with its factors of Identity, Calming, Spirituality, and Recognition. Structural equation modeling was used to examine the connection of age with life satisfaction in older adults and to explore the effects of creativity on this relation. The proposed model of life satisfaction, incorporating age, creativity, and generativity, fit the data well, indicating that creativity may help explain the link between the generativity and life satisfaction.", "title": "" } ]
scidocsrr
adf42cb737da973c36f7a1576c0cb929
Putting Pieces Together: Combining FrameNet, VerbNet and WordNet for Robust Semantic Parsing
[ { "docid": "96782e8d5e5af7f518be6fba0e736931", "text": "This paper presents our basic approach to creating Proposition Bank, which involves adding a layer of semantic annotation to the Penn English TreeBank. Without attempting to confirm or disconfirm any particular semantic theory, our goal is to provide consistent argument labeling that will facilitate the automatic extraction of relational data. An argument such asthe window in John broke the window and in The window brokewould receive the same label in both sentences. In order to ensure reliable human annotation, we provide our annotators with explicit guidelines for labeling all of the syntactic and semantic frames of each particular verb. We give several examples of these guidelines and discuss the inter−annotator agreement figures. We also discuss our current experiments on the automatic expansion of our verb guidelines based on verb class membership. Our current rate of progress and our consistency of annotation demonstrate the feasibility of the task.", "title": "" } ]
[ { "docid": "79eb0a39106679e80bd1d1edcd100d4d", "text": "Multi-agent predictive modeling is an essential step for understanding physical, social and team-play systems. Recently, Interaction Networks (INs) were proposed for the task of modeling multi-agent physical systems. One of the drawbacks of INs is scaling with the number of interactions in the system (typically quadratic or higher order in the number of agents). In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents. We show that VAIN is effective for multiagent predictive modeling. Our method is evaluated on tasks from challenging multi-agent prediction domains: chess and soccer, and outperforms competing multi-agent approaches.", "title": "" }, { "docid": "18ab36acafc5e0d39d02cecb0db2f7b3", "text": "Trigeminal trophic syndrome is a rare complication after peripheral or central damage to the trigeminal nerve, characterized by sensorial impairment in the trigeminal nerve territory and self-induced nasal ulceration. Conditions that can affect the trigeminal nerve include brainstem cerebrovascular disease, diabetes, tabes, syringomyelia, and postencephalopathic parkinsonism; it can also occur following the surgical management of trigeminal neuralgia. Trigeminal trophic syndrome may develop months to years after trigeminal nerve insult. Its most common presentation is a crescent-shaped ulceration within the trigeminal sensory territory. The ala nasi is the most frequently affected site. Trigeminal trophic syndrome is notoriously difficult to diagnose and manage. A clear history is of paramount importance, with exclusion of malignant, fungal, granulomatous, vasculitic, or infective causes. We present a case of ulceration of the left ala nasi after brainstem cerebrovascular accident.", "title": "" }, { "docid": "532463ff1e5e91a2f9054cb86dcfa654", "text": "During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to well established finite-di↵erence time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time-domain. The method is now actively studied for various application contexts including those requiring to model light/matter interactions on the nanoscale. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining the use of a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.", "title": "" }, { "docid": "4d231af03ac60ccb1a7c17a5defe693a", "text": "This paper describes a hierarchical neural network we propose for sentence classification to extract product information from product documents. The network classifies each sentence in a document into attribute and condition classes on the basis of word sequences and sentence sequences in the document. Experimental results showed the method using the proposed network significantly outperformed baseline methods by taking semantic representation of word and sentence sequential data into account. 
We also evaluated the network with two different product domains (insurance and tourism domains) and found that it was effective for both the domains.", "title": "" }, { "docid": "784dc5ac8e639e3ba4103b4b8653b1ff", "text": "Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L/sub 1/ norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.", "title": "" }, { "docid": "c6cfc50062e42f51c9ac0db3b4faed83", "text": "We put forward two new measures of security for threshold schemes secure in the adaptive adversary model: security under concurrent composition; and security without the assumption of reliable erasure. Using novel constructions and analytical tools, in both these settings, we exhibit efficient secure threshold protocols for a variety of cryptographic applications. In particular, based on the recent scheme by Cramer-Shoup, we construct adaptively secure threshold cryptosystems secure against adaptive chosen ciphertext attack under the DDH intractability assumption. Our techniques are also applicable to other cryptosystems and signature schemes, like RSA, DSS, and ElGamal. Our techniques include the first efficient implementation, for a wide but special class of protocols, of secure channels in erasure-free adaptive model. Of independent interest, we present the notion of a committed proof.", "title": "" }, { "docid": "d7f878ed79899f72d5d7bf58a7dcaa40", "text": "We report in detail the decoding strategy that we used for the past two Darpa Rich Transcription evaluations (RT’03 and RT’04) which is based on finite state automata (FSA). We discuss the format of the static decoding graphs, the particulars of our Viterbi implementation, the lattice generation and the likelihood evaluation. This paper is intended to familiarize the reader with some of the design issues encountered when building an FSA decoder. Experimental results are given on the EARS database (English conversational telephone speech) with emphasis on our faster than real-time system.", "title": "" }, { "docid": "ed9c0cdb74950bf0f1288931707b9d08", "text": "Introduction This chapter reviews the theoretical and empirical literature on the concept of credibility and its areas of application relevant to information science and technology, encompassing several disciplinary approaches. An information seeker's environment—the Internet, television, newspapers, schools, libraries, bookstores, and social networks—abounds with information resources that need to be evaluated for both their usefulness and their likely level of accuracy. As people gain access to a wider variety of information resources, they face greater uncertainty regarding who and what can be believed and, indeed, who or what is responsible for the information they encounter. 
Moreover, they have to develop new skills and strategies for determining how to assess the credibility of an information source. Historically, the credibility of information has been maintained largely by professional knowledge workers such as editors, reviewers, publishers, news reporters, and librarians. Today, quality control mechanisms are evolving in such a way that a vast amount of information accessed through a wide variety of systems and resources is out of date, incomplete, poorly organized, or simply inaccurate (Janes & Rosenfeld, 1996). Credibility has been examined across a number of fields ranging from communication, information science, psychology, marketing, and the management sciences to interdisciplinary efforts in human-computer interaction (HCI). Each field has examined the construct and its practical significance using fundamentally different approaches, goals, and presuppositions, all of which results in conflicting views of credibility and its effects. The notion of credibility has been discussed at least since Aristotle's examination of ethos and his observations of speakers' relative abilities to persuade listeners. Disciplinary approaches to investigating credibility systematically developed only in the last century, beginning within the field of communication. A landmark among these efforts was the work of Hovland and colleagues (Hovland, Jannis, & Kelley, 1953; Hovland & Weiss, 1951), who focused on the influence of various characteristics of a source on a recipient's message acceptance. This work was followed by decades of interest in the relative credibility of media involving comparisons between newspapers, radio, television, Communication researchers have tended to focus on sources and media, viewing credibility as a perceived characteristic. Within information science, the focus is on the evaluation of information, most typically instantiated in documents and statements. Here, credibility has been viewed largely as a criterion for relevance judgment, with researchers focusing on how information seekers assess a document's likely level of This brief account highlights an often implicit focus on varying objects …", "title": "" }, { "docid": "e8dd0edd4ae06d53b78662f9acca09c5", "text": "A new methodology based on mixed linear models was developed for mapping QTLs with digenic epistasis and QTL×environment (QE) interactions. Reliable estimates of QTL main effects (additive and epistasis effects) can be obtained by the maximum-likelihood estimation method, while QE interaction effects (additive×environment interaction and epistasis×environment interaction) can be predicted by the-best-linear-unbiased-prediction (BLUP) method. Likelihood ratio and t statistics were combined for testing hypotheses about QTL effects and QE interactions. Monte Carlo simulations were conducted for evaluating the unbiasedness, accuracy, and power for parameter estimation in QTL mapping. The results indicated that the mixed-model approaches could provide unbiased estimates for both positions and effects of QTLs, as well as unbiased predicted values for QE interactions. Additionally, the mixed-model approaches also showed high accuracy and power in mapping QTLs with epistatic effects and QE interactions. 
Based on the models and the methodology, a computer software program (QTLMapper version 1.0) was developed, which is suitable for interval mapping of QTLs with additive, additive×additive epistasis, and their environment interactions.", "title": "" }, { "docid": "7699f4fa25a47fca0de320b8bbe6ff00", "text": "Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other topics related to the broad field. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and are a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as a technical course of study, these curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are under represented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and that are consistent with the broad suite of professional needs in this discipline.", "title": "" }, { "docid": "9b55ca83ff9115e2ea0fe1b0b85aab21", "text": "This paper discusses the “Fine-Grained Sentiment Analysis on Financial Microblogs and News” task as part of SemEval-2017, specifically under the “Detecting sentiment, humour, and truth” theme. This task contains two tracks, where the first one concerns Microblog messages and the second one covers News Statements and Headlines. The main goal behind both tracks was to predict the sentiment score for each of the mentioned companies/stocks. The sentiment scores for each text instance adopted floating point values in the range of -1 (very negative/bearish) to 1 (very positive/bullish), with 0 designating neutral sentiment. This task attracted a total of 32 participants, with 25 participating in Track 1 and 29 in Track 2.", "title": "" }, { "docid": "9e65315d4e241dc8d4ea777247f7c733", "text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. 
The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "0bb2798c21d9f7420ea47c717578e94d", "text": "Blockchain has drawn attention as the next-generation financial technology due to its security that suits the informatization era. In particular, it provides security through the authentication of peers that share virtual cash, encryption, and the generation of hash value. According to the global financial industry, the market for security-based blockchain technology is expected to grow to about USD 20 billion by 2020. In addition, blockchain can be applied beyond the Internet of Things (IoT) environment; its applications are expected to expand. Cloud computing has been dramatically adopted in all IT environments for its efficiency and availability. In this paper, we discuss the concept of blockchain technology and its hot research trends. In addition, we will study how to adapt blockchain security to cloud computing and its secure solutions in detail.", "title": "" }, { "docid": "5ff999d4cf322366b3c6c8cfac878c1a", "text": "Mammalian associative learning is organized into separate anatomically defined functional systems. We illustrate the organization of two of these systems, Pavlovian fear conditioning and Pavlovian eyeblink conditioning, by describing studies using mutant mice, brain stimulation and recording, brain lesions and direct pharmacological manipulations of specific brain regions. The amygdala serves as the neuroanatomical hub of the former, whereas the cerebellum is the hub of the latter. Pathways that carry information about signals for biologically important events arrive at these hubs by circuitry that depends on stimulus modality and complexity. Within the amygdala and cerebellum, neural plasticity occurs because of convergence of these stimuli and the biologically important information they predict. This neural plasticity is the physical basis of associative memory formation, and although the intracellular mechanisms of plasticity within these structures share some similarities, they differ significantly. The last Annual Review of Psychology article to specifically tackle the question of mammalian associative learning ( Lavond et al. 1993 ) persuasively argued that identifiable \"essential\" circuits encode memories formed during associative learning. The next dozen years saw breathtaking progress not only in detailing those essential circuits but also in identifying the essential processes occurring at the synapses (e.g., Bi & Poo 2001, Martinez & Derrick 1996 ) and within the neurons (e.g., Malinow & Malenka 2002, Murthy & De Camilli 2003 ) that make up those circuits. 
In this chapter, we describe the orientation that the neuroscience of learning has taken and review some of the progress made within that orientation.", "title": "" }, { "docid": "1b647a09085a41e66f8c1e3031793fed", "text": "In this paper we apply distributional semantic information to document-level machine translation. We train monolingual and bilingual word vector models on large corpora and we evaluate them first in a cross-lingual lexical substitution task and then on the final translation task. For translation, we incorporate the semantic information in a statistical document-level decoder (Docent), by enforcing translation choices that are semantically similar to the context. As expected, the bilingual word vector models are more appropriate for the purpose of translation. The final document-level translator incorporating the semantic model outperforms the basic Docent (without semantics) and also performs slightly over a standard sentencelevel SMT system in terms of ULC (the average of a set of standard automatic evaluation metrics for MT). Finally, we also present some manual analysis of the translations of some concrete documents.", "title": "" }, { "docid": "c206399c6ebf96f3de3aa5fdb10db49d", "text": "Canine monocytotropic ehrlichiosis (CME), caused by the rickettsia Ehrlichia canis, an important canine disease with a worldwide distribution. Diagnosis of the disease can be challenging due to its different phases and multiple clinical manifestations. CME should be suspected when a compatible history (living in or traveling to an endemic region, previous tick exposure), typical clinical signs and characteristic hematological and biochemical abnormalities are present. Traditional diagnostic techniques including hematology, cytology, serology and isolation are valuable diagnostic tools for CME, however a definitive diagnosis of E. canis infection requires molecular techniques. This article reviews the current literature covering the diagnosis of infection caused by E. canis.", "title": "" }, { "docid": "b00311730b7b9b4f79cdd7bde5aa84f6", "text": "While neural networks demonstrate stronger capabilities in pattern recognition nowadays, they are also becoming larger and deeper. As a result, the effort needed to train a network also increases dramatically. In many cases, it is more practical to use a neural network intellectual property (IP) that an IP vendor has already trained. As we do not know about the training process, there can be security threats in the neural IP: the IP vendor (attacker) may embed hidden malicious functionality, i.e neural Trojans, into the neural IP. We show that this is an effective attack and provide three mitigation techniques: input anomaly detection, re-training, and input preprocessing. All the techniques are proven effective. The input anomaly detection approach is able to detect 99.8% of Trojan triggers although with 12.2% false positive. The re-training approach is able to prevent 94.1% of Trojan triggers from triggering the Trojan although it requires that the neural IP be reconfigurable. In the input preprocessing approach, 90.2% of Trojan triggers are rendered ineffective and no assumption about the neural IP is needed.", "title": "" }, { "docid": "f13cbc36f2c51c5735185751ddc2500e", "text": "This paper presents an overview of the road and traffic sign detection and recognition. 
It describes the characteristics of the road signs, the requirements and difficulties behind road sign detection and recognition, how to deal with outdoor images, and the different techniques used in image segmentation based on colour analysis and shape analysis. It also shows the techniques used for the recognition and classification of the road signs. Although image processing plays a central role in road sign recognition, especially in colour analysis, the paper points to many problems regarding the stability of the received colour information, variations of these colours with respect to daylight conditions, and the absence of a colour model that can lead to a good solution. This means that there is a lot of work to be done in the field, and a lot of improvement can be achieved. Neural networks were widely used in the detection and recognition of the road signs. The majority of the authors used neural networks as a recognizer and as a classifier. Some other techniques, such as template matching or classical classifiers, were also used. New techniques should be explored to increase robustness and to obtain faster systems for real-time applications.", "title": "" }, { "docid": "dd34e763b3fdf0a0a903b773fe1a84be", "text": "Natural language processing (NLP) is a vibrant field of interdisciplinary Computer Science research. Ultimately, NLP seeks to build intelligence into software so that software will be able to process a natural language as skillfully and artfully as humans. Prolog, a general purpose logic programming language, has been used extensively to develop NLP applications or components thereof. This report is concerned with introducing the interested reader to the broad field of NLP with respect to NLP applications that are built in Prolog or from Prolog components.", "title": "" } ]
scidocsrr
5c29014951bceae739de50a2938736f5
Assessing binary classifiers using only positive and unlabeled data
[ { "docid": "125655821a44bbce2646157c8465e345", "text": "Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms have recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VM algorithms is studied together, under a common experimental setting.", "title": "" } ]
[ { "docid": "e6b27bb9f2b74791af5e74c16c7c47da", "text": "Due to the storage and retrieval efficiency, hashing has been widely deployed to approximate nearest neighbor search for large-scale multimedia retrieval. Supervised hashing, which improves the quality of hash coding by exploiting the semantic similarity on data pairs, has received increasing attention recently. For most existing supervised hashing methods for image retrieval, an image is first represented as a vector of hand-crafted or machine-learned features, followed by another separate quantization step that generates binary codes. However, suboptimal hash coding may be produced, because the quantization error is not statistically minimized and the feature representation is not optimally compatible with the binary coding. In this paper, we propose a novel Deep Hashing Network (DHN) architecture for supervised hashing, in which we jointly learn good image representation tailored to hash coding and formally control the quantization error. The DHN model constitutes four key components: (1) a subnetwork with multiple convolution-pooling layers to capture image representations; (2) a fully-connected hashing layer to generate compact binary hash codes; (3) a pairwise crossentropy loss layer for similarity-preserving learning; and (4) a pairwise quantization loss for controlling hashing quality. Extensive experiments on standard image retrieval datasets show the proposed DHN model yields substantial boosts over latest state-of-the-art hashing methods.", "title": "" }, { "docid": "750a1dd126b0bb90def0bba34dc73cdd", "text": "Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this article, we present a novel skinning algorithm based on linear combination of dual quaternions. Even though our proposed method is approximate, it does not exhibit any of the artifacts inherent in previous methods and still permits an efficient GPU implementation. Upgrading an existing animation system from linear to dual quaternion skinning is very easy and has a relatively minor impact on runtime performance.", "title": "" }, { "docid": "0618529a20e00174369a05077294de5b", "text": "In this paper we present a case study of the steps leading up to the extraction of the spam bot payload found within a backdoor rootkit known as Backdoor.Rustock.B or Spam-Mailbot.c. Following the extraction of the spam module we focus our analysis on the steps necessary to decrypt the communications between the command and control server and infected hosts. Part of the discussion involves a method to extract the encryption key from within the malware binary and use that to decrypt the communications. The result is a better understanding of an advanced botnet communications scheme.", "title": "" }, { "docid": "5c0dea7721a5f63a11fe4df28c60d64f", "text": "INTRODUCTION\nReducing postoperative opioid consumption is a priority given its impact upon recovery, and the efficacy of ketamine as an opioid-sparing agent in children is debated. 
The goal of this study was to update a previous meta-analysis on the postoperative opioid-sparing effect of ketamine, adding trial sequential analysis (TSA) and four new studies.\n\n\nMATERIALS AND METHODS\nA comprehensive literature search was conducted to identify clinical trials that examined ketamine as a perioperative opioid-sparing agent in children and infants. Outcomes measured were postoperative opioid consumption to 48 h (primary outcome: postoperative opioid consumption to 24 h), postoperative pain intensity, postoperative nausea and vomiting and psychotomimetic symptoms. The data were combined to calculate the pooled mean difference, odds ratios or standard mean differences. In addition to this classical meta-analysis approach, a TSA was performed.\n\n\nRESULTS\nEleven articles were identified, with four added to seven from the previous meta-analysis. Ketamine did not exhibit a global postoperative opioid-sparing effect to 48 postoperative hours, nor did it decrease postoperative pain intensity. This result was confirmed using TSA, which found a lack of power to draw any conclusion regarding the primary outcome of this meta-analysis (postoperative opioid consumption to 24 h). Ketamine did not increase the prevalence of either postoperative nausea and vomiting or psychotomimetic complications.\n\n\nCONCLUSIONS\nThis meta-analysis did not find a postoperative opioid-sparing effect of ketamine. According to the TSA, this negative result might involve a lack of power of this meta-analysis. Further studies are needed in order to assess the postoperative opioid-sparing effects of ketamine in children.", "title": "" }, { "docid": "c9d0e46417146f31d8d79280146e3ca1", "text": "Generating images from a text description is as challenging as it is interesting. The Adversarial network performs in a competitive fashion where the networks are the rivalry of each other. With the introduction of Generative Adversarial Network, lots of development is happening in the field of Computer Vision. With generative adversarial networks as the baseline model, studied Stack GAN consisting of two-stage GANS step-by-step in this paper that could be easily understood. This paper presents visual comparative study of other models attempting to generate image conditioned on the text description. One sentence can be related to many images. And to achieve this multi-modal characteristic, conditioning augmentation is also performed. The performance of Stack-GAN is better in generating images from captions due to its unique architecture. As it consists of two GANS instead of one, it first draws a rough sketch and then corrects the defects yielding a high-resolution image.", "title": "" }, { "docid": "997993e389cdb1e40714e20b96927890", "text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. 
The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.", "title": "" }, { "docid": "7ffaedeabffcc9816d1eb83a4e4cdfd0", "text": "In this paper, we propose a new method for calculating the output layer in neural machine translation systems. The method is based on predicting a binary code for each word and can reduce computation time/memory requirements of the output layer to be logarithmic in vocabulary size in the best case. In addition, we also introduce two advanced approaches to improve the robustness of the proposed model: using error-correcting codes and combining softmax and binary codes. Experiments on two English ↔ Japanese bidirectional translation tasks show proposed models achieve BLEU scores that approach the softmax, while reducing memory usage to the order of less than 1/10 and improving decoding speed on CPUs by x5 to x10.", "title": "" }, { "docid": "693ce623f4f5b2cdd2eb6f4c45603524", "text": "Metabolomics is perhaps the ultimate level of post-genomic analysis as it can reveal changes in metabolite fluxes that are controlled by only minor changes within gene expression measured using transcriptomics and/or by analysing the proteome that elucidates post-translational control over enzyme activity. Metabolic change is a major feature of plant genetic modification and plant interactions with pathogens, pests, and their environment. In the assessment of genetically modified plant tissues, metabolomics has been used extensively to explore by-products resulting from transgene expression and scenarios of substantial equivalence. Many studies have concentrated on the physiological development of plant tissues as well as on the stress responses involved in heat shock or treatment with stress-eliciting molecules such as methyl jasmonic acid, yeast elicitor or bacterial lipopolysaccharide. Plant-host interactions represent one of the most biochemically complex and challenging scenarios that are currently being assessed by metabolomic approaches. For example, the mixtures of pathogen-colonised and non-challenged plant cells represent an extremely heterogeneous and biochemically rich sample; there is also the further complication of identifying which metabolites are derived from the plant host and which are from the interacting pathogen. This review will present an overview of the analytical instrumentation currently applied to plant metabolomic analysis, literature within the field will be reviewed paying particular regard to studies based on plant-host interactions and finally the future prospects on the metabolomic analysis of plants and plant-host interactions will be discussed.", "title": "" }, { "docid": "793a345ccc11a4054c52616070c64a4c", "text": "Kangaroo vehicle collisions are a serious problem threatening the safety of the drivers on Australian roads. It is estimated, according to a recent report by Australian Associated Motor Insurers, that there are around 20,000 kangaroo vehicle collisions during year 2015 in Australia. As a result, more than AU $75 million in insurance claims, and a number of animal and human severe injuries and fatalities have been reported. 
Despite how catastrophic these numbers are, yet a little research has been done in order to avoid or minimise the number of kangaroo vehicle collisions. In this work, we are focusing on the problem of recognising and detecting kangaroos in dynamic environments using a deep semantic segmentation convolutional neural network model. Our model is trained on a synthetic labelled depth images obtained using a simulated range sensor. Our approach records average recall value of over 93% in semantically segmenting any number of kangaroos in the generated testing dataset.", "title": "" }, { "docid": "9bcf45278e391a6ab9a0b33e93d82ea9", "text": "Non-orthogonal multiple access (NOMA) is a potential enabler for the development of 5G and beyond wireless networks. By allowing multiple users to share the same time and frequency, NOMA can scale up the number of served users, increase spectral efficiency, and improve user-fairness compared to existing orthogonal multiple access (OMA) techniques. While single-cell NOMA has drawn significant attention recently, much less attention has been given to multi-cell NOMA. This article discusses the opportunities and challenges of NOMA in a multi-cell environment. As the density of base stations and devices increases, inter-cell interference becomes a major obstacle in multi-cell networks. As such, identifying techniques that combine interference management approaches with NOMA is of great significance. After discussing the theory behind NOMA, this article provides an overview of the current literature and discusses key implementation and research challenges, with an emphasis on multi-cell NOMA.", "title": "" }, { "docid": "6f5a3f7ddb99eee445d342e6235280c3", "text": "Although aesthetic experiences are frequent in modern life, there is as of yet no scientifically comprehensive theory that explains what psychologically constitutes such experiences. These experiences are particularly interesting because of their hedonic properties and the possibility to provide self-rewarding cognitive operations. We shall explain why modern art's large number of individualized styles, innovativeness and conceptuality offer positive aesthetic experiences. Moreover, the challenge of art is mainly driven by a need for understanding. Cognitive challenges of both abstract art and other conceptual, complex and multidimensional stimuli require an extension of previous approaches to empirical aesthetics. We present an information-processing stage model of aesthetic processing. According to the model, aesthetic experiences involve five stages: perception, explicit classification, implicit classification, cognitive mastering and evaluation. The model differentiates between aesthetic emotion and aesthetic judgments as two types of output.", "title": "" }, { "docid": "de3f2ad88e3a99388975cc3da73e5039", "text": "Machine-learning techniques have recently been proved to be successful in various domains, especially in emerging commercial applications. As a set of machine-learning techniques, artificial neural networks (ANNs), requiring considerable amount of computation and memory, are one of the most popular algorithms and have been applied in a broad range of applications such as speech recognition, face identification, natural language processing, ect. Conventionally, as a straightforward way, conventional CPUs and GPUs are energy-inefficient due to their excessive effort for flexibility. 
According to the aforementioned situation, in recent years, many researchers have proposed a number of neural network accelerators to achieve high performance and low power consumption. Thus, the main purpose of this literature is to briefly review recent related works, as well as the DianNao-family accelerators. In summary, this review can serve as a reference for hardware researchers in the area of neural networks.", "title": "" }, { "docid": "0fcdd0dabb19ad2f45a5422caff6f8ff", "text": "Message transmission through internet as medium, is becoming increasingly popular. Hence issues like information security are becoming more relevant than earlier. This necessitates for a secure communication method to transmit messages via internet. Steganography is the science of communicating secret data in several multimedia carriers like audio, text, video or image. A modified technique to enhance the security of secret information over the network is presented in this paper. In this technique, we generate stegnokey with the help of slave image. Proposed technique provides multi-level secured message transmission. Experimental results show that the proposed technique is robust and maintains image quality. Index Terms – Steganography, Least-significant-bit (LSB) substitution, XORing pixel bits, master-slave image.", "title": "" }, { "docid": "5ad4560383ab74545c494ee722b1c57c", "text": "In this paper, a sub-dictionary based sparse coding method is proposed for image representation. The novel sparse coding method substitutes a new regularization item for L1-norm in the sparse representation model. The proposed sparse coding method involves a series of sub-dictionaries. Each sub-dictionary contains all the training samples except for those from one particular category. For the test sample to be represented, all the sub-dictionaries should linearly represent it apart from the one that does not contain samples from that label, and this sub-dictionary is called irrelevant sub-dictionary. This new regularization item restricts the sparsity of each sub-dictionary's residual, and this restriction is helpful for classification. The experimental results demonstrate that the proposed method is superior to the previous related sparse representation based classification.", "title": "" }, { "docid": "768a8cfff3f127a61f12139466911a94", "text": "The metabolism of NAD has emerged as a key regulator of cellular and organismal homeostasis. Being a major component of both bioenergetic and signaling pathways, the molecule is ideally suited to regulate metabolism and major cellular events. In humans, NAD is synthesized from vitamin B3 precursors, most prominently from nicotinamide, which is the degradation product of all NAD-dependent signaling reactions. The scope of NAD-mediated regulatory processes is wide including enzyme regulation, control of gene expression and health span, DNA repair, cell cycle regulation and calcium signaling. In these processes, nicotinamide is cleaved from NAD(+) and the remaining ADP-ribosyl moiety used to modify proteins (deacetylation by sirtuins or ADP-ribosylation) or to generate calcium-mobilizing agents such as cyclic ADP-ribose. 
This review will also emphasize the role of the intermediates in the NAD metabolome, their intra- and extra-cellular conversions and potential contributions to subcellular compartmentalization of NAD pools.", "title": "" }, { "docid": "740891e605079d61f6352129dc9cb6eb", "text": "Although KDD99 dataset is more than 15 years old, it is still widely used in academic research. To investigate wide usage of this dataset in Machine Learning Research (MLR) and Intrusion Detection Systems (IDS); this study reviews 149 research articles from 65 journals indexed in Science Citation Index Expanded and Emerging Sources Citation Index during the last six years (2010–2015). If we include papers presented in other indexes and conferences, number of studies would be tripled. The number of published studies shows that KDD99 is the most used dataset in IDS and machine learning areas, and it is the de facto dataset for these research areas. To show recent usage of KDD99 and the related sub-dataset (NSL-KDD) in IDS and MLR, the following descriptive statistics about the reviewed studies are given: main contribution of articles, the applied algorithms, compared classification algorithms, software toolbox usage, the size and type of the used dataset for training and testing, and classification output classes (binary, multi-class). In addition to these statistics, a checklist for future researchers that work in this area is provided.", "title": "" }, { "docid": "eb84749cd169d818aa8ee3ee7d96fbcb", "text": "The recovery of 3D tissue structure and morphology during robotic assisted surgery is an important step towards accurate deployment of surgical guidance and control techniques in minimally invasive therapies. In this article, we present a novel stereo reconstruction algorithm that propagates disparity information around a set of candidate feature matches. This has the advantage of avoiding problems with specular highlights, occlusions from instruments and view dependent illumination bias. Furthermore, the algorithm can be used with any feature matching strategy allowing the propagation of depth in very disparate views. Validation is provided for a phantom model with known geometry and this data is available online in order to establish a structured validation scheme in the field. The practical value of the proposed method is further demonstrated by reconstructions on various in vivo images of robotic assisted procedures, which are also available to the community.", "title": "" }, { "docid": "0f66b62ddfd89237bb62fb6b60a7551a", "text": "BACKGROUND\nClinicians' expanding use of cosmetic restorative procedures has generated greater interest in the determination of esthetic guidelines and standards. The overall esthetic impact of a smile can be divided into four specific areas: gingival esthetics, facial esthetics, microesthetics and macroesthetics. In this article, the authors focus on the principles of macroesthetics, which represents the relationships and ratios of relating multiple teeth to each other, to soft tissue and to facial characteristics.\n\n\nCASE DESCRIPTION\nThe authors categorize macroesthetic criteria based on two reference points: the facial midline and the amount and position of tooth reveal. The facial midline is a critical reference position for determining multiple design criteria. 
The amount and position of tooth reveal in various views and lip configurations also provide valuable guidelines in determining esthetic tooth positions and relationships.\n\n\nCLINICAL IMPLICATIONS\nEsthetics is an inherently subjective discipline. By understanding and applying simple esthetic rules, tools and strategies, dentists have a basis for evaluating natural dentitions and the results of cosmetic restorative procedures. Macroesthetic components of teeth and their relationship to each other can be influenced to produce more natural and esthetically pleasing restorative care.", "title": "" }, { "docid": "7be3d69a599d39042eafbb3dc28d5b18", "text": "The increasing pipeline depth, aggressive clock rates and execution width of modern processors require ever more accurate dynamic branch predictors to fully exploit their potential. Recent research on ahead pipelined branch predictors [11, 19] and branch predictors based on perceptrons [10, 11] have offered either increased accuracy or effective single cycle access times, at the cost of large hardware budgets and additional complexity in the branch predictor recovery mechanism. Here we show that a pipelined perceptron predictor can be constructed so that it has an effective latency of one cycle with a minimal loss of accuracy. We then introduce the concept of a precomputed local perceptron, which allows the use of both local and global history in an ahead pipelined perceptron. Both of these two techniques together allow this new perceptron predictor to match or exceed the accuracy of previous designs except at very small hardware budgets, and allow the elimination of most of the complexity in the rest of the pipeline associated with overriding predictors.", "title": "" }, { "docid": "b5b61c9bc2889ca7442d53a853bbe4ab", "text": "This paper presents a novel switching-converter-free ac–dc light-emitting diode (LED) driver with low-frequency-flicker reduction for general lighting applications. The proposed driving solution can minimize the system size as it enables the monolithic integration of the controller and power transistors while both the bulky off-chip electrolytic capacitors and magnetics are eliminated. Moreover, the driver can effectively reduce the harmful optical flicker at the double-line-frequency by employing a novel quasi-constant power control scheme while maintaining high efficiency and a good power factor (PF). The proposed driver is implemented with a single controller integrated circuit chip, which includes the controller and high-voltage power transistors, and the off-chip diode bridge and valley-fill circuit. The chip is fabricated with a 0.35- $\\mu \\text{m}$ 120-V high-voltage CMOS process and occupies 1.85 mm2. The driver can provide up to 7.8-W power to the LED and achieves 87.6% peak efficiency and an over 0.925 PF with only 17.3% flicker from a 110-Vac 60-Hz input.", "title": "" } ]
scidocsrr
05e2a894cae4006aeb0034cc997d1386
Pattern Discovery for Wide-Window Open Information Extraction in Biomedical Literature
[ { "docid": "6954b96b9ad84ccd069ed3944a980575", "text": "It is now almost 15 years since the publication of the first paper on text mining in the genomics domain, and decades since the first paper on text mining in the medical domain. Enormous progress has been made in the areas of information retrieval, evaluation methodologies and resource construction. Some problems, such as abbreviation-handling, can essentially be considered solved problems, and others, such as identification of gene mentions in text, seem likely to be solved soon. However, a number of problems at the frontiers of biomedical text mining continue to present interesting challenges and opportunities for great improvements and interesting research. In this article we review the current state of the art in biomedical text mining or 'BioNLP' in general, focusing primarily on papers published within the past year.", "title": "" }, { "docid": "835b74c546ba60dfbb62e804daec8521", "text": "The goal of Open Information Extraction (OIE) is to extract surface relations and their arguments from naturallanguage text in an unsupervised, domainindependent manner. In this paper, we propose MinIE, an OIE system that aims to provide useful, compact extractions with high precision and recall. MinIE approaches these goals by (1) representing information about polarity, modality, attribution, and quantities with semantic annotations instead of in the actual extraction, and (2) identifying and removing parts that are considered overly specific. We conducted an experimental study with several real-world datasets and found that MinIE achieves competitive or higher precision and recall than most prior systems, while at the same time producing shorter, semantically enriched extractions.", "title": "" } ]
[ { "docid": "b7aea71af6c926344286fbfa214c4718", "text": "Semantic segmentation is a task that covers most of the perception needs of intelligent vehicles in an unified way. ConvNets excel at this task, as they can be trained end-to-end to accurately classify multiple object categories in an image at the pixel level. However, current approaches normally involve complex architectures that are expensive in terms of computational resources and are not feasible for ITS applications. In this paper, we propose a deep architecture that is able to run in real-time while providing accurate semantic segmentation. The core of our ConvNet is a novel layer that uses residual connections and factorized convolutions in order to remain highly efficient while still retaining remarkable performance. Our network is able to run at 83 FPS in a single Titan X, and at more than 7 FPS in a Jetson TX1 (embedded GPU). A comprehensive set of experiments demonstrates that our system, trained from scratch on the challenging Cityscapes dataset, achieves a classification performance that is among the state of the art, while being orders of magnitude faster to compute than other architectures that achieve top precision. This makes our model an ideal approach for scene understanding in intelligent vehicles applications.", "title": "" }, { "docid": "627868e179aec6c5b807dc22da3258ed", "text": "As people integrate use of the cell phone into their lives, do they view it as just an update of the fixed telephone or assign it special values? This study explores that question in the framework of gratifications sought and their relationship both to differential cell phone use and to social connectedness. Based on a survey of Taiwanese college students, we found that the cell phone supplements the fixed telephone as a means of strengthening users’ family bonds, expanding their psychological neighborhoods, and facilitating symbolic proximity to the people they call. Thus, the cell phone has evolved from a luxury for businesspeople into an important facilitator of many users’ social relationships. For the poorly connected socially, the cell phone offers a unique advantage: it confers instant membership in a community. Finally, gender was found to mediate how users exploit the cell phone to maintain social ties.", "title": "" }, { "docid": "cb16e3091aa29f0c6e50e3d556822df9", "text": "A considerable amount of effort has been devoted to design a classifier in practical situations. In this paper, a simple nonparametric classifier based on the local mean vectors is proposed. The proposed classifier is compared with the 1-NN, k-NN, Euclidean distance (ED), Parzen, and artificial neural network (ANN) classifiers in terms of the error rate on the unknown patterns, particularly in small training sample size situations. Experimental results show that the proposed classifier is promising even in practical situations. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "94a35547a45c06a90f5f50246968b77e", "text": "In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. 
Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space.From the view of statistics, we consider pixel's value as a three-dimension stochastic variable and an image as a set of samples, so the correlations between three components can be measured by covariance. Our method imports covariance between three components of pixel values while calculate the mean along each of the three axes. Then we decompose the covariance matrix using SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift pixel data of target image to fit data points' cluster of source image in the current color space and get resultant image which takes on source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. Experimental results confirm the validity and usefulness of our method.", "title": "" }, { "docid": "ea2e03fc8e273e9d3627086ce4bd6bde", "text": "Augmented Reality (AR), a concept where the real word is being enhanced with computer generated objects and text, has evolved and become a popular tool to communicate information through. Research on how the technique can be optimized regarding the technical aspects has been made, but not regarding how typography in three dimensions should be designed and used in AR applications. Therefore this master’s thesis investigates three different design attributes of typography in three dimensions. The three attributes are: typeface style, color, and weight including depth, and how they affect the visibility of the text in an indoor AR environment. A user study was conducted, both with regular users but also with users that were considered experts in the field of typography and design, to investigate differences of the visibility regarding the typography’s design attributes. The result shows noteworthy differences between two pairs of AR simulations containing different typography among the regular users. This along with a slight favoritism of bright colored text against dark colored text, even though no notable different could be seen regarding color alone. Discussions regarding the design attributes of the typography affect the legibility of the text, and what could have been done differently to achieve an even more conclusive result. To summarize this thesis, the objective resulted in design guidelines regarding typography for indoor mobile AR applications. Skapande och användande av 3D-typografi i mobila Augmented Reality-applikationer för inomhusbruk", "title": "" }, { "docid": "f4d514a95cc4444dc1cbfdc04737ec75", "text": "Ultra-high speed data links such as 400GbE continuously push transceivers to achieve better performance and lower power consumption. This paper presents a highly parallelized TRX at 56Gb/s with integrated serializer/deserializer, FFE/CTLE/DFE, CDR, and eye-monitoring circuits. It achieves BER<10−12 under 24dB loss at 14GHz while dissipating 602mW of power.", "title": "" }, { "docid": "6abd94555aa69d5d27f75db272952a0e", "text": "Text recognition in images is an active research area which attempts to develop a computer application with the ability to automatically read the text from images. Nowadays there is a huge demand of storing the information available on paper documents in to a computer readable form for later use. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. 
However to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved are: font characteristics of the characters in paper documents and quality of the images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus, there is a need of character recognition mechanisms to perform document image analysis which transforms documents in paper format to electronic format. In this paper, we have reviewed and analyzed different methods for text recognition from images. The objective of this review paper is to summarize the well-known methods for better understanding of the reader.", "title": "" }, { "docid": "e7232201e629e45b1f8f9a49cb1fdedf", "text": "Semantic Data Mining refers to the data mining tasks that systematically incorporate domain knowledge, especially formal semantics, into the process. In the past, many research efforts have attested the benefits of incorporating domain knowledge in data mining. At the same time, the proliferation of knowledge engineering has enriched the family of domain knowledge, especially formal semantics and Semantic Web ontologies. Ontology is an explicit specification of conceptualization and a formal way to define the semantics of knowledge and data. The formal structure of ontology makes it a nature way to encode domain knowledge for the data mining use. In this survey paper, we introduce general concepts of semantic data mining. We investigate why ontology has the potential to help semantic data mining and how formal semantics in ontologies can be incorporated into the data mining process. We provide detail discussions for the advances and state of art of ontology-based approaches and an introduction of approaches that are based on other form of knowledge representations.", "title": "" }, { "docid": "3dc800707ecbbf0fed60e445cfe02fcc", "text": "We extend the method introduced by Cinzano et al. (2000a) to map the artificial sky brightness in large territories from DMSP satellite data, in order to map the naked eye star visibility and telescopic limiting magnitudes. For these purposes we take into account the altitude of each land area from GTOPO30 world elevation data, the natural sky brightness in the chosen sky direction, based on Garstang modelling, the eye capability with naked eye or a telescope, based on the Schaefer (1990) and Garstang (2000b) approach, and the stellar extinction in the visual photometric band. For near zenith sky directions we also take into account screening by terrain elevation. Maps of naked eye star visibility and telescopic limiting magnitudes are useful to quantify the capability of the population to perceive our Universe, to evaluate the future evolution, to make cross correlations with statistical parameters and to recognize areas where astronomical observations or popularisation can still acceptably be made. We present, as an application, maps of naked eye star visibility and total sky brightness in V band in Europe at the zenith with a resolution of approximately 1 km.", "title": "" }, { "docid": "5750ebcfd885097aeeef66582380c286", "text": "In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. 
During these experiments, subjective judgments of quality have been collected by two questionnaire methods (ITU-T Rec. P.851 and SASSI), and parameters describing the interaction have been logged and annotated. Both metrics served the derivation of prediction models according to the PARADISE approach. Although the limited database allows only tentative conclusions to be drawn, the results suggest that both questionnaire methods provide valid measurements of a large number of different quality aspects; most of the perceptive dimensions underlying the subjective judgments can also be measured with a high reliability. The extracted parameters mainly describe quality aspects which are directly linked to the system, environmental and task characteristics. Used as an input to prediction models, the parameters provide helpful information for system design and optimization, but not general predictions of system usability and acceptability. 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "706fb7e2635403662a6b75c410c9fa5b", "text": "Emphasizing the importance of cross-border effectiveness in the contemporary globalized world, we propose that cultural intelligence—the leadership capability to manage effectively in culturally diverse settings—is a critical leadership competency for those with cross-border responsibilities. We tested this hypothesis with multisource data, including multiple intelligences, in a sample of 126 Swiss military officers with both domestic and cross-border leadership responsibilities. Results supported our predictions: (1) general intelligence predicted both domestic and cross-border leadership effectiveness; (2) emotional intelligence was a stronger predictor of domestic leadership effectiveness, and (3) cultural intelligence was a stronger predictor of cross-border leadership effectiveness. Overall,", "title": "" }, { "docid": "fe94f4795d43572b27bbe27db5537e5c", "text": "Event-related desynchronization (ERD) 2.0 sec before and 1.0 sec after movement in the frequency bands of 8-10, 10-12, 12-20 and 20-30 Hz and movement-related cortical potentials (MRCPs) to self-paced movements were studied from subdural recordings over the central region in 3 patients, and from scalp-recorded EEGs in 20 normal volunteers. In direct cortical recordings, the peak ERD response and peak MRCP amplitude to self-paced finger movements were maximal over recording sites in the contralateral hand motor representations. The topography and time of onset of the ERD response to finger and foot movements suggest that the ERD responses in the 8-10 Hz and 10-12 Hz bands are more somatotopically restricted than the responses in the higher frequency bands. The power recovery and subsequent overshoot in the different frequency bands occurred in an orderly fashion with the faster frequencies recovering earlier. The ERD responses on the scalp-recorded EEGs were of lower magnitude and more widely distributed than those occurring on the subdural recordings. Across the population, there was no relation between the magnitude of the ERD response in any of the frequency bands studied and the peak amplitude of the negative slope (pNS') and the frontal peak of the motor potential (fpMP) of the MRCPs. 
MRCPs and ERD responses originate in similar cortical regions and share some common timing features, but the magnitude and spatial distribution of the two responses appear to be independent of each other, which suggests that the physiological mechanisms governing these two events are different and may represent different aspects of motor cortex activation. Differences in the timing and topographical features of the ERD responses in the various frequency bands also suggest a distinct functional significance for the various spectral components of the electrical activity in the motor cortex.", "title": "" }, { "docid": "c0fd60761aa1215c167b8bd7a35d6cb3", "text": "Digital forensics gained significant importance over the past decade, due to the increase in the number of information security incidents over this time period, but also due to the fact that our society is becoming more dependent on information technology. Performing a digital forensic investigation requires a standardised and formalised process to be followed. There is currently no international standard formalising the digital forensic investigation process, nor does a harmonised digital forensic investigation process exist that is acceptable in this field. This paper proposes a harmonised digital forensic investigation process model. The proposed model is an iterative and multi-tier model. The authors introduce the term “parallel actions”, defined as the principles which should be translated into actions within the digital forensic investigation process (i.e. principle that evidence's integrity must be preserved through the process and that chain of evidence must be preserved). The authors believe that the proposed model is comprehensive and that it harmonises existing state-of-the-art digital forensic investigation process models. Furthermore, we believe that the proposed model can lead to the standardisation of the digital forensic investigation process.", "title": "" }, { "docid": "fd03cf7e243571e9b3e81213fe91fd29", "text": "Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.", "title": "" }, { "docid": "daf63012a3603e5fd2fda4bdd693d010", "text": "Vertical selection is the task of predicting relevant verticals for a Web query so as to enrich the Web search results with complementary vertical results. We investigate a novel variant of this task, where the goal is to detect queries with a question intent. Specifically, we address queries for which the user would like an answer with a human touch. We call these CQA-intent queries, since answers to them are typically found in community question answering (CQA) sites. 
A typical approach in vertical selection is using a vertical’s specific language model of relevant queries and computing the query-likelihood for each vertical as a selective criterion. This works quite well for many domains like Shopping, Local and Travel. Yet, we claim that queries with CQA intent are harder to distinguish by modeling content alone, since they cover many different topics. We propose to also take the structure of queries into consideration, reasoning that queries with question intent have quite a different structure than other queries. We present a supervised classification scheme, random forest over word-clusters for variable length texts, which can model the query structure. Our experiments show that it substantially improves classification performance in the CQA-intent selection task compared to content-oriented based classification, especially as query length grows.", "title": "" }, { "docid": "fef66948f4f647f88cc3921366f45e49", "text": "Acoustic correlates of stress [duration, fundamental frequency (Fo), and intensity] were investigated in a language (Thai) in which both duration and Fo are employed to signal lexical contrasts. Stimuli consisted of 25 pairs of segmentally/tonally identical, syntactically ambiguous sentences. The first member of each sentence pair contained a two-syllable noun-verb sequence exhibiting a strong-strong (--) stress pattern, the second member a two-syllable noun compound exhibiting a weak-strong (--) stress pattern. Measures were taken of five prosodic dimensions of the rhyme portion of the target syllable: duration, average Fo, Fo standard deviation, average intensity, and intensity standard deviation. Results of linear regression indicated that duration is the predominant cue in signaling the distinction between stressed and unstressed syllables in Thai. Discriminant analysis showed a stress classification accuracy rate of over 99%. Findings are discussed in relation to the varying roles that Fo, intensity, and duration have in different languages given their phonological structure.", "title": "" }, { "docid": "5f9b360a0732d8c1d76d84b80216c5ae", "text": "NoSQL phenomenon has taken the database and IT application world by storm. Growth and penetration of NoSQL applications, driven by Silicon Valley giants like Facebook, Twitter, Yahoo, Google and LinkedIn, has created an unprecedented database revolution, to inspire smaller companies to join the NoSQL bandwagon. While expansion and growth of these databases are adding glory and success to many corporate IT departments, it is very pertinent to explore the security aspects of these new era databases. Confidentiality, integrity and availability (CIA) are the very foundation of data protection and privacy. In this paper a sincere attempt is made to survey, analyze and assess the maturity of NoSQL databases through the lens of CIA triad. While the concept of CIA has its origins in relational databases, it is very important to understand, survey and delineate the security capabilities of this new generation databases, in terms of CIA fulfillment.", "title": "" }, { "docid": "2bd5ca4cbb8ef7eea1f7b2762918d18b", "text": "Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. 
We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the uncertainty in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. For only a 2.8% increase in miss rate, we have succeeded in training a student network that is 8 times faster and 21 times smaller than the teacher network.", "title": "" }, { "docid": "9a28137d2acc2030205b4324dc21977a", "text": "Corona discharge generated by various electrode arrangements is commonly employed for several electrostatic applications, such as charging nonwoven fabrics for air filters and insulating granules in electrostatic separators. The aim of this paper is to analyze the effects of the presence of a grounded metallic shield in the proximity of a high-voltage corona electrode facing a grounded plate electrode. The metallic shield was found to increase the current intensity and decrease the inception voltage of the corona discharge generated by this electrode arrangement, both in the absence and in the presence of a layer of insulating particles at the surface of the plate electrode. With the shield, the current density measured at the surface of the collecting electrode is higher and distributed on a larger area. As a consequence, the charge acquired by millimeter-sized HDPE particles forming a monolayer at the surface of the grounded plate electrode is twice as high as in the absence of the shield. These experiments are discussed in relation with the results of the numerical analysis of the electric field generated by the wire-plate configuration with and without shield.", "title": "" } ]
scidocsrr
816ccb7d0c436422dc7a92de12cbda49
Design Guidelines for Gap Waveguide Technology Based on Glide-Symmetric Holey Structures
[ { "docid": "fbf2a211d53603cbcb7441db3006f035", "text": "This letter presents a new metamaterial-based waveguide technology referred to as ridge gap waveguides. The main advantages of the ridge gap waveguides compared to hollow waveguides are that they are planar and much cheaper to manufacture, in particular at high frequencies such as for millimeter and sub- millimeter waves. The latter is due to the fact that there are no mechanical joints across which electric currents must float. The gap waveguides have lower losses than microstrip lines, and they are completely shielded by metal so no additional packaging is needed, in contrast to the severe packaging problems associated with microstrip circuits. The gap waveguides are realized in a narrow gap between two parallel metal plates by using a texture or multilayer structure on one of the surfaces. The waves follow metal ridges in the textured surface. All wave propagation in other directions is prohibited (in cutoff) by realizing a high surface impedance (ideally a perfect magnetic conductor) in the textured surface at both sides of all ridges. Thereby, cavity resonances do not appear either within the band of operation. The present letter introduces the gap waveguide and presents some initial simulated results.", "title": "" } ]
[ { "docid": "f4df443de6ab0f50375f5b9e9461a27d", "text": "Deep neural perception and control networks have become key components of selfdriving vehicles. User acceptance is likely to benefit from easy-to-interpret visual and textual driving rationales which allow end-users to understand what triggered a particular behavior. Our approach involves two stages. In the first stage, we use visual (spatial) attention model to train a convolutional network end-to-end from images to steering angle commands. The attention model identifies image regions that potentially influence the network’s output. We then apply a causal filtering step to determine which input regions causally influence the vehicle’s control signal. In the second stage, we use a video-to-text language model to produce textual rationales that justify the model’s decision. The explanation generator uses a spatiotemporal attention mechanism, which is encouraged to match the controller’s attention.", "title": "" }, { "docid": "238c9f73acb34acf6e0d1cd8b7adaeaa", "text": "Psychology research reports that people tend to seek companionship with those who have a similar level of extraversion, and markers in dialogue show the speaker’s extraversion. Work in human-computer interaction seeks to understand creating and maintaining rapport between humans and ECAs. This study examines if humans will report greater rapport when interacting with an agent with an extraversion/introversion profile similar to their own. ECAs representing an extrovert and an introvert were created by manipulating three dialogue features. Using an informal, task-oriented setting, participants interacted with one of the agents in an immersive environment. Results suggest that subjects did not report the greatest rapport when interacting with the agent most similar to their level of extraversion. Introduction People often seek companionship with those who have a personality similar to their own [11]. There is evidence that personality types are borne out in dialogue choices [1]. Humans are uniquely physically capable of speech, and tend to find spoken communication as the most efficient and comfortable way to interact, including with technology [2]. They respond to computer personalities in the same way as they would to human personalities [10]. Recent research has sought to understand the nature of creating and maintaining rapport—a sense of emotional connection—when communicating with embodied conversational agents (ECAs) [2, 8, 14]. Successful ECAs could serve in a number of useful applications, from education to care giving. As human relationships are fundamentally social and emotional, these qualities must be incorporated into ECAs if human-agent relationships are to feel natural to users. Research has been focused on the development and maintenance of rapport felt by humans when interacting with an ECA [11] and in developing ECA personalities [3]. However, questions remain as to which agent personality is the best match for developing rapport in human-ECA interactions. In this study, two agents representing an extravert and an introvert were created by manipulating three dialogue features. Using a task-oriented but informal set-", "title": "" }, { "docid": "f575b371d01ad0af38ca83d4adde1eb5", "text": "Multiple-antenna systems, also known as multiple-input multiple-output radio, can improve the capacity and reliability of radio communication. However, the multiple RF chains associated with multiple antennas are costly in terms of size, power, and hardware. 
Antenna selection is a low-cost low-complexity alternative to capture many of the advantages of MIMO systems. This article reviews classic results on selection diversity, followed by a discussion of antenna selection algorithms at the transmit and receive sides. Extensions of classical results to antenna subset selection are presented. Finally, several open problems in this area are pointed out.", "title": "" }, { "docid": "3176b6784158149f41b5ff5b30164204", "text": "This paper focuses on the design of a tube-based Model Predictive Control law for the control of constrained mobile robots in off-road conditions with longitudinal slip while ensuring robustness and stability. A time-varying trajectory tracking error model is used, where uncertainties are assumed to be bounded and additive. The robust tube-based MPC is compared with other motion control techniques through simulation and physical experiments. These tests show the satisfactory behavior of the presented control strategy.", "title": "" }, { "docid": "2dc69fff31223cd46a0fed60264b2de1", "text": "The authors offer a framework for conceptualizing collective identity that aims to clarify and make distinctions among dimensions of identification that have not always been clearly articulated. Elements of collective identification included in this framework are self-categorization, evaluation, importance, attachment and sense of interdependence, social embeddedness, behavioral involvement, and content and meaning. For each element, the authors take note of different labels that have been used to identify what appear to be conceptually equivalent constructs, provide examples of studies that illustrate the concept, and suggest measurement approaches. Further, they discuss the potential links between elements and outcomes and how context moderates these relationships. The authors illustrate the utility of the multidimensional organizing framework by analyzing the different configuration of elements in 4 major theories of identification.", "title": "" }, { "docid": "bda1e2a1f27673dceed36adddfdc3e36", "text": "IEEE 802.11 WLANs are a very important technology to provide high speed wireless Internet access. Especially at airports, university campuses or in city centers, WLAN coverage is becoming ubiquitous leading to a deployment of hundreds or thousands of Access Points (AP). Managing and configuring such large WLAN deployments is a challenge. Current WLAN management protocols such as CAPWAP are hard to extend with new functionality. In this paper, we present CloudMAC, a novel architecture for enterprise or carrier grade WLAN systems. By partially offloading the MAC layer processing to virtual machines provided by cloud services and by integrating our architecture with OpenFlow, a software defined networking approach, we achieve a new level of flexibility and reconfigurability. In Cloud-MAC APs just forward MAC frames between virtual APs and IEEE 802.11 stations. The processing of MAC layer frames as well as the creation of management frames is handled at the virtual APs while the binding between the virtual APs and the physical APs is managed using OpenFlow. The testbed evaluation shows that CloudMAC achieves similar performance as normal WLANs, but allows novel services to be implemented easily in high level programming languages. 
The paper presents a case study which shows that dynamically switching off APs to save energy can be performed seamlessly with CloudMAC, while a traditional WLAN architecture causes large interruptions for users.", "title": "" }, { "docid": "4003b1a03be323c78e98650895967a07", "text": "In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.", "title": "" }, { "docid": "748eae887bcda0695cbcf1ba1141dd79", "text": "A wideband bandpass filter (BPF) with reconfigurable bandwidth (BW) is proposed based on a parallel-coupled line structure and a cross-shaped resonator with open stubs. The p-i-n diodes are used as the tuning elements, which can implement three reconfigurable BW states. The prototype of the designed filter reports an absolute BW tuning range of 1.22 GHz, while the fractional BW is varied from 34.8% to 56.5% when centered at 5.7 GHz. The simulation and measured results are in good agreement. Comparing with previous works, the proposed reconfigurable BPF features wider BW tuning range with maximum number of tuning states.", "title": "" }, { "docid": "ade2fd7f83a78a5a7d78c7e8286aeb18", "text": "We present a method for solving the independent set formulation of the graph coloring problem (where there is one variable for each independent set in the graph). We use a column generation method for implicit optimization of the linear program at each node of the branch-and-bound tree. This approach, while requiring the solution of a diicult subproblem as well as needing sophisticated branching rules, solves small to moderate size problems quickly. We have also implemented an exact graph coloring algorithm based on DSATUR for comparison. Implementation details and computational experience are presented.", "title": "" }, { "docid": "7a180e503a0b159d545047443524a05a", "text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. 
We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.", "title": "" }, { "docid": "e64f1f11ed113ca91094ef36eaf794a7", "text": "We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multicore machines. In order to be as hardwareagnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.", "title": "" }, { "docid": "35c8c5f950123154f4445b6c6b2399c2", "text": "Online social media have democratized the broadcasting of information, encouraging users to view the world through the lens of social networks. The exploitation of this lens, termed social sensing, presents challenges for researchers at the intersection of computer science and the social sciences.", "title": "" }, { "docid": "6c75e0532f637448cdec57bf30e76a4e", "text": "A wide range of machine learning problems, including astronomical inference about galaxy clusters, natural image scene classification, parametric statistical inference, and predictions of public opinion, can be well-modeled as learning a function on (samples from) distributions. This thesis explores problems in learning such functions via kernel methods. The first challenge is one of computational efficiency when learning from large numbers of distributions: the computation of typicalmethods scales between quadratically and cubically, and so they are not amenable to large datasets. We investigate the approach of approximate embeddings into Euclidean spaces such that inner products in the embedding space approximate kernel values between the source distributions. We present a new embedding for a class of information-theoretic distribution distances, and evaluate it and existing embeddings on several real-world applications. We also propose the integration of these techniques with deep learning models so as to allow the simultaneous extraction of rich representations for inputs with the use of expressive distributional classifiers. In a related problem setting, common to astrophysical observations, autonomous sensing, and electoral polling, we have the following challenge: when observing samples is expensive, but we can choose where we would like to do so, how do we pick where to observe? 
We propose the development of a method to do so in the distributional learning setting (which has a natural application to astrophysics), as well as giving a method for a closely related problem where we search for instances of patterns by making point observations. Our final challenge is that the choice of kernel is important for getting good practical performance, but how to choose a good kernel for a given problem is not obvious. We propose to adapt recent kernel learning techniques to the distributional setting, allowing the automatic selection of good kernels for the task at hand. Integration with deep networks, as previously mentioned, may also allow for learning the distributional distance itself. Throughout, we combine theoretical results with extensive empirical evaluations to increase our understanding of the methods.", "title": "" }, { "docid": "4765f21109d36fb2631325fd0442aeac", "text": "The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. The scientific investigation of behavioral processes by animal learning theory and economic utility theory has produced a theoretical framework that can help to elucidate the neural correlates for reward functions in learning, goal-directed approach behavior, and decision making under uncertainty. Individual neurons can be studied in the reward systems of the brain, including dopamine neurons, orbitofrontal cortex, and striatum. The neural activity can be related to basic theoretical terms of reward and uncertainty, such as contiguity, contingency, prediction error, magnitude, probability, expected value, and variance.", "title": "" }, { "docid": "5b15a833cb6b4d9dd56dea59edb02cf8", "text": "BACKGROUND\nQuantification of the biomechanical properties of each individual medial patellar ligament will facilitate an understanding of injury patterns and enhance anatomic reconstruction techniques by improving the selection of grafts possessing appropriate biomechanical properties for each ligament.\n\n\nPURPOSE\nTo determine the ultimate failure load, stiffness, and mechanism of failure of the medial patellofemoral ligament (MPFL), medial patellotibial ligament (MPTL), and medial patellomeniscal ligament (MPML) to assist with selection of graft tissue for anatomic reconstructions.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nTwenty-two nonpaired, fresh-frozen cadaveric knees were dissected free of all soft tissue structures except for the MPFL, MPTL, and MPML. Two specimens were ultimately excluded because their medial structure fibers were lacerated during dissection. The patella was obliquely cut to test the MPFL and the MPTL-MPML complex separately. To ensure that the common patellar insertion of the MPTL and MPML was not compromised during testing, only one each of the MPML and MPTL were tested per specimen (n = 10 each). Specimens were secured in a dynamic tensile testing machine, and the ultimate load, stiffness, and mechanism of failure of each ligament (MPFL = 20, MPML = 10, and MPTL = 10) were recorded.\n\n\nRESULTS\nThe mean ± SD ultimate load of the MPFL (178 ± 46 N) was not significantly greater than that of the MPTL (147 ± 80 N; P = .706) but was significantly greater than that of the MPML (105 ± 62 N; P = .001). 
The mean ultimate load of the MPTL was not significantly different from that of the MPML ( P = .210). Of the 20 MPFLs tested, 16 failed by midsubstance rupture and 4 by bony avulsion on the femur. Of the 10 MPTLs tested, 9 failed by midsubstance rupture and 1 by bony avulsion on the patella. Finally, of the 10 MPMLs tested, all 10 failed by midsubstance rupture. No significant difference was found in mean stiffness between the MPFL (23 ± 6 N/mm2) and the MPTL (31 ± 21 N/mm2; P = .169), but a significant difference was found between the MPFL and the MPML (14 ± 8 N/mm2; P = .003) and between the MPTL and MPML ( P = .028).\n\n\nCONCLUSION\nThe MPFL and MPTL had comparable ultimate loads and stiffness, while the MPML had lower failure loads and stiffness. Midsubstance failure was the most common type of failure; therefore, reconstruction grafts should meet or exceed the values reported herein.\n\n\nCLINICAL RELEVANCE\nFor an anatomic medial-sided knee reconstruction, the individual biomechanical contributions of the medial patellar ligamentous structures (MPFL, MPTL, and MPML) need to be characterized to facilitate an optimal reconstruction design.", "title": "" }, { "docid": "03c4e98d0945c9fcd5f8ded1129ce0ff", "text": "On the basis of the proposition that love promotes commitment, the authors predicted that love would motivate approach, have a distinct signal, and correlate with commitment-enhancing processes when relationships are threatened. The authors studied romantic partners and adolescent opposite-sex friends during interactions that elicited love and threatened the bond. As expected, the experience of love correlated with approach-related states (desire, sympathy). Providing evidence for a nonverbal display of love, four affiliation cues (head nods, Duchenne smiles, gesticulation, forward leans) correlated with self-reports and partner estimates of love. Finally, the experience and display of love correlated with commitment-enhancing processes (e.g., constructive conflict resolution, perceived trust) when the relationship was threatened. Discussion focused on love, positive emotion, and relationships.", "title": "" }, { "docid": "9de7af8824594b5de7d510c81585c61b", "text": "The adoption of business process improvement strategies is a challenge to organizations trying to improve the quality and productivity of their services. The quest for the benefits of this improvement on resource optimization and the responsiveness of the organizations has raised several proposals for business process improvement approaches. However, proposals and results of scientific research on process improvement in higher education institutions, extremely complex and unique organizations, are still scarce. This paper presents a method that provides guidance about how practices and knowledge are gathered to contribute for business process improvement based on the communication between different stakeholders.", "title": "" }, { "docid": "28f220e88b9b2947c8203d83210f77d0", "text": "Designers frequently draw curvature lines to convey bending of smooth surfaces in concept sketches. We present a method to extrapolate curvature lines in a rough concept sketch, recovering the intended 3D curvature field and surface normal at each pixel of the sketch. 
This 3D information allows to enrich the sketch with 3D-looking shading and texturing.\n We first introduce the concept of regularized curvature lines that model the lines designers draw over curved surfaces, encompassing curvature lines and their extension as geodesics over flat or umbilical regions. We build on this concept to define the orthogonal cross field that assigns two regularized curvature lines to each point of a 3D surface. Our algorithm first estimates the projection of this cross field in the drawing, which is nonorthogonal due to foreshortening. We formulate this estimation as a scattered interpolation of the strokes drawn in the sketch, which makes our method robust to sketchy lines that are typical for design sketches. Our interpolation relies on a novel smoothness energy that we derive from our definition of regularized curvature lines. Optimizing this energy subject to the stroke constraints produces a dense nonorthogonal 2D cross field which we then lift to 3D by imposing orthogonality. Thus, one central concept of our approach is the generalization of existing cross field algorithms to the nonorthogonal case.\n We demonstrate our algorithm on a variety of concept sketches with various levels of sketchiness. We also compare our approach with existing work that takes clean vector drawings as input.", "title": "" }, { "docid": "b99b7028ecfac3d52d3fa2264195edb0", "text": "Computers still struggle to understand the interdependency of objects in the scene as a whole, e.g., relations between objects or their attributes. Existing methods often ignore global context cues capturing the interactions among different object instances, and can only recognize a handful of types by exhaustively training individual detectors for all possible relationships. To capture such global interdependency, we propose a deep Variation-structured Re-inforcement Learning (VRL) framework to sequentially discover object relationships and attributes in the whole image. First, a directed semantic action graph is built using language priors to provide a rich and compact representation of semantic correlations between object categories, predicates, and attributes. Next, we use a variation-structured traversal over the action graph to construct a small, adaptive action set for each step based on the current state and historical actions. In particular, an ambiguity-aware object mining scheme is used to resolve semantic ambiguity among object categories that the object detector fails to distinguish. We then make sequential predictions using a deep RL framework, incorporating global context cues and semantic embeddings of previously extracted phrases in the state vector. Our experiments on the Visual Relationship Detection (VRD) dataset and the large-scale Visual Genome dataset validate the superiority of VRL, which can achieve significantly better detection results on datasets involving thousands of relationship and attribute types. We also demonstrate that VRL is able to predict unseen types embedded in our action graph by learning correlations on shared graph nodes.", "title": "" }, { "docid": "976aee37c264dbf53b7b1fbbf0d583c4", "text": "This paper applies Halliday's (1994) theory of the interpersonal, ideational and textual meta-functions of language to conceptual metaphor. Starting from the observation that metaphoric expressions tend to be organized in chains across texts, the question is raised what functions those expressions serve in different parts of a text as well as in relation to each other. 
The empirical part of the article consists of the sample analysis of a business magazine text on marketing. This analysis is two-fold, integrating computer-assisted quantitative investigation with qualitative research into the organization and multifunctionality of metaphoric chains as well as the cognitive scenarios evolving from those chains. The paper closes by summarizing the main insights along the lines of the three Hallidayan meta-functions of conceptual metaphor and suggesting functional analysis of metaphor at levels beyond that of text. Im vorliegenden Artikel wird Hallidays (1994) Theorie der interpersonellen, ideellen und textuellen Metafunktion von Sprache auf das Gebiet der konzeptuellen Metapher angewandt. Ausgehend von der Beobachtung, dass metaphorische Ausdrücke oft in textumspannenden Ketten angeordnet sind, wird der Frage nachgegangen, welche Funktionen diese Ausdrücke in verschiedenen Teilen eines Textes und in Bezug aufeinander erfüllen. Der empirische Teil der Arbeit besteht aus der exemplarischen Analyse eines Artikels aus einem Wirtschaftsmagazin zum Thema Marketing. Diese Analysis gliedert sich in zwei Teile und verbindet computergestütze quantitative Forschung mit einer qualitativen Untersuchung der Anordnung und Multifunktionalität von Metaphernketten sowie der kognitiven Szenarien, die aus diesen Ketten entstehen. Der Aufsatz schließt mit einer Zusammenfassung der wesentlichen Ergebnisse im Licht der Hallidayschen Metafunktionen konzeptueller Metaphern und gibt einen Ausblick auf eine funktionale Metaphernanalyse, die über die rein textuelle Ebene hinausgeht.", "title": "" } ]
scidocsrr
ffc702545a38626c462dc8784daa5964
A Survey on License Plate Recognition Systems
[ { "docid": "b4316fcbc00b285e11177811b61d2b99", "text": "Automatic license plate recognition (ALPR) is one of the most important aspects of applying computer techniques towards intelligent transportation systems. In order to recognize a license plate efficiently, however, the location of the license plate, in most cases, must be detected in the first place. Due to this reason, detecting the accurate location of a license plate from a vehicle image is considered to be the most crucial step of an ALPR system, which greatly affects the recognition rate and speed of the whole system. In this paper, a region-based license plate detection method is proposed. In this method, firstly, mean shift is used to filter and segment a color vehicle image in order to get candidate regions. These candidate regions are then analyzed and classified in order to decide whether a candidate region contains a license plate. Unlike other existing license plate detection methods, the proposed method focuses on regions, which demonstrates to be more robust to interference characters and more accurate when compared with other methods.", "title": "" }, { "docid": "a8266df9a468b884a12c0ddc2706c26c", "text": "Detecting the region of a license plate is the key component of the vehicle license plate recognition (VLPR) system. A new method is adopted in this paper to analyze road images which often contain vehicles and extract LP from natural properties by finding vertical and horizontal edges from vehicle region. The proposed vehicle license plate detection (VLPD) method consists of three main stages: (1) a novel adaptive image segmentation technique named as sliding concentric windows (SCWs) used for detecting candidate region; (2) color verification for candidate region by using HSI color model on the basis of using hue and intensity in HSI color model verifying green and yellow LP and white LP, respectively; and (3) finally, decomposing candidate region which contains predetermined LP alphanumeric character by using position histogram to verify and detect vehicle license plate (VLP) region. In the proposed method, input vehicle images are commuted into grey images. Then the candidate regions are found by sliding concentric windows. We detect VLP region which contains predetermined LP color by using HSI color model and LP alphanumeric character by using position histogram. Experimental results show that the proposed method is very effective in coping with different conditions such as poor illumination, varied distances from the vehicle and varied weather.", "title": "" } ]
[ { "docid": "ad58798807256cff2eff9d3befaf290a", "text": "Centrality indices are an essential concept in network analysis. For those based on shortest-path distances the computation is at least quadratic in the number of nodes, since it usually involves solving the single-source shortest-paths (SSSP) problem from every node. Therefore, exact computation is infeasible for many large networks of interest today. Centrality scores can be estimated, however, from a limited number of SSSP computations. We present results from an experimental study of the quality of such estimates under various selection strategies for the source vertices. ∗Research supported in part by DFG under grant Br 2158/2-3", "title": "" }, { "docid": "8b4243851ffaf5a673a5dbbb9ec34094", "text": "Proposed cache compression schemes make design-time assumptions on value locality to reduce decompression latency. For example, some schemes assume that common values are spatially close whereas other schemes assume that null blocks are common. Most schemes, however, assume that value locality is best exploited by fixed-size data types (e.g., 32-bit integers). This assumption falls short when other data types, such as floating-point numbers, are common. This paper makes two contributions. First, HyComp -- a hybrid cache compression scheme -- selects the best-performing compression scheme, based on heuristics that predict data types. Data types considered are pointers, integers, floating-point numbers and the special (and trivial) case of null blocks. Second, this paper contributes with a compression method that exploits value locality in data types with predefined semantic value fields, e.g., as in the exponent and the mantissa in floating-point numbers. We show that HyComp, augmented with the proposed floating-point-number compression method, offers superior performance in comparison with prior art.", "title": "" }, { "docid": "f77f82146a0421ff7fccb33d1ce04ef6", "text": "This paper introduces a new large-scale n-gram corpus that is created specifically from social media text. Two distinguishing characteristics of this corpus are its monthly temporal attribute and that it is created from 1.65 billion comments of user-generated text in Reddit. The usefulness of this corpus is exemplified and evaluated by a novel Topic-based Latent Semantic Analysis (TLSA) algorithm. The experimental results show that unsupervised TLSA outperforms all the state-of-the-art unsupervised and semi-supervised methods in SEMEVAL 2015: paraphrase and semantic similarity in Twitter tasks.", "title": "" }, { "docid": "3e6e72747036ca7255b449f4c93e15f7", "text": "In this paper a planar antenna is studied for ultrawide-band (UWB) applications. This antenna consists of a wide-band tapered-slot feeding structure, curved radiators and a parasitic element. It is a modification of the conventional dual exponential tapered slot antenna and can be viewed as a printed dipole antenna with tapered slot feed. The design guideline is introduced, and the antenna parameters including return loss, radiation patterns and gain are investigated. To demonstrate the applicability of the proposed antenna to UWB applications, the transfer functions of a transmitting-receiving system with a pair of identical antennas are measured. 
Transient waveforms as the transmitting-receiving system being excited by a simulated pulse are discussed at the end of this paper.", "title": "" }, { "docid": "52db010b2fa3ddcbfb73309705006d42", "text": "Recent work in cognitive psychology and social cognition bears heavily on concerns of sociologists of culture. Cognitive research confirms views of culture as fragmented; clarifies the roles of institutions and agency; and illuminates supraindividual aspects of culture. Individuals experience culture as disparate bits of information and as schematic structures that organize that information. Culture carried by institutions, networks, and social movements diffuses, activates, and selects among available schemata. Implications for the study of identity, collective memory, social classification, and logics of action are developed.", "title": "" }, { "docid": "eea288f275b0eab62dddd64a469a2d63", "text": "Glucose control serves as the primary method of diabetes management. Current digital therapeutic approaches for subjects with Type 1 diabetes mellitus (T1DM) such as the artificial pancreas and bolus calculators leverage machine learning techniques for predicting subcutaneous glucose for improved control. Deep learning has recently been applied in healthcare and medical research to achieve state-of-the-art results in a range of tasks including disease diagnosis, and patient state prediction among others. In this work, we present a deep learning model that is capable of predicting glucose levels over a 30-minute horizon with leading accuracy for simulated patient cases (RMSE = 10.02±1.28 [mg/dl] and MARD = 5.95±0.64%) and real patient cases (RMSE = 21.23±1.15 [mg/dl] and MARD = 10.53±1.28%). In addition, the model also provides competitive performance in forecasting adverse glycaemic events with minimal time lag both in a simulated patient dataset (MCChyperglycaemia = 0.82±0.06 and MCChypoglycaemia = 0.76±0.13) and in a real patient dataset (MCChyperglycaemia = 0.79±0.04 and MCChypoglycaemia = 0.28±0.11). This approach is evaluated on a dataset of 10 simulated cases generated from the UVa/Padova simulator and a clinical dataset of 5 real cases each containing glucose readings, insulin bolus, and meal (carbohydrate) data. Performance of the recurrent convolutional neural network is benchmarked against four state-of-the-art algorithms: support vector regression (SVR), latent variable (LVX) model, autoregressive model (ARX), and neural network for predicting glucose algorithm (NNPG).", "title": "" }, { "docid": "345b548c56261f1bf54b6b94b8060396", "text": "We present a method for tracking a hand while it is interacting with an object. This setting is arguably the one where hand-tracking has most practical relevance, but poses significant additional challenges: strong occlusions by the object as well as self-occlusions are the norm, and classical anatomical constraints need to be softened due to the external forces between hand and object. To achieve robustness to partial occlusions, we use an individual local tracker for each segment of the articulated structure. The segments are connected in a pairwise Markov random field, which enforces the anatomical hand structure through soft constraints on the joints between adjacent segments. The most likely hand configuration is found with belief propagation. Both range and color data are used as input. 
Experiments are presented for synthetic data with ground truth and for real data of people manipulating objects.", "title": "" }, { "docid": "81ddc594cb4b7f3ed05908ce779aa4f4", "text": "Since the length of microblog texts, such as tweets, is strictly limited to 140 characters, traditional Information Retrieval techniques suffer from the vocabulary mismatch problem severely and cannot yield good performance in the context of microblogosphere. To address this critical challenge, in this paper, we propose a new language modeling approach for microblog retrieval by inferring various types of context information. In particular, we expand the query using knowledge terms derived from Freebase so that the expanded one can better reflect users’ search intent. Besides, in order to further satisfy users’ real-time information need, we incorporate temporal evidences into the expansion method, which can boost recent tweets in the retrieval results with respect to a given topic. Experimental results on two official TREC Twitter corpora demonstrate the significant superiority of our approach over baseline methods.", "title": "" }, { "docid": "899e96eacd2c73730c157056c56eea25", "text": "Hyaluronic acid (HA), a macropolysaccharidic component of the extracellular matrix, is common to most species and it is found in many sites of the human body, including skin and soft tissue. Not only does HA play a variety of roles in physiologic and in pathologic events, but it also has been extensively employed in cosmetic and skin-care products as drug delivery agent or for several biomedical applications. The most important limitations of HA are due to its short half-life and quick degradation in vivo and its consequently poor bioavailability. In the aim to overcome these difficulties, HA is generally subjected to several chemical changes. In this paper we obtained an acetylated form of HA with increased bioavailability with respect to the HA free form. Furthermore, an improved radical scavenging and anti-inflammatory activity has been evidenced, respectively, on ABTS radical cation and murine monocyte/macrophage cell lines (J774.A1).", "title": "" }, { "docid": "73e15b7f555a105dcc97471b14637e01", "text": "Cognitive radio is an exciting emerging technology that has the potential of dealing with the stringent requirement and scarcity of the radio spectrum. Such revolutionary and transforming technology represents a paradigm shift in the design of wireless systems, as it will allow the agile and efficient utilization of the radio spectrum by offering distributed terminals or radio cells the ability of radio sensing, self-adaptation, and dynamic spectrum sharing. Cooperative communications and networking is another new communication technology paradigm that allows distributed terminals in a wireless network to collaborate through some distributed transmission or signal processing so as to realize a new form of space diversity to combat the detrimental effects of fading channels. In this paper, we consider the application of these technologies to spectrum sensing and spectrum sharing. One of the most important challenges for cognitive radio systems is to identify the presence of primary (licensed) users over a wide range of spectrum at a particular time and specific geographic location. We consider the use of cooperative spectrum sensing in cognitive radio systems to enhance the reliability of detecting primary users. 
We shall describe spectrum sensing for cognitive radios and propose robust cooperative spectrum sensing techniques for a practical framework employing cognitive radios. We also investigate cooperative communications for spectrum sharing in a cognitive wireless relay network. To exploit the maximum spectrum opportunities, we present a cognitive space-time-frequency coding technique that can opportunistically adjust its coding structure by adapting itself to the dynamic spectrum environment.", "title": "" }, { "docid": "508fb3c75f0d92ae27b9c735c02d66d6", "text": "The remarkable developmental potential and replicative capacity of human embryonic stem (ES) cells promise an almost unlimited supply of specific cell types for transplantation therapies. Here we describe the in vitro differentiation, enrichment, and transplantation of neural precursor cells from human ES cells. Upon aggregation to embryoid bodies, differentiating ES cells formed large numbers of neural tube–like structures in the presence of fibroblast growth factor 2 (FGF-2). Neural precursors within these formations were isolated by selective enzymatic digestion and further purified on the basis of differential adhesion. Following withdrawal of FGF-2, they differentiated into neurons, astrocytes, and oligodendrocytes. After transplantation into the neonatal mouse brain, human ES cell–derived neural precursors were incorporated into a variety of brain regions, where they differentiated into both neurons and astrocytes. No teratoma formation was observed in the transplant recipients. These results depict human ES cells as a source of transplantable neural precursors for possible nervous system repair.", "title": "" }, { "docid": "3a86f1f91cfaa398a03a56abb34f497c", "text": "We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as nonoverlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, for example, line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation to approximate generalized blue noise properties. To generate these samples with the desired properties, we first construct a set of nonoverlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach that combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. 
The resulting samples have nice sampling properties, for example, they satisfy a blue noise property where low frequencies in the power spectrum are reduced to a minimum..", "title": "" }, { "docid": "8a2f40f2a0082fae378c7907a60159ac", "text": "We present a novel graph-based neural network model for relation extraction. Our model treats multiple pairs in a sentence simultaneously and considers interactions among them. All the entities in a sentence are placed as nodes in a fully-connected graph structure. The edges are represented with position-aware contexts around the entity pairs. In order to consider different relation paths between two entities, we construct up to l-length walks between each pair. The resulting walks are merged and iteratively used to update the edge representations into longer walks representations. We show that the model achieves performance comparable to the state-ofthe-art systems on the ACE 2005 dataset without using any external tools.", "title": "" }, { "docid": "1ff4d4588826459f1d8d200d658b9907", "text": "BACKGROUND\nHealth promotion organizations are increasingly embracing social media technologies to engage end users in a more interactive way and to widely disseminate their messages with the aim of improving health outcomes. However, such technologies are still in their early stages of development and, thus, evidence of their efficacy is limited.\n\n\nOBJECTIVE\nThe study aimed to provide a current overview of the evidence surrounding consumer-use social media and mobile software apps for health promotion interventions, with a particular focus on the Australian context and on health promotion targeted toward an Indigenous audience. Specifically, our research questions were: (1) What is the peer-reviewed evidence of benefit for social media and mobile technologies used in health promotion, intervention, self-management, and health service delivery, with regard to smoking cessation, sexual health, and otitis media? and (2) What social media and mobile software have been used in Indigenous-focused health promotion interventions in Australia with respect to smoking cessation, sexual health, or otitis media, and what is the evidence of their effectiveness and benefit?\n\n\nMETHODS\nWe conducted a scoping study of peer-reviewed evidence for the effectiveness of social media and mobile technologies in health promotion (globally) with respect to smoking cessation, sexual health, and otitis media. A scoping review was also conducted for Australian uses of social media to reach Indigenous Australians and mobile apps produced by Australian health bodies, again with respect to these three areas.\n\n\nRESULTS\nThe review identified 17 intervention studies and seven systematic reviews that met inclusion criteria, which showed limited evidence of benefit from these interventions. We also found five Australian projects with significant social media health components targeting the Indigenous Australian population for health promotion purposes, and four mobile software apps that met inclusion criteria. No evidence of benefit was found for these projects.\n\n\nCONCLUSIONS\nAlthough social media technologies have the unique capacity to reach Indigenous Australians as well as other underserved populations because of their wide and instant disseminability, evidence of their capacity to do so is limited. Current interventions are neither evidence-based nor widely adopted. 
Health promotion organizations need to gain a more thorough understanding of their technologies, who engages with them, why they engage with them, and how, in order to be able to create successful social media projects.", "title": "" }, { "docid": "14dc4a684d4c9ea310ae8b8b47dee3f6", "text": "Computational models in psychology are precise, fully explicit scientific hypotheses. Over the past 15 years, probabilistic modeling of human cognition has yielded quantitative theories of a wide variety of reasoning and learning phenomena. Recently, Marcus and Davis (2013) critique several examples of this work, using these critiques to question the basic validity of the probabilistic approach. Contra the broad rhetoric of their article, the points made by Marcus and Davis—while useful to consider—do not indicate systematic problems with the probabilistic modeling enterprise. Relevant and robust 3 Computational models in psychology are precise, fully explicit scientific hypotheses. Probabilistic models in particular formalize hypotheses about the beliefs of agents—their knowledge and assumptions about the world—using the structured collection of probabilities referred to as priors, likelihoods, etc. The probability calculus then describes inferences that can be drawn by combining these beliefs with new evidence, without the need to commit to a process-level explanation of how these inferences are performed (Marr, 1982). Over the past 15 years, probabilistic modeling of human cognition has yielded quantitative theories of a wide variety of phenomena (Tenenbaum, Kemp, Griffiths, & Goodman, 2011). Marcus and Davis (2013, henceforth, M&D) critique several examples of this work, using these critiques to question the basic validity of the probabilistic models approach, based on the existence of alternative models and potentially inconsistent data. Contra the broad rhetoric of their article, the points made by M&D—while useful to consider—do not indicate systematic problems with the probabilistic modeling enterprise. Several objections stem from a fundamental confusion about the status of optimality in probabilistic modeling, which has been discussed in responses to other critiques (see: Griffiths, Chater, Norris, & Pouget, 2012; Frank, 2013). Briefly: an optimal analysis is not the optimal analysis for a task or domain. Different probabilistic models instantiate different psychological hypotheses. Optimality provides a bridging assumption between these hypotheses and human behavior; one that can be re-examined or overturned as the data warrant. Model selection. M&D argue that individual probabilistic models require a host of potentially problematic modeling choices. Indeed, probabilistic models are created via a series of choices concerning priors, likelihoods, response functions, etc. Each of these choices embodies a proposal about cognition, and these proposals will often be wrong. The Relevant and robust 4 identification of model assumptions that result in a mismatch to empirical data allows these assumptions to be replaced or refined. Systematic iteration to achieve a better model is part of the normal progress of science. But if choices are made post-hoc, a model can be overfit to the particulars of the empirical data. M&D suggest that certain of our models suffer from this issue. 
For instance, they show that data on pragmatic inference (Frank & Goodman, 2012) are inconsistent with an alternative variant of the proposed model that uses a hard-max rather than a soft-max function, and ask whether the choice of soft-max was dependent on the data. The soft-max rule is foundational in economics, decision-theory, and cognitive psychology (Luce, 1959, 1977), and we first selected it for this problem based on a completely independent set of experiments (Frank, Goodman, Lai, & Tenenbaum, 2009). So it’s hard to see how a claim of overfitting is warranted here. Modelers must balance unification with exploration of model assumptions across tasks, but this issue is a general one for all computational work, and does not constitute a systematic problem with the probabilistic approach. Task selection. M&D suggested that probabilistic modelers report results on only the narrow range of tasks on which their models succeed. But their critique focused on a few high-profile, short reports that represented our first attempts to engage with important domains of cognition. Such papers necessarily have less in-depth engagement with empirical data than more extensive and mature work, though they also exemplify the applicability of probabilistic modeling to domains previously viewed as too complex for quantitative approaches. There is broader empirical adequacy to probabilistic models of cognition than M&D imply. If M&D had surveyed the literature they would have found substantial additional Relevant and robust 5 evidence for the models they reviewed—and more has accrued since their critique. For example, M&D critiqued Griffiths and Tenenbaum’s (2006) analysis of everyday predictions for failing to provide independent assessments of the contributions of priors and likelihoods, precisely what was done in several later and much longer papers (Griffiths & Tenenbaum, 2011; Lewandowsky, Griffiths, & Kalish, 2009). They similarly critiqued the particular tasks selected by Battaglia, Hamrick, and Tenenbaum (2013) without discussing the growing literature testing similar “noisy Newtonian” models on other phenomena (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2012; Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2014; Sanborn, Mansinghka, & Griffiths, 2013; Smith, Dechter, Tenenbaum, & Vul, 2013; Téglás et al., 2011). Smith, Battaglia, and Vul (2013) even directly address exactly the challenge M&D posed regarding classic findings of errors in physical intuitions. In other domains, such as concept learning and inductive inference, where there is an extensive experimental tradition, probabilistic models have engaged with diverse empirical data collected by multiple labs over many years (e.g. Goodman, Tenenbaum, Feldman, & Griffiths, 2008; Kemp & Tenenbaum, 2009). M&D also insinuate empirical problems that they do not test. For instance, in criticizing the choice of dependent measure used by Frank and Goodman (2012), they posit that a forced-choice task would yield a qualitatively different pattern (discrete rather than graded responding). In fact, a forced-choice version of the task produces graded patterns of responding across a wide variety of conditions (Stiller, Goodman, & Frank, 2011, 2014; Vogel, Emilsson, Frank, Jurafsky, & Potts, 2014). Conclusions. We agree with M&D that there are real and important challenges for probabilistic models of cognition, as there will be for any approach to modeling a system as complex as the human mind. 
To us, the most pressing challenges include understanding the Relevant and robust 6 relationship to lower levels of psychological analysis and neural implementation, integrating additional formal tools, clarifying the philosophical status of the models, extending to new domains of cognition, and, yes: engaging with additional empirical data in the current domains while unifying specific model choices into broader principles. As M&D state, “ultimately, the Bayesian approach should be seen as a useful tool”—one that we believe has already proven its robustness and relevance by allowing us to form and test quantitatively accurate psychological hypotheses. Relevant and robust 7", "title": "" }, { "docid": "4fb27373155b20702a02ad814a4e9b61", "text": "Sanskrit since many thousands of years has been the oriental language of India. It is the base for most of the Indian Languages. Ambiguity is inherent in the Natural Language sentences. Here, one word can be used in multiple senses. Morphology process takes word in isolation and fails to disambiguate correct sense of a word. Part-Of-Speech Tagging (POST) takes word sequences in to consideration to resolve the correct sense of a word present in the given sentence. Efficient POST have been developed for processing of English, Japanese, and Chinese languages but it is lacking for Indian languages. In this paper our work present simple rule-based POST for Sanskrit language. It uses rule based approach to tag each word of the sentence. These rules are stored in the database. It parses the given Sanskrit sentence and assigns suitable tag to each word automatically. We have tested this approach for 15 tags and 100 words of the language this rule based tagger gives correct tags for all the inflected words in the given sentence.", "title": "" }, { "docid": "301bc00e99607569dcba6317ebb2f10d", "text": "Bandwidth and gain enhancement of microstrip patch antennas (MPAs) is proposed using reflective metasurface (RMS) as a superstrate. Two different types of the RMS, namelythe double split-ring resonator (DSR) and double closed-ring resonator (DCR) are separately investigated. The two antenna prototypes were manufactured, measured and compared. The experimental results confirm that the RMS loaded MPAs achieve high-gain as well as bandwidth improvement. The desinged antenna using the RMS as a superstrate has a high-gain of over 9.0 dBi and a wide impedance bandwidth of over 13%. The RMS is also utilized to achieve a thin antenna with a cavity height of 6 mm, which is equivalent to λ/21 at the center frequency of 2.45 GHz. At the same time, the cross polarization level and front-to-back ratio of these antennas are also examined. key words: wideband, high-gain, metamaterial, Fabry-Perot cavity (FPC), frequency selective surface (FSS)", "title": "" }, { "docid": "e7b9c3ef571770788cd557f8c4843bcf", "text": "Different efforts have been done to address the problem of information overload on the Internet. Recommender systems aim at directing users through this information space, toward the resources that best meet their needs and interests by extracting knowledge from the previous users’ interactions. In this paper, we propose an algorithm to solve the web page recommendation problem. In our algorithm, we use distributed learning automata to learn the behavior of previous users’ and recommend pages to the current user based on learned pattern. 
Our experiments on a real data set show that the proposed algorithm performs better than the algorithms it was compared against and, at the same time, is less complex with respect to memory usage and computational cost.", "title": "" } ]
scidocsrr
805a1a752479d4d312e09fd4073e0c21
Why People Continue to Use Social Networking Services: Developing a Comprehensive Model
[ { "docid": "65dbd6cfc76d7a81eaa8a1dd49a838bb", "text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.", "title": "" } ]
[ { "docid": "f6211f28785ac28d8ff91459fe81a6f7", "text": "We describe a novel approach to the measurement of discounting based on calculating the area under the empirical discounting function. This approach avoids some of the problems associated with measures based on estimates of the parameters of theoretical discounting functions. The area measure may be easily calculated for both individual and group data collected using any of a variety of current delay and probability discounting procedures. The present approach is not intended as a substitute for theoretical discounting models. It is useful, however, to have a simple, univariate measure of discounting that is not tied to any specific theoretical framework.", "title": "" }, { "docid": "11a4536e40dde47e024d4fe7541b368c", "text": "Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.", "title": "" }, { "docid": "57f1671f7b73f0b888f55a1f31a9f1a1", "text": "The ongoing high relevance of business intelligence (BI) for the management and competitiveness of organizations requires a continuous, transparent, and detailed assessment of existing BI solutions in the enterprise. This paper presents a BI maturity model (called biMM) that has been developed and refined over years. It is used for both, in surveys to determine the overall BI maturity in German speaking countries and for the individual assessment in organizations. A recently conducted survey shows that the current average BI maturity can be assigned to the third stage (out of five stages). Comparing future (planned) activities and current challenges allows the derivation of a BI research agenda. The need for action includes among others emphasizing BI specific organizational structures, such as the establishment of BI competence centers, a stronger focus on profitability, and improved effectiveness of the BI architecture.", "title": "" }, { "docid": "0f29172ecf0ed3dfd775c3fa43db4127", "text": "Reusing software through copying and pasting is a continuous plague in software development despite the fact that it creates serious maintenance problems. Various techniques have been proposed to find duplicated redundant code (also known as software clones). 
A recent study has compared these techniques and shown that token-based clone detection based on suffix trees is extremely fast but yields clone candidates that are often no syntactic units. Current techniques based on abstract syntax trees-on the other hand-find syntactic clones but are considerably less efficient. This paper describes how we can make use of suffix trees to find clones in abstract syntax trees. This new approach is able to find syntactic clones in linear time and space. The paper reports the results of several large case studies in which we empirically compare the new technique to other techniques using the Bellon benchmark for clone detectors", "title": "" }, { "docid": "a323ffc54428cca4cc37e37da5968104", "text": "For decades, the de facto standard for forward error correction was a convolutional code decoded with the Viterbi algorithm, often concatenated with another code (e.g., a Reed-Solomon code). But since the introduction of turbo codes in 1993, much more powerful codes referred to collectively as turbo and turbo-like codes have eclipsed classical methods. These powerful error-correcting techniques achieve excellent error-rate performance that can closely approach Shannon's channel capacity limit. The lure of these large coding gains has resulted in their incorporation into a widening array of telecommunications standards and systems. This paper will briefly characterize turbo and turbo-like codes, examine their implications for physical layer system design, and discuss standards and systems where they are being used. The emphasis will be on telecommunications applications, particularly wireless, though others are mentioned. Some thoughts on the use of turbo and turbo-like codes in the future will also be given.", "title": "" }, { "docid": "c5bc0cd14aa51c24a00107422fc8ca10", "text": "This paper proposes a new high-voltage Pulse Generator (PG), fed from low voltage dc supply Vs. This input supply voltage is utilized to charge two arms of N series-connected modular multilevel converter sub-module capacitors sequentially through a resistive-inductive branch, such that each arm is charged to NVS. With a step-up nano-crystalline transformer of n turns ratio, the proposed PG is able to generate bipolar rectangular pulses of peak ±nNVs, at high repetition rates. However, equal voltage-second area of consecutive pulse pair polarities should be assured to avoid transformer saturation. Not only symmetrical pulses can be generated, but also asymmetrical pulses with equal voltage-second areas are possible. The proposed topology is tested via simulations and a scaled-down experimentation, which establish the viability of the topology for water treatment applications.", "title": "" }, { "docid": "595afbb693585eb599a3e4ea8e65807a", "text": "Hypoglycemia is a major challenge of artificial pancreas systems and a source of concern for potential users and parents of young children with Type 1 diabetes (T1D). Early alarms to warn the potential of hypoglycemia are essential and should provide enough time to take action to avoid hypoglycemia. Many alarm systems proposed in the literature are based on interpretation of recent trends in glucose values. In the present study, subject-specific recursive linear time series models are introduced as a better alternative to capture glucose variations and predict future blood glucose concentrations. These models are then used in hypoglycemia early alarm systems that notify patients to take action to prevent hypoglycemia before it happens. 
The models developed and the hypoglycemia alarm system are tested retrospectively using T1D subject data. A Savitzky-Golay filter and a Kalman filter are used to reduce noise in patient data. The hypoglycemia alarm algorithm is developed by using predictions of future glucose concentrations from recursive models. The modeling algorithm enables the dynamic adaptation of models to inter-/intra-subject variation and glycemic disturbances and provides satisfactory glucose concentration prediction with relatively small error. The alarm systems demonstrate good performance in prediction of hypoglycemia and ultimately in prevention of its occurrence.", "title": "" }, { "docid": "9ac16df20364b0ae28d3164bbfb08654", "text": "Complex event detection is an advanced form of data stream processing where the stream(s) are scrutinized to identify given event patterns. The challenge for many complex event processing (CEP) systems is to be able to evaluate event patterns on high-volume data streams while adhering to realtime constraints. To solve this problem, in this paper we present a hardware based complex event detection system implemented on field-programmable gate arrays (FPGAs). By inserting the FPGA directly into the data path between the network interface and the CPU, our solution can detect complex events at gigabit wire speed with constant and fully predictable latency, independently of network load, packet size or data distribution. This is a significant improvement over CPU based systems and an architectural approach that opens up interesting opportunities for hybrid stream engines that combine the flexibility of the CPU with the parallelism and processing power of FPGAs.", "title": "" }, { "docid": "de5b79a5debac750a4970516778d926c", "text": "Vertical channel (VC) 3D NAND Flash may be categorized into two types of channel formation: (1) \"U-turn\" string, where both BL and source are connected at top thus channel current flows in a U-turn way; (2) \"Bottom source\", where source is connected at the bottom thus channel current flows only in one way. For the single-gate vertical channel (SGVC) 3D NAND architecture [1], it is also possible to develop a bottom source structure. The detailed array decoding method is illustrated. In this work, the challenges of bottom source processing and thin poly channel formation are extensively studied. It is found that the two-step poly formation and the bottom recess control are two key factors governing the device initial performance. In general, the two-step poly formation with additional poly spacer etching technique seems to cause degradation of both the poly mobility and device subthreshold slope. Sufficient thermal annealing is needed to recover the damage. Moreover, the bottom connection needs an elegant recess control for better read current as well as bottom ground-select transistor (GSL) device optimizations.", "title": "" }, { "docid": "89dcd15d3f7e2f538af4a2654f144dfb", "text": "E-waste comprises discarded electronic appliances, of which computers and mobile telephones are disproportionately abundant because of their short lifespan. The current global production of E-waste is estimated to be 20-25 million tonnes per year, with most E-waste being produced in Europe, the United States and Australasia. China, Eastern Europe and Latin America will become major E-waste producers in the next ten years. 
Miniaturisation and the development of more efficient cloud computing networks, where computing services are delivered over the internet from remote locations, may offset the increase in E-waste production from global economic growth and the development of pervasive new technologies. E-waste contains valuable metals (Cu, platinum group) as well as potential environmental contaminants, especially Pb, Sb, Hg, Cd, Ni, polybrominated diphenyl ethers (PBDEs), and polychlorinated biphenyls (PCBs). Burning E-waste may generate dioxins, furans, polycyclic aromatic hydrocarbons (PAHs), polyhalogenated aromatic hydrocarbons (PHAHs), and hydrogen chloride. The chemical composition of E-waste changes with the development of new technologies and pressure from environmental organisations on electronics companies to find alternatives to environmentally damaging materials. Most E-waste is disposed in landfills. Effective reprocessing technology, which recovers the valuable materials with minimal environmental impact, is expensive. Consequently, although illegal under the Basel Convention, rich countries export an unknown quantity of E-waste to poor countries, where recycling techniques include burning and dissolution in strong acids with few measures to protect human health and the environment. Such reprocessing initially results in extreme localised contamination followed by migration of the contaminants into receiving waters and food chains. E-waste workers suffer negative health effects through skin contact and inhalation, while the wider community are exposed to the contaminants through smoke, dust, drinking water and food. There is evidence that E-waste associated contaminants may be present in some agricultural or manufactured products for export.", "title": "" }, { "docid": "c9bfd3b31a8a95898d45819037341307", "text": "OBJECTIVE\nInvestigation of the effect of a green tea-caffeine mixture on weight maintenance after body weight loss in moderately obese subjects in relation to habitual caffeine intake.\n\n\nRESEARCH METHODS AND PROCEDURES\nA randomized placebo-controlled double blind parallel trial in 76 overweight and moderately obese subjects, (BMI, 27.5 +/- 2.7 kg/m2) matched for sex, age, BMI, height, body mass, and habitual caffeine intake was conducted. A very low energy diet intervention during 4 weeks was followed by 3 months of weight maintenance (WM); during the WM period, the subjects received a green tea-caffeine mixture (270 mg epigallocatechin gallate + 150 mg caffeine per day) or placebo.\n\n\nRESULTS\nSubjects lost 5.9 +/-1.8 (SD) kg (7.0 +/- 2.1%) of body weight (p < 0.001). At baseline, satiety was positively, and in women, leptin was inversely, related to subjects' habitual caffeine consumption (p < 0.01). High caffeine consumers reduced weight, fat mass, and waist circumference more than low caffeine consumers; resting energy expenditure was reduced less and respiratory quotient was reduced more during weight loss (p < 0.01). In the low caffeine consumers, during WM, green tea still reduced body weight, waist, respiratory quotient and body fat, whereas resting energy expenditure was increased compared with a restoration of these variables with placebo (p < 0.01). In the high caffeine consumers, no effects of the green tea-caffeine mixture were observed during WM.\n\n\nDISCUSSION\nHigh caffeine intake was associated with weight loss through thermogenesis and fat oxidation and with suppressed leptin in women. 
In habitual low caffeine consumers, the green tea-caffeine mixture improved WM, partly through thermogenesis and fat oxidation.", "title": "" }, { "docid": "39587dd0043a4d16d5470884e04bbf9c", "text": "This article establishes an automated monitoring system for the fish farm aquaculture environment. The fish farm setting is usually in convenient places without common place traffic. The proposed system is network surveillance combined with mobile devices and a remote platform to collect real-time farm environmental information. This system permits real-time observation and control of fish farms with dissolved oxygen sensors, temperature sensing elements using A/D and 8051 module signal conversion. The real-time data is captured and displayed via ZigBee wireless transmission signal transmitter to remote computer terminals. Visual Basic 2010 software is used to design the interface functions and control sensing module. This system is low-cost, low power, easy operation with wireless transmission capability. A continuous, stable power supply is very important for the aquaculture industry. The proposed system will use municipal electricity coupled with a battery power source to provide power with battery intervention if municipal power is interrupted. The battery system is designed to avoid the self-discharge phenomenon which reduces the battery lifetime. Solar power is used to provide charging at any time.", "title": "" }, { "docid": "060e518af9a250c1e6a3abf49555754f", "text": "The deep learning community has proposed optimizations spanning hardware, software, and learning theory to improve the computational performance of deep learning workloads. While some of these optimizations perform the same operations faster (e.g., switching from a NVIDIA K80 to P100), many modify the semantics of the training procedure (e.g., large minibatch training, reduced precision), which can impact a model’s generalization ability. Due to a lack of standard evaluation criteria that considers these trade-offs, it has become increasingly difficult to compare these different advances. To address this shortcoming, DAWNBENCH and the upcoming MLPERF benchmarks use time-to-accuracy as the primary metric for evaluation, with the accuracy threshold set close to state-of-the-art and measured on a held-out dataset not used in training; the goal is to train to this accuracy threshold as fast as possible. In DAWNBENCH, the winning entries improved time-to-accuracy on ImageNet by two orders of magnitude over the seed entries. Despite this progress, it is unclear how sensitive time-to-accuracy is to the chosen threshold as well as the variance between independent training runs, and how well models optimized for time-to-accuracy generalize. In this paper, we provide evidence to suggest that time-to-accuracy has a low coefficient of variance and that the models tuned for it generalize nearly as well as pre-trained models. We additionally analyze the winning entries to understand the source of these speedups, and give recommendations for future benchmarking efforts.", "title": "" }, { "docid": "23ac5c4adf61fad813869882c4d2e7b6", "text": "Most network simulators do not support security features. In this paper, we introduce a new security module for OMNET++ that implements the IEEE 802.15.4 security suite. This module, developed using the C++ language, can simulate all devices and sensors that implement the IEEE 802.15.4 standard. 
The OMNET++ security module is also evaluated in terms of quality of services in the presence of physical hop attacks. Results show that our module is reliable and can safely be used by researchers.", "title": "" }, { "docid": "cdfcc894d32c9a6a3a076d3e978d400f", "text": "The earliest Convolution Neural Network (CNN) model is leNet-5 model proposed by LeCun in 1998. However, in the next few years, the development of CNN had been almost stopped until the article ‘Reducing the dimensionality of data with neural networks’ presented by Hinton in 2006. CNN started entering a period of rapid development. AlexNet won the championship in the image classification contest of ImageNet with the huge superiority of 11% beyond the second place in 2012, and the proposal of DeepFace and DeepID, as two relatively successful models for high-performance face recognition and authentication in 2014, marking the important position of CNN. Convolution Neural Network (CNN) is an efficient recognition algorithm widely used in image recognition and other fields in recent years. That the core features of CNN include local field, shared weights and pooling greatly reducing the parameters, as well as simple structure, make CNN become an academic focus. In this paper, the Convolution Neural Network’s history and structure are summarized. And then several areas of Convolutional Neural Network applications are enumerated. At last, some new insights for the future research of CNN are presented.", "title": "" }, { "docid": "78d8f8c74bc02ece6e6286ef806cdb1e", "text": "Event based sampling of feedback signals and control inputs are shown to reduce computations. In this paper, the design of event-sampled adaptive neural network (NN) state feedback control of robot manipulators is presented in the presence of uncertain robot dynamics. The event-sampled NN approximation property is utilized to represent the uncertain nonlinear dynamics of the robotic manipulator which is subsequently employed to generate the control torque. A novel weight tuning rule is designed using the Lyapunov method. Further, the Lyapunov stability theory is utilized to develop the event-sampling condition and to demonstrate the tracking performance of the robot manipulator. Finally, simulation results are presented to verify the theoretical claims and to demonstrate the reduction in the computations with event-sampled control execution.", "title": "" }, { "docid": "499d11cefeb1b086f4749310de71385f", "text": "Non-volatile RAM (NVRAM) will fundamentally change in-memory databases as data structures do not have to be explicitly backed up to hard drives or SSDs, but can be inherently persistent in main memory. To guarantee consistency even in the case of power failures, programmers need to ensure that data is flushed from volatile CPU caches where it would be susceptible to power outages to NVRAM.\n In this paper, we present the NVC-Hashmap, a lock-free hashmap that is used for unordered dictionaries and delta indices in in-memory databases. The NVC-Hashmap is then evaluated in both stand-alone and integrated database benchmarks and compared to a B+-Tree based persistent data structure.", "title": "" }, { "docid": "ce1f9cbd9cedf63d7e5bea0bfae415c4", "text": "The present study is an attempt to discover some of the statistically significant outlines motivations and factors that influence the quality in e-ticketing, which affects customers’ perceptions, preferences, and intentions. 
Consumers, especially business professions – the subjects of this study – are constantly demanding higher quality e-commerce services. All three hypotheses were found to be highly significant and positively related to promoting the perceived value of e-ticketing technologies, especially for sport-related events. Based on technology adoption models, e-ticketing does provide significant levels of perceived value and its linkage to customer satisfaction are important factors as well as operational costs. It seems obvious that box office staffs will become smaller in size as more e-ticketing devices and acceptance increases. Technological applications should continue to grow, and eventual acceptance of ticket kiosks, wireless ticket purchases, will undoubtedly change from being an industry rarity to an industry standard.", "title": "" }, { "docid": "73c7c931622358e50317f6d8cc61110e", "text": "This paper proposes a hydrogenated amorphous silicon thin-film transistorbased (a-Si:H TFT) optical pixel sensor. The proposed optical sensor compensates for variations of ambient light using photo TFTs that incorporate with three primary color filters, and a designed active load is utilized to release the photocurrent from the noise of reflected light. Measurement results reveal that the proposed sensor suppresses the effect of ambient light within the intensity of 12 560 lux and does not react to a blue light with an intensity of 588 lux, proving that the proposed optical sensor remains highly reliable under heavy ambient light and avoids the interference of reflected light.", "title": "" } ]
scidocsrr
0805461ad15470be364022dd34b3c084
Fast object localization and pose estimation in heavy clutter for robotic bin picking
[ { "docid": "f69d669235d54858eb318b53cdadcb47", "text": "We present a complete vision guided robot system for model based 3D pose estimation and picking of singulated 3D objects. Our system employs a novel vision sensor consisting of a video camera surrounded by eight flashes (light emitting diodes). By capturing images under different flashes and observing the shadows, depth edges or silhouettes in the scene are obtained. The silhouettes are segmented into different objects and each silhouette is matched across a database of object silhouettes in different poses to find the coarse 3D pose. The database is pre-computed using a Computer Aided Design (CAD) model of the object. The pose is refined using a fully projective formulation [ACB98] of Lowe’s model based pose estimation algorithm [Low91, Low87]. The estimated pose is transferred to robot coordinate system utilizing the handeye and camera calibration parameters, which allows the robot to pick the object. Our system outperforms conventional systems using 2D sensors with intensity-based features as well as 3D sensors. We handle complex ambient illumination conditions, challenging specular backgrounds, diffuse as well as specular objects, and texture-less objects, on which traditional systems usually fail. Our vision sensor is capable of computing depth edges in real time and is low cost. Our approach is simple and fast for practical implementation. We present real experimental results using our custom designed sensor mounted on a robot arm to demonstrate the effectiveness of our technique. International Journal of Robotics Research This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2009 201 Broadway, Cambridge, Massachusetts 02139", "title": "" } ]
[ { "docid": "102d34c80b731daf0783923bb6cf6732", "text": "In the information systems, customer relationship management (CRM) is the overall process of building and maintaining profitable customer relationships by delivering superior customer value and satisfaction with the goal of improving the business relationships with customers. Also, it is the strongest and the most efficient approach to maintaining and creating the relationships with customers. However, to the best of our knowledge and despite its importance, there is not any comprehensive and systematic study about reviewing and analyzing its important techniques. Therefore, in this paper, a comprehensive study and survey on the state of the art mechanisms in the scope of the CRM are done. It follows this goal by looking at five categories in which CRM plays a significant role: E-CRM, knowledge management, data mining, data quality and, social CRM. In each category, a couple of studies are presented and determinants of CRM are described and discussed. The major development in these five categories is reviewed and the new challenges are outlined. Also, a systematic literature review (SLR) in each of these five categories is provided. Furthermore, insights into the identification of open issues and guidelines for future research are provided. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4d6082ab565b98ea6aa88a68ba781fca", "text": "Over the past decade, deep learning has achieved remarkable success in various artificial intelligence research areas. Evolved from the previous research on artificial neural networks, this technology has shown superior performance to other machine learning algorithms in areas such as image and voice recognition, natural language processing, among others. The first wave of applications of deep learning in pharmaceutical research has emerged in recent years, and its utility has gone beyond bioactivity predictions and has shown promise in addressing diverse problems in drug discovery. Examples will be discussed covering bioactivity prediction, de novo molecular design, synthesis prediction and biological image analysis.", "title": "" }, { "docid": "40aea09bf70b2308d99822819c7c0dd4", "text": "There is a large body of evidence supporting the efficacy of low level laser therapy (LLLT), more recently termed photobiomodulation (PBM), for the management of oral mucositis (OM) in patients undergoing radiotherapy for head and neck cancer (HNC). Recent advances in PBM technology, together with a better understanding of mechanisms involved, may expand the applications for PBM in the management of other complications associated with HNC treatment. This article (part 1) describes PBM mechanisms of action, dosimetry, and safety aspects and, in doing so, provides a basis for a companion paper (part 2) which describes the potential breadth of potential applications of PBM in the management of side-effects of (chemo)radiation therapy in patients being treated for HNC and proposes PBM parameters. This study is a narrative non-systematic review. We review PBM mechanisms of action and dosimetric considerations. Virtually, all conditions modulated by PBM (e.g., ulceration, inflammation, lymphedema, pain, fibrosis, neurological and muscular injury) are thought to be involved in the pathogenesis of (chemo)radiation therapy-induced complications in patients treated for HNC. The impact of PBM on tumor behavior and tumor response to treatment has been insufficiently studied. 
In vitro studies assessing the effect of PBM on tumor cells report conflicting results, perhaps attributable to inconsistencies of PBM power and dose. Nonetheless, the biological bases for the broad clinical activities ascribed to PBM have also been noted to be similar to those activities and pathways associated with negative tumor behaviors and impeded response to treatment. While there are no anecdotal descriptions of poor tumor outcomes in patients treated with PBM, confirming its neutrality with respect to cancer responsiveness is a critical priority. Based on its therapeutic effects, PBM may have utility in a broad range of oral, oropharyngeal, facial, and neck complications of HNC treatment. Although evidence suggests that PBM using LLLT is safe in HNC patients, more research is imperative and vigilance remains warranted to detect any potential adverse effects of PBM on cancer treatment outcomes and survival.", "title": "" }, { "docid": "fdd60fb607cff6983e3181bae79e6aa8", "text": "This paper introduces a new open source platform for end-toend speech processing named ESPnet. ESPnet mainly focuses on end-to-end automatic speech recognition (ASR), and adopts widely-used dynamic neural network toolkits, Chainer and PyTorch, as a main deep learning engine. ESPnet also follows the Kaldi ASR toolkit style for data processing, feature extraction/format, and recipes to provide a complete setup for speech recognition and other speech processing experiments. This paper explains a major architecture of this software platform, several important functionalities, which differentiate ESPnet from other open source ASR toolkits, and experimental results with major ASR benchmarks.", "title": "" }, { "docid": "4db9cf56991edae0f5ca34546a8052c4", "text": "This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. I. I NTRODUCTION Interpolation is a technique that pervades many an application. Interpolation is almost never the goal in itself, yet it affects both the desired results and the ways to obtain them. 
Notwithstanding its nearly universal relevance, some authors give it less importance than it deserves, perhaps because considerations on interpolation are felt as being paltry when compared to the description of a more inspiring grand scheme of things of some algorithm or method. Due to this indifference, it appears as if the basic principles that underlie interpolation might be sometimes cast aside, or even misunderstood. The goal of this chapter is to refresh the notions encountered in classical interpolation, as well as to introduce the reader to more general approaches. 1.1. Definition What is interpolation? Several answers coexist. One of them defines interpolation as an informed estimate of the unknown [1]. We prefer the following—admittedly less concise—definition: modelbased recovery of continuous data from discrete data within a known range of abscissa. The reason for this preference is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministicallyrecovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is \"good\" near data samples, and possibly less good elsewhere. Finally, the three most important hypothesis for interpolation are:", "title": "" }, { "docid": "c6739c19b24deef9efcb3da866b9ddbc", "text": "Market makers have to continuously set bid and ask quotes for the stocks they have under consideration. Hence they face a complex optimization problem in which their return, based on the bid-ask spread they quote and the frequency they indeed provide liquidity, is challenged by the price risk they bear due to their inventory. In this paper, we provide optimal bid and ask quotes and closed-form approximations are derived using spectral arguments.", "title": "" }, { "docid": "cb9d22d417fd89332083d1dbdb8e601c", "text": "Automatically determining three-dimensional human pose from monocular RGB image data is a challenging problem. The two-dimensional nature of the input results in intrinsic ambiguities which make inferring depth particularly difficult. Recently, researchers have demonstrated that the flexible statistical modelling capabilities of deep neural networks are sufficient to make such inferences with reasonable accuracy. However, many of these models use coordinate output techniques which are memory-intensive, not differentiable, and/or do not spatially generalise well. We propose improvements to 3D coordinate prediction which avoid the aforementioned undesirable traits by predicting 2D marginal heatmaps under an augmented soft-argmax scheme. Our resulting model, MargiPose, produces visually coherent heatmaps whilst maintaining differentiability. We are also able to achieve state-of-the-art accuracy on publicly available 3D human pose estimation data.", "title": "" }, { "docid": "1516f9d674d911cef4b8d5cd8780afe7", "text": "This paper describes a novel approach to event-based debugging. The approach is based on a (coarsegrained) dataflow view of events: a high-level event is recognized when an appropriate combination of lower-level events on which it depends has occurred. Event recognition is controlled using familiar programming language constructs. This approach is more flexible and powerful than current ones. It allows arbitrary debugger language commands to be executed when attempting to form higher-level events. 
It also allows users to specify event recognition in much the same way that they write programs. This paper also describes a prototype, Dalek, that employs the dataflow approach for debugging sequential programs. Dalek demonstrates the feasibility and attractiveness of the dataflow approach. One important motivation for this work is that current sequential debugging tools are inadequate. Dalek contributes toward remedying such inadequacies by providing events and a powerful debugging language. Generalizing the dataflow approach so that it can aid in the debugging of concurrent programs is under investigation.", "title": "" }, { "docid": "3c9e92fde1bfabf07482f49d1ba38413", "text": "Most of current Recommender Systems based on Content-Based Filtering, Collaborative Filtering, Demographic Filtering and Hybrid Filtering which are concentrated on user and item entities. Many research papers are improved by pointing out either Multiple Criteria Rating approach or Multidimensional approach for Recommender System. This paper proposes an advanced Recommender System to provide higher quality of recommendations by combining the Multiple Criteria rating and the Multidimensional approaches. For the Multiple Criteria approach, this paper proposed a method that changes the way of weighting to be more suitable and also concern about the frequency of the selection movie features. To do Multidimensional approach, the Multiple Linear Regression is applied to analyze the contextual information of user characteristics. According to the experimental evaluation, the combining of Multiple Criteria Rating and Multidimensional approaches provide more accurate recommendation results than the current Hybrid Recommender Systems.", "title": "" }, { "docid": "f10ac6d718b07a22b798ef236454b806", "text": "The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research and development processes for cloud-native application and for vendor lock-in aware enterprise architecture engineering methodologies.", "title": "" }, { "docid": "5bd168673acca10828a03cbfd80e8932", "text": "Since a biped humanoid inherently suffers from instability and always risks tipping itself over, ensuring high stability and reliability of walk is one of the most important goals. This paper proposes a walk control consisting of a feedforward dynamic pattern and a feedback sensory reflex. The dynamic pattern is a rhythmic and periodic motion, which satisfies the constraints of dynamic stability and ground conditions, and is generated assuming that the models of the humanoid and the environment are known. The sensory reflex is a simple, but rapid motion programmed in respect to sensory information. 
The sensory reflex we propose in this paper consists of the zero moment point reflex, the landing-phase reflex, and the body-posture reflex. With the dynamic pattern and the sensory reflex, it is possible for the humanoid to walk rhythmically and to adapt itself to the environmental uncertainties. The effectiveness of our proposed method was confirmed by dynamic simulation and walk experiments on an actual 26-degree-of-freedom humanoid.", "title": "" }, { "docid": "8a2f40f2a0082fae378c7907a60159ac", "text": "We present a novel graph-based neural network model for relation extraction. Our model treats multiple pairs in a sentence simultaneously and considers interactions among them. All the entities in a sentence are placed as nodes in a fully-connected graph structure. The edges are represented with position-aware contexts around the entity pairs. In order to consider different relation paths between two entities, we construct up to l-length walks between each pair. The resulting walks are merged and iteratively used to update the edge representations into longer walks representations. We show that the model achieves performance comparable to the state-ofthe-art systems on the ACE 2005 dataset without using any external tools.", "title": "" }, { "docid": "42d15f1d4eefe97938719a2372289f8d", "text": "With the flourishing of multi-functional wearable devices and the widespread use of smartphones, MHN becomes a promising paradigm of ubiquitous healthcare to continuously monitor our health conditions, remotely diagnose phenomena, and share health information in real time. However, MHNs raise critical security and privacy issues, since highly sensitive health information is collected, and users have diverse security and privacy requirements about such information. In this article, we investigate security and privacy protection in MHNs from the perspective of QoP, which offers users adjustable security protections at fine-grained levels. Specifically, we first introduce the architecture of MHN, and point out the security and privacy challenges from the perspective of QoP. We then present some countermeasures for security and privacy protection in MHNs, including privacy- preserving health data aggregation, secure health data processing, and misbehavior detection. Finally, we discuss some open problems and pose future research directions in MHNs.", "title": "" }, { "docid": "c3f3ed8a363d8dcf9ac1efebfa116665", "text": "We report a new phenomenon associated with language comprehension: the action-sentence compatibility effect (ACE). Participants judged whether sentences were sensible by making a response that required moving toward or away from their bodies. When a sentence implied action in one direction (e.g., \"Close the drawer\" implies action away from the body), the participants had difficulty making a sensibility judgment requiring a response in the opposite direction. The ACE was demonstrated for three sentences types: imperative sentences, sentences describing the transfer of concrete objects, and sentences describing the transfer of abstract entities, such as \"Liz told you the story.\" These dataare inconsistent with theories of language comprehension in which meaning is represented as a set of relations among nodes. 
Instead, the data support an embodied theory of meaning that relates the meaning of sentences to human action.", "title": "" }, { "docid": "96e10f0858818ce150dba83882557aee", "text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem since such visualization can reveal deep insights out of complex data. Most of the existing embedding approaches, however, run on an excessively high precision, ignoring the fact that at the end, embedding outputs are converted into coarsegrained discrete pixel coordinates in a screen space. Motivated by such an observation and directly considering pixel coordinates in an embedding optimization process, we accelerate Barnes-Hut tree-based t-distributed stochastic neighbor embedding (BH-SNE), known as a state-of-the-art 2D embedding method, and propose a novel method called PixelSNE, a highly-efficient, screen resolution-driven 2D embedding method with a linear computational complexity in terms of the number of data items. Our experimental results show the significantly fast running time of PixelSNE by a large margin against BH-SNE, while maintaining the minimal degradation in the embedding quality. Finally, the source code of our method is publicly available at https: //github.com/awesome-davian/sasne.", "title": "" }, { "docid": "f0217e1579461afbfea5eccb2b3a4567", "text": "There is an industry-driven public obsession with antioxidants, which are equated to safe, health-giving molecules to be swallowed as mega-dose supplements or in fortified foods. Sometimes they are good for you, but sometimes they may not be, and pro-oxidants can be better for you in some circumstances. This article re-examines and challenges some basic assumptions in the nutritional antioxidant field.", "title": "" }, { "docid": "6c72d16c788509264f573a322c9ebaf6", "text": "A 5-year clinical and laboratory study of Nigerian children with renal failure (RF) was performed to determine the factors that limited their access to dialysis treatment and what could be done to improve access. There were 48 boys and 33 girls (aged 20 days to 15 years). Of 81 RF patients, 55 were eligible for dialysis; 33 indicated ability to afford dialysis, but only 6 were dialyzed, thus giving a dialysis access rate of 10.90% (6/55). Ability to bear dialysis cost/dialysis accessibility ratio was 5.5:1 (33/6). Factors that limited access to dialysis treatment in our patients included financial restrictions from parents (33%), no parental consent for dialysis (6%), lack or failure of dialysis equipment (45%), shortage of dialysis personnel (6%), reluctance of renal staff to dialyze (6%), and late presentation in hospital (4%). More deaths were recorded among undialyzed than dialyzed patients (P<0.01); similarly, undialyzed patients had more deaths compared with RF patients who required no dialysis (P<0.025). Since most of our patients could not be dialyzed owing to a range of factors, preventive nephrology is advocated to reduce the morbidity and mortality from RF due to preventable diseases.", "title": "" }, { "docid": "164fca8833981d037f861aada01d5f7f", "text": "Kernel methods provide a principled way to perform non linear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. 
In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that allows to efficiently process millions of points. FALKON is derived combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n √ n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.", "title": "" }, { "docid": "a2bedb2aa906ab814e356cce9db2cc28", "text": "The normative concepts offer a principled basis for engineering flexible multiagent systems for business and other crossorganizational settings. However, producing suitable specifications is nontrivial: the difficulty is an obstacle to the adoption of multiagent systems in industry. This paper considers normative relationships of six main types, namely, commitments (both practical and dialectical), authorizations, powers, prohibitions, and sanctions. It applies natural language processing and machine learning to extract these relationships from business contracts, establishing that they are realistic and their encoding can assist modelers, thereby lowering a barrier to adoption. A ten-fold cross-validation over more than 800 sentences randomly drawn from a corpus of real-life contracts (and manually labeled) yields promising results for the viability of this approach.", "title": "" }, { "docid": "93dd0ad4eb100d4124452e2f6626371d", "text": "The role of background music in audience responses to commercials (and other marketing elements) has received increasing attention in recent years. This article extends the discussion of music’s influence in two ways: (1) by using music theory to analyze and investigate the effects of music’s structural profiles on consumers’ moods and emotions and (2) by examining the relationship between music’s evoked moods that are congruent versus incongruent with the purchase occasion and the resulting effect on purchase intentions. The study reported provides empirical support for the notion that when music is used to evoke emotions congruent with the symbolic meaning of product purchase, the likelihood of purchasing is enhanced. D 2003 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
ad1c3484ba01247a5603177ba29f8ffa
Location-Based Recommendation System Using Bayesian User's Preference Model in Mobile Devices
[ { "docid": "100d05992fd0178e3d66070b42f96f73", "text": "Mobile advertising complements the Internet and interactive television advertising and makes it possible for advertisers to create tailormade campaigns targeting users according to where they are, their needs of the moment and the devices they are using (i.e. contextualized mobile advertising). Therefore, it is necessary that a fully personalized mobile advertising infrastructure be made. In this paper, we present such a personalized contextualized mobile advertising infrastructure for the advertisement of commercial/non-commercial activities. We name this infrastructure MALCR, in which the primary ingredient is a recommendation mechanism that is supported by the following concepts: (1) minimize users’ inputs (a typical interaction metaphor for mobile devices) for implicit browsing behaviors to be best utilized; (2) implicit browsing behaviors are then analyzed with a view to understanding the users’ interests in the values of features of advertisements; (3) having understood the users’ interests, Mobile Ads relevant to a designated location are subsequently scored and ranked; (4) Top-N scored advertisements are recommended. The recommendation mechanism is novel in its combination of two-level Neural Network learning, Neural Network sensitivity analysis, and attribute-based filtering. This recommendation mechanism is also justified (by thorough evaluations) to show its ability in furnishing effective personalized contextualized mobile advertising. q 2003 Elsevier Science Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "9e310ac4876eee037e0d5c2a248f6f45", "text": "The self-balancing two-wheel chair (SBC) is an unconventional type of personal transportation vehicle. It has unstable dynamics and therefore requires a special control to stabilize and prevent it from falling and to ensure the possibility of speed control and steering by the rider. This paper discusses the dynamic modeling and controller design for the system. The model of SBC is based on analysis of the motions of the inverted pendulum on a mobile base complemented with equations of the wheel motion and motor dynamics. The proposed control design involves a multi-loop PID control. Experimental verification and prototype implementation are discussed.", "title": "" }, { "docid": "22fe98f01a5379a9ea280c22028da43f", "text": "Linux containers showed great superiority when compared to virtual machines and hypervisors in terms of networking, disk and memory management, start-up and compilation speed, and overall processing performance. In this research, we are questioning whether it is more secure to run services inside Linux containers than running them directly on a host base operating system or not. We used Docker v1.10 to conduct a series of experiments to assess the attack surface of hosts running services inside Docker containers compared to hosts running the same services on the base operating system represented in our paper as Debian Jessie. Our vulnerability assessment shows that using Docker containers increase the attack surface of a given host, not the other way around.", "title": "" }, { "docid": "4e85039497c60f8241d598628790f543", "text": "Knowledge management (KM) is a dominant theme in the behavior of contemporary organizations. While KM has been extensively studied in developed economies, it is much less well understood in developing economies, notably those that are characterized by different social and cultural traditions to the mainstream of Western societies. This is notably the case in China. This chapter develops and tests a theoretical model that explains the impact of leadership style and interpersonal trust on the intention of information and knowledge workers in China to share their knowledge with their peers. All the hypotheses are supported, showing that both initiating structure and consideration have a significant effect on employees’ intention to share knowledge through trust building: 28.2% of the variance in employees’ intention to share knowledge is explained. The authors discuss the theoretical contributions of the chapter, identify future research opportunities, and highlight the implications for practicing managers. DOI: 10.4018/978-1-60566-920-5.ch009", "title": "" }, { "docid": "faa60bb1166c83893fabf82c815b4820", "text": "We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms. The first approach uses a neural language model trained on a phonetic encoding to learn an implicit representation of both the form and content of English poetry. This model can effectively learn common poetic devices such as rhyme, rhythm and alliteration. The second approach considers poetry generation as a constraint satisfaction problem where a generative neural language model is tasked with learning a representation of content, and a discriminative weighted finite state machine constrains it on the basis of form. By manipulating the constraints of the latter model, we can generate coherent poetry with arbitrary forms and themes. 
A large-scale extrinsic evaluation demonstrated that participants consider machine-generated poems to be written by humans 54% of the time. In addition, participants rated a machinegenerated poem to be the most human-like amongst all evaluated.", "title": "" }, { "docid": "66aff99642972dbe0280c83e4d702e96", "text": "We develop a workload model based on the observed behavior of parallel computers at the San Diego Supercomputer Center and the Cornell Theory Center. This model gives us insight into the performance of strategies for scheduling moldable jobs on space-sharing parallel computers. We find that Adaptive Static Partitioning (ASP), which has been reported to work well for other workloads, does not perform as well as strategies that adapt better to system load. The best of the strategies we consider is one that explicitly reduces allocations when load is high (a variation of Sevcik's (1989) A+ strategy).", "title": "" }, { "docid": "741488f7ca4a5666d738319f15fe2846", "text": "This article highlights the importance of effective communication skills for nurses. It focuses on core communication skills, their definitions and the positive outcomes that result when applied to practice. Effective communication is central to the provision of compassionate, high-quality nursing care. The article aims to refresh and develop existing knowledge and understanding of effective communication skills. Nurses reading this article will be encouraged to develop a more conscious style of communicating with patients and carers, with the aim of improving health outcomes and patient satisfaction.", "title": "" }, { "docid": "e04bc357c145c38ed555b3c1fa85c7da", "text": "This paper presents Hybrid (RSA & AES) encryption algorithm to safeguard data security in Cloud. Security being the most important factor in cloud computing has to be dealt with great precautions. This paper mainly focuses on the following key tasks: 1. Secure Upload of data on cloud such that even the administrator is unaware of the contents. 2. Secure Download of data in such a way that the integrity of data is maintained. 3. Proper usage and sharing of the public, private and secret keys involved for encryption and decryption. The use of a single key for both encryption and decryption is very prone to malicious attacks. But in hybrid algorithm, this problem is solved by the use of three separate keys each for encryption as well as decryption. Out of the three keys one is the public key, which is made available to all, the second one is the private key which lies only with the user. In this way, both the secure upload as well as secure download of the data is facilitated using the two respective keys. Also, the key generation technique used in this paper is unique in its own way. This has helped in avoiding any chances of repeated or redundant key.", "title": "" }, { "docid": "387827eae5fb528506c83d5fb161cd63", "text": "Distinction work task power-matching control strategy was adapted to excavator for improving fuel efficiency; the accuracy of rotate engine speed at each work task was core to excavator for saving energy. 21t model excavator ZG3210-9 was taken as the study object to analyze the rotate speed setting and control method, linear position feedback throttle motor was employed to control the governor of engine to adjust rotate speed. Improved double closed loop PID method was adapted to control the engine, feedback of rotate speed and throttle position was taken as the input of the PID control mode. 
Control system was designed in CoDeSys platform with G16 controller, throttle motor control experiment and engine auto control experiment were carried on the excavator for tuning PID parameters. The result indicated that the double closed-loop PID method can take control and set the engine rotate speed automatically with the maximum error of 8 rpm. The linear model between throttle feedback position and rotate speed is established, which provides the control basis for dynamic energy saving of excavator.", "title": "" }, { "docid": "3a4841b9aefdd0f96125132eaabdac49", "text": "Unstructured text data produced on the internet grows rapidly, and sentiment analysis for short texts becomes a challenge because of the limit of the contextual information they usually contain. Learning good vector representations for sentences is a challenging task and an ongoing research area. Moreover, learning long-term dependencies with gradient descent is difficult in neural network language model because of the vanishing gradients problem. Natural Language Processing (NLP) systems traditionally treat words as discrete atomic symbols; the model can leverage small amounts of information regarding the relationship between the individual symbols. In this paper, we propose ConvLstm, neural network architecture that employs Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) on top of pre-trained word vectors. In our experiments, ConvLstm exploit LSTM as a substitute of pooling layer in CNN to reduce the loss of detailed local information and capture long term dependencies in sequence of sentences. We validate the proposed model on two sentiment datasets IMDB, and Stanford Sentiment Treebank (SSTb). Empirical results show that ConvLstm achieved comparable performances with less parameters on sentiment analysis tasks.", "title": "" }, { "docid": "24f68da70b879cc74b00e2bc9cae6f96", "text": "This paper presents the power management scheme for a power electronics based low voltage microgrid in islanding operation. The proposed real and reactive power control is based on the virtual frequency and voltage frame, which can effectively decouple the real and reactive power flows and improve the system transient and stability performance. Detailed analysis of the virtual frame operation range is presented, and a control strategy to guarantee that the microgrid can be operated within the predetermined voltage and frequency variation limits is also proposed. Moreover, a reactive power control with adaptive voltage droop method is proposed, which automatically updates the maximum reactive power limit of a DG unit based on its current rating and actual real power output and features enlarged power output range and further improved system stability. Both simulation and experimental results are provided in this paper.", "title": "" }, { "docid": "2c5e8e4025572925e72e9f51db2b3d95", "text": "This article reveals our work on refactoring plug-ins for Eclipse's C++ Development Tooling (CDT).\n With CDT a reliable open source IDE exists for C/C++ developers. Unfortunately it has been lacking of overarching refactoring support. There used to be just one single refactoring - Rename. But our plug-in provides several new refactorings which support a C++ developer in his everyday work.", "title": "" }, { "docid": "8976cba604fdc5b00b506098941a6805", "text": "Influenza is an acute respiratory illness that occurs virtually every year and results in substantial disease, death and expense. 
Detection of Influenza in its earliest stage would facilitate timely action that could reduce the spread of the illness. Existing systems such as CDC and EISS which try to collect diagnosis data, are almost entirely manual, resulting in about two-week delays for clinical data acquisition. Twitter, a popular microblogging service, provides us with a perfect source for early-stage flu detection due to its realtime nature. For example, when a flu breaks out, people that get the flu may post related tweets which enables the detection of the flu breakout promptly. In this paper, we investigate the real-time flu detection problem on Twitter data by proposing Flu Markov Network (Flu-MN): a spatio-temporal unsupervised Bayesian algorithm based on a 4 phase Markov Network, trying to identify the flu breakout at the earliest stage. We test our model on real Twitter datasets from the United States along with baselines in multiple applications, such as real-time flu breakout detection, future epidemic phase prediction, or Influenza-like illness (ILI) physician visits. Experimental results show the robustness and effectiveness of our approach. We build up a real time flu reporting system based on the proposed approach, and we are hopeful that it would help government or health organizations in identifying flu outbreaks and facilitating timely actions to decrease unnecessary mortality.", "title": "" }, { "docid": "5d76b2578fa2aa05a607ab0a542ab81f", "text": "60 A practical approach to the optimal design of precast, prestressed concrete highway bridge girder systems is presented. The approach aims at standardizing the optimal design of bridge systems, as opposed to standardizing girder sections. Structural system optimization is shown to be more relevant than conventional girder optimization for an arbitrarily chosen structural system. Bridge system optimization is defined as the optimization of both longitudinal and transverse bridge configurations (number of spans, number of girders, girder type, reinforcements and tendon layout). As a result, the preliminary design process is much simplified by using some developed design charts from which selection of the optimum bridge system, number and type of girders, and amounts of prestressed and non-prestressed reinforcements are easily obtained for a given bridge length, width and loading type.", "title": "" }, { "docid": "a98887592358e43394469037a4632c3a", "text": "The construct of school engagement has attracted growing interest as a way to ameliorate the decline in academic achievement and increase in dropout rates. The current study tested the fit of a second-order multidimensional factor model of school engagement, using large-scale representative data on 1103 students in middle school. In order to make valid model comparisons by group, we evaluated the extent to which the measurement structure of this model was invariant by gender and by race/ethnicity (European-American vs. African-American students). Finally, we examined differences in latent factor means by these same groups. From our confirmatory factor analyses, we concluded that school engagement was a multidimensional construct, with evidence to support the hypothesized second-order engagement factor structure with behavioral, emotional, and cognitive dimensions. In this sample, boys and girls did not substantially differ, nor did European-American and African-American students, in terms of the underlying constructs of engagement and the composition of these constructs. 
Finally, there were substantial differences in behavioral and emotional engagement by gender and by racial/ethnic groups in terms of second-order factor mean differences.", "title": "" }, { "docid": "6ac202a4897d400a60b72dc660ead142", "text": "This paper proposes a simple yet highly accurate system for the recognition of unconstrained handwritten numerals. It starts with an examination of the basic characteristic loci (CL) features used along with a nearest neighbor classifier achieving a recognition rate of 90.5%. We then illustrate how the basic CL implementation can be extended and used in conjunction with a multilayer perceptron neural network classifier to increase the recognition rate to 98%. This proposed recognition system was tested on a totally unconstrained handwritten numeral database while training it with only 600 samples exclusive from the test set. An accuracy exceeding 98% is also expected if a larger training set is used. Lastly, to demonstrate the effectiveness of the system its performance is also compared to that of some other common recognition schemes. These systems use moment Invariants as features along with nearest neighbor classification schemes.", "title": "" }, { "docid": "8f3395cb7d1deb163fb92195a41f9c40", "text": "Temporal limitations of GIS databases are never more apparent than when the time of a change to any spatial object is unknown. This paper examines an unusual type of spatiotemporal imprecision where an event occurs at a known location but at an unknown time. Aoristic analysis can provide a temporal weight and give an indication of the probability that the event occurred within a defined period. Visualisation of temporal weights can be enhanced by modifications to existing surface generation algorithms and a temporal intensity surface can be created. An example from burglaries in Central Nottingham (UK) shows that aoristic analysis can smooth irregularities arising from poor database interrogation, and provide an alternative conceptualisation of space and time that is both comprehensible and meaningful.", "title": "" }, { "docid": "dce1e76671789752cf5e6914e2acbf47", "text": "Powered exoskeletons can facilitate rehabilitation of patients with upper limb disabilities. Designs using rotary motors usually result in bulky exoskeletons to reduce the problem of moving inertia. This paper presents a new linearly actuated elbow exoskeleton that consists of a slider crank mechanism and a linear motor. The linear motor is placed beside the upper arm and closer to shoulder joint. Thus better inertia properties can be achieved while lightweight and compactness are maintained. A passive joint is introduced to compensate for the exoskeleton-elbow misalignment and intersubject size variation. A linear series elastic actuator (SEA) is proposed to obtain accurate force and impedance control at the exoskeleton-elbow interface. Bidirectional actuation between exoskeleton and forearm is verified, which is required for various rehabilitation processes. We expect this exoskeleton can provide a means of robot-aided elbow rehabilitation.", "title": "" }, { "docid": "1eebba5c408031931629077bdfb2a37b", "text": "This paper presents a lumped-parameter magnetic model for an interior permanent-magnet synchronous machine. The model accounts for the effects of saturation through a nonlinear reluctance-element network used to estimate the-axis inductance. The magnetic model is used to calculate inductance and torque in the presence of saturation.
Furthermore, these calculations are compared to those from finite-element analysis with good agreement.", "title": "" }, { "docid": "6af3bc7d8600d4b3dc4bbf2a2d33adf2", "text": "Skin diseases are most common form of infections occurring in people of all ages. A patient can recover from severe skin diseases if it is detected and treated in the early stages and this can achieve cure ratios of over 95%. Early diagnosis is dependent upon patient attention and accurate assessment by a medical practitioner. Due to the costs of dermatologists to monitor every patient, there is a need for a computerized system to evaluate patient‘s risk of skin disease using images of their skin lesions captured using a standard digital camera. The traditional diagnosis technique comprised of a recording of what the human eye can see using a digital camera whereas this idea aims at improving the quality of existing diagnostic systems by proposing advanced feature extraction and classification methods. In the Proposed method, 45 digital images collected from MIT BMI unit and this database consists of warts, benign skin cancer and malignant skin cancer image apart from normal skin images. These images are subjected to various pre-processing techniques such as resizing, RGB to LAB conversion and contrast enhancement. Then these images are undergone image segmentation using c-means and watershed algorithms individually. Feature extraction is performed using Grey Level Co-occurrence Matrix (GLCM) and Image Quality Assessment (IQA) methods for examining texture which gave the statistical parameters of each algorithm respectively. These features are unitedly used to obtain better classification efficiency. In this work, different types of skin diseases are commonly classified as Benign Skin Cancer, Malignant Skin Cancer and Warts using multi-SVM (Support Vector Machine). Support Vector Machines (SVM) are supervised learning models with associated algorithms that analyze database images for classification analysis. The diagnosis system involves two stages of process such as training and testing. Features values of the training data set is compared to the testing data set of each type. C-means algorithm provides better segmentation and feature extraction compared to watershed algorithm", "title": "" } ]
scidocsrr
2eea883530a1e3b58c5968d5136f856c
Large scale multi-label classification via metalabeler
[ { "docid": "2ad76db05382d5bbdae27d5192cccd72", "text": "Very large-scale classification taxonomies typically have hundreds of thousands of categories, deep hierarchies, and skewed category distribution over documents. However, it is still an open question whether the state-of-the-art technologies in automated text categorization can scale to (and perform well on) such large taxonomies. In this paper, we report the first evaluation of Support Vector Machines (SVMs) in web-page classification over the full taxonomy of the Yahoo! categories. Our accomplishments include: 1) a data analysis on the Yahoo! taxonomy; 2) the development of a scalable system for large-scale text categorization; 3) theoretical analysis and experimental evaluation of SVMs in hierarchical and non-hierarchical settings for classification; 4) an investigation of threshold tuning algorithms with respect to time complexity and their effect on the classification accuracy of SVMs. We found that, in terms of scalability, the hierarchical use of SVMs is efficient enough for very large-scale classification; however, in terms of effectiveness, the performance of SVMs over the Yahoo! Directory is still far from satisfactory, which indicates that more substantial investigation is needed.", "title": "" }, { "docid": "40f21a8702b9a0319410b716bda0a11e", "text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.", "title": "" }, { "docid": "9a97ba6e4b4e80af129fdf48964017f2", "text": "Automatically categorizing documents into pre-defined topic hierarchies or taxonomies is a crucial step in knowledge and content management. Standard machine learning techniques like Support Vector Machines and related large margin methods have been successfully applied for this task, albeit the fact that they ignore the inter-class relationships. In this paper, we propose a novel hierarchical classification method that generalizes Support Vector Machine learning and that is based on discriminant functions that are structured in a way that mirrors the class hierarchy. Our method can work with arbitrary, not necessarily singly connected taxonomies and can deal with task-specific loss functions. All parameters are learned jointly by optimizing a common objective function corresponding to a regularized upper bound on the empirical loss. We present experimental results on the WIPO-alpha patent collection to show the competitiveness of our approach.", "title": "" } ]
[ { "docid": "5e182532bfd10dee3f8d57f14d1f4455", "text": "Camera calibrating is a crucial problem for further metric scene measurement. Many techniques and some studies concerning calibration have been presented in the last few years. However, it is still di1cult to go into details of a determined calibrating technique and compare its accuracy with respect to other methods. Principally, this problem emerges from the lack of a standardized notation and the existence of various methods of accuracy evaluation to choose from. This article presents a detailed review of some of the most used calibrating techniques in which the principal idea has been to present them all with the same notation. Furthermore, the techniques surveyed have been tested and their accuracy evaluated. Comparative results are shown and discussed in the article. Moreover, code and results are available in internet. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "24a3924f15cb058668e8bcb7ba53ee66", "text": "This paper presents a latest survey of different technologies used in medical image segmentation using Fuzzy C Means (FCM).The conventional fuzzy c-means algorithm is an efficient clustering algorithm that is used in medical image segmentation. To update the study of image segmentation the survey has performed. The techniques used for this survey are Brain Tumor Detection Using Segmentation Based on Hierarchical Self Organizing Map, Robust Image Segmentation in Low Depth Of Field Images, Fuzzy C-Means Technique with Histogram Based Centroid Initialization for Brain Tissue Segmentation in MRI of Head Scans.", "title": "" }, { "docid": "0fcefddfe877b804095838eb9de9581d", "text": "This paper examines the torque ripple and cogging torque variation in surface-mounted permanent-magnet synchronous motors (PMSMs) with skewed rotor. The effect of slot/pole combinations and magnet shapes on the magnitude and harmonic content of torque waveforms in a PMSM drive has been studied. Finite element analysis and experimental results show that the skewing with steps does not necessarily reduce the torque ripple but may cause it to increase for certain magnet designs and configurations. The electromagnetic torque waveforms, including cogging torque, have been analyzed for four different PMSM configurations having the same envelop dimensions and output requirements.", "title": "" }, { "docid": "91c024a832bfc07bc00b7086bcf77add", "text": "Topic-focused multi-document summarization aims to produce a summary biased to a given topic or user profile. This paper presents a novel extractive approach based on manifold-ranking of sentences to this summarization task. The manifold-ranking process can naturally make full use of both the relationships among all the sentences in the documents and the relationships between the given topic and the sentences. The ranking score is obtained for each sentence in the manifold-ranking process to denote the biased information richness of the sentence. Then the greedy algorithm is employed to impose diversity penalty on each sentence. The summary is produced by choosing the sentences with both high biased information richness and high information novelty. 
Experiments on DUC2003 and DUC2005 are performed and the ROUGE evaluation results show that the proposed approach can significantly outperform existing approaches of the top performing systems in DUC tasks and baseline approaches.", "title": "" }, { "docid": "5bce1b4fb024307bdad27d79f6e26b45", "text": "SMS-based One-Time Passwords (SMS OTP) were introduced to counter phishing and other attacks against Internet services such as online banking. Today, SMS OTPs are commonly used for authentication and authorization for many different applications. Recently, SMS OTPs have come under heavy attack, especially by smartphone trojans. In this paper, we analyze the security architecture of SMS OTP systems and study attacks that pose a threat to Internet-based authentication and authorization services. We determined that the two foundations SMS OTP is built on, cellular networks and mobile handsets, were completely different at the time when SMS OTP was designed and introduced. Throughout this work, we show why SMS OTP systems cannot be considered secure anymore. Based on our findings, we propose mechanisms to secure SMS OTPs against common attacks and specifically against smartphone trojans.", "title": "" }, { "docid": "7fadd4cafa4997c8af947cbdf26f4a43", "text": "This article presents a meta-analysis of the experimental literature that has examined the effect of performance and mastery achievement goals on intrinsic motivation. Summary analyses provided support for the hypothesis that the pursuit of performance goals has an undermining effect on intrinsic motivation relative to the pursuit of mastery goals. Moderator analyses were conducted in an attempt to explain significant variation in the magnitude and direction of this effect across studies. Results indicated that the undermining effect of performance goals relative to mastery goals was contingent on whether participants received confirming or nonconfirming competence feedback, and on whether the experimental procedures induced a performance-approach or performance-avoidance orientation. These findings provide conceptual clarity to the literature on achievement goals and intrinsic motivation and suggest numerous avenues for subsequent empirical work.", "title": "" }, { "docid": "8147143579de86a5eeb668037c2b8c5d", "text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. 
We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.", "title": "" }, { "docid": "8fa31615d2164e9146be35d046dd71cf", "text": "An empirical investigation of information retrieval (IR) using the MEDLINE 1 database was carried out to study user behaviour, performance and to investigate the reasons for sub-optimal searches. The experimental subjects were drawn from two groups of final year medical students who differed in their knowledge of the search system, i.e. novice and expert users. The subjects carried out four search tasks and their recall and precision performance was recorded. Data was captured on the search strategies used, duration and logs of submitted queries. Differences were found between the groups for the performance measure of recall in only one of the four experimental tasks. Overall performance was poor. Analysis of strategies, timing data and query logs showed that there were many different causes for search failure or success. Poor searchers either gave up too quickly, employed few search terms, used only simple queries or used the wrong search terms. Good searchers persisted longer, used a larger, richer set of terms, constructed more complex queries and were more diligent in evaluating the retrieved results. However, individual performances were not correlated with all of these factors. Poor performers frequently exhibited several factors of good searcher behaviour and failed for just one reason. Overall end-user searching behaviour is complex and it appears that just one factor can cause poor performance, whereas good performance can result from sub-optimal strategies that compensate for some difficulties. The implications of the results for the design of IR interfaces are discussed.", "title": "" }, { "docid": "73af8236cc76e386aa76c6d20378d774", "text": "Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase1. The constructed gazetteers contains approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types, person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. 
We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).", "title": "" }, { "docid": "a83b417c2be604427eacf33b1db91468", "text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.", "title": "" }, { "docid": "77278e6ba57e82c88f66bd9155b43a50", "text": "Up to the time when a huge corruption scandal, popularly labeled tangentopoli”(bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence. In fact, this paper takes the view that it could not have been a coincidence.", "title": "" }, { "docid": "70eed1677463969a4ed443988d8d7521", "text": "Security, privacy, and fairness have become critical in the era of data science and machine learning. More and more we see that achieving universally secure, private, and fair systems is practically impossible. We have seen for example how generative adversarial networks can be used to learn about the expected private training data; how the exploitation of additional data can reveal private information in the original one; and how what looks like unrelated features can teach us about each other. Confronted with this challenge, in this paper we open a new line of research, where the security, privacy, and fairness is learned and used in a closed environment. The goal is to ensure that a given entity (e.g., the company or the government), trusted to infer certain information with our data, is blocked from inferring protected information from it. For example, a hospital might be allowed to produce diagnosis on the patient (the positive task), without being able to infer the gender of the subject (negative task). Similarly, a company can guarantee that internally it is not using the provided data for any undesired task, an important goal that is not contradicting the virtually impossible challenge of blocking everybody from the undesired task. We design a system that learns to succeed on the positive task while simultaneously fail at the negative one, and illustrate this with challenging cases where the positive task is actually harder than the negative one being blocked. 
Fairness, to the information in the negative task, is often automatically obtained as a result of this proposed approach. The particular framework and examples open the door to security, privacy, and fairness in very important closed scenarios, ranging from private data accumulation companies like social networks to law-enforcement and hospitals.", "title": "" }, { "docid": "2c3566048334e60ae3f30bd631e4da87", "text": "The Indian Railways is world’s fourth largest railway network in the world after USA, Russia and China. There is a severe problem of collisions of trains. So Indian railway is working in this aspect to promote the motto of “SAFE JOURNEY”. A RFID based railway track finding system for railway has been proposed in this paper. In this system the RFID tags and reader are used which are attached in the tracks and engine consecutively. So Train engine automatically get the data of path by receiving it from RFID tag and detect it. If path is correct then train continue to run on track and if it is wrong then a signal is generated and sent to the control station and after this engine automatically stop in a minimum time and the display of LCD show the “WRONG PATH”. So the collision and accident of train can be avoided. With the help of this system the train engine would be programmed to move according to the requirement. The another feature of this system is automatic track changer by which the track jointer would move automatically according to availability of trains.", "title": "" }, { "docid": "923a714ed2811e29647870a2694698b1", "text": "Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at https://github.com/Microsoft/LQ-Nets", "title": "" }, { "docid": "e51d244f45cda8826dc94ba35a12d066", "text": "This article describes part of our contribution to the “BellKor’s Pragmatic Chaos” final solution, which won the Netflix Grand Prize. The other portion of the contribution was created while working at AT&T with Robert Bell and Chris Volinsky, as reported in our 2008 Progress Prize report [3].
The final solution includes all the predictors described there. In this article we describe only the newer predictors. So what is new over last year’s solution? First we further improved the baseline predictors (Sec. III). This in turn improves our other models, which incorporate those predictors, like the matrix factorization model (Sec. IV). In addition, an extension of the neighborhood model that addresses temporal dynamics was introduced (Sec. V). On the Restricted Boltzmann Machines (RBM) front, we use a new RBM model with superior accuracy by conditioning the visible units (Sec. VI). The final addition is the introduction of a new blending algorithm, which is based on gradient boosted decision trees (GBDT) (Sec. VII).", "title": "" }, { "docid": "2348652010d1dec37a563e3eed15c090", "text": "This study firstly examines the current literature concerning ERP implementation problems during implementation phases and causes of ERP implementation failure. A multiple case study research methodology was adopted to understand “why” and “how” these ERP systems could not be implemented successfully. Different stakeholders (including top management, project manager, project team members and ERP consultants) from these case studies were interviewed, and ERP implementation documents were reviewed for triangulation. An ERP life cycle framework was applied to study the ERP implementation process and the associated problems in each phase of ERP implementation. Fourteen critical failure factors were identified and analyzed, and three common critical failure factors (poor consultant effectiveness, project management effectiveness and poor quality of business process re-engineering) were examined and discussed. Future research on ERP implementation and critical failure factors is discussed. It is hoped that this research will help to bridge the current literature gap and provide practical advice for both academics and practitioners.", "title": "" }, { "docid": "1fb8701f0ad0a9e894e4195bc02d5c25", "text": "As graphics processing units (GPUs) are broadly adopted, running multiple applications on a GPU at the same time is beginning to attract wide attention. Recent proposals on multitasking GPUs have focused on either spatial multitasking, which partitions GPU resource at a streaming multiprocessor (SM) granularity, or simultaneous multikernel (SMK), which runs multiple kernels on the same SM. However, multitasking performance varies heavily depending on the resource partitions within each scheme, and the application mixes. In this paper, we propose GPU Maestro that performs dynamic resource management for efficient utilization of multitasking GPUs. GPU Maestro can discover the best performing GPU resource partition exploiting both spatial multitasking and SMK. Furthermore, dynamism within a kernel and interference between the kernels are automatically considered because GPU Maestro finds the best performing partition through direct measurements. Evaluations show that GPU Maestro can improve average system throughput by 20.2% and 13.9% over the baseline spatial multitasking and SMK, respectively.", "title": "" }, { "docid": "126a6d3308c0b4d1e17139cb16da867d", "text": "INTRODUCTION\n3D-printed anatomical models play an important role in medical and research settings. The recent successes of 3D anatomical models in healthcare have led many institutions to adopt the technology. However, there remain several issues that must be addressed before it can become more wide-spread.
Of importance are the problems of cost and time of manufacturing. Machine learning (ML) could be utilized to solve these issues by streamlining the 3D modeling process through rapid medical image segmentation and improved patient selection and image acquisition. The current challenges, potential solutions, and future directions for ML and 3D anatomical modeling in healthcare are discussed. Areas covered: This review covers research articles in the field of machine learning as related to 3D anatomical modeling. Topics discussed include automated image segmentation, cost reduction, and related time constraints. Expert commentary: ML-based segmentation of medical images could potentially improve the process of 3D anatomical modeling. However, until more research is done to validate these technologies in clinical practice, their impact on patient outcomes will remain unknown. We have the necessary computational tools to tackle the problems discussed. The difficulty now lies in our ability to collect sufficient data.", "title": "" }, { "docid": "c1e39be2fa21a4f47d163c1407490dc8", "text": "Most existing anaphora resolution algorithms are designed to account only for anaphors with NP-antecedents. This paper describes an algorithm for the resolution of discourse deictic anaphors, which constitute a large percentage of anaphors in spoken dialogues. The success of the resolution is dependent on the classification of all pronouns and demonstratives into individual, discourse deictic and vague anaphora. Finally, the empirical results of the application of the algorithm to a corpus of spoken dialogues are presented.", "title": "" } ]
scidocsrr
937cfe2ebc07de07d2b7395249078728
The OpenGRASP benchmarking suite: An environment for the comparative analysis of grasping and dexterous manipulation
[ { "docid": "1cefbe0177c56d92e34c4b5a88a29099", "text": "Typical tasks of future service robots involve grasping and manipulating a large variety of objects differing in size and shape. Generating stable grasps on 3D objects is considered to be a hard problem, since many parameters such as hand kinematics, object geometry, material properties and forces have to be taken into account. This results in a high-dimensional space of possible grasps that cannot be searched exhaustively. We believe that the key to find stable grasps in an efficient manner is to use a special representation of the object geometry that can be easily analyzed. In this paper, we present a novel grasp planning method that evaluates local symmetry properties of objects to generate only candidate grasps that are likely to be of good quality. We achieve this by computing the medial axis which represents a 3D object as a union of balls. We analyze the symmetry information contained in the medial axis and use a set of heuristics to generate geometrically and kinematically reasonable candidate grasps. These candidate grasps are tested for force-closure. We present the algorithm and show experimental results on various object models using an anthropomorphic hand of a humanoid robot in simulation.", "title": "" } ]
[ { "docid": "07fe7ad68e4f7bb1a978cda02a564044", "text": "Temporomandibular disorders (TMDs) affect 8–12 % of the adolescent and adult population, resulting in patient discomfort and affecting quality of life. Despite the growing incidence of these disorders, an effective screening modality to detect TMDs is still lacking. Although magnetic resonance imaging is the gold standard for imaging of the temporomandibular joint (TMJ), it has a few drawbacks such as cost and its time-consuming nature. High-resolution ultrasonography is a non-invasive and cost-effective imaging modality that enables simultaneous visualization of the hard and soft tissue components of the TMJ. This study aimed to evaluate the correlations between the clinical signs and symptoms of patients with chronic TMJ disorders and their ultrasonographic findings, thereby enabling the use of ultrasonography as an imaging modality for screening of TMDs. Twenty patients with chronic TMDs were selected according to the Research Diagnostic Criteria for TMDs. Ultrasonographic imaging of individual TMJs was performed to assess the destructive changes, effusion, and disc dislocation. Fisher’s exact test was used to examine the correlations between the findings obtained from the ultrasonographic investigation and the clinical signs and symptoms. There was a significant correlation between pain and joint effusion as well as between clicking and surface erosion. The present findings suggest that ultrasonography can be used as a screening modality to assess the hard and soft tissue changes in patients presenting with signs and symptoms of TMDs.", "title": "" }, { "docid": "fb05042ac52f448d9c7d3f820df4b790", "text": "Protein gamma-turn prediction is useful in protein function studies and experimental design. Several methods for gamma-turn prediction have been developed, but the results were unsatisfactory with Matthew correlation coefficients (MCC) around 0.2–0.4. Hence, it is worthwhile exploring new methods for the prediction. A cutting-edge deep neural network, named Capsule Network (CapsuleNet), provides a new opportunity for gamma-turn prediction. Even when the number of input samples is relatively small, the capsules from CapsuleNet are effective to extract high-level features for classification tasks. Here, we propose a deep inception capsule network for gamma-turn prediction. Its performance on the gamma-turn benchmark GT320 achieved an MCC of 0.45, which significantly outperformed the previous best method with an MCC of 0.38. This is the first gamma-turn prediction method utilizing deep neural networks. Also, to our knowledge, it is the first published bioinformatics application utilizing capsule network, which will provide a useful example for the community. Executable and source code can be download at http://dslsrv8.cs.missouri.edu/~cf797/MUFoldGammaTurn/download.html.", "title": "" }, { "docid": "eafedf73a6a59df046416a2611f312bd", "text": "The inverse halftoning algorithm is used to reconstruct a gray image from an input halftone image. Based on the recently published lookup table (LUT) technique, this paper presents a novel edge-based LUT method for inverse halftoning which improves the quality of the reconstructed gray image. The proposed method first uses the LUT-based inverse halftoning method as a preprocessing step to transform the given halftone image to a base gray image, and then the edges are extracted and classified from the base gray image. 
According to these classified edges, a novel edge-based LUT is built up to reconstruct the gray image. Based on a set of 30 real training images with both low- and high-frequency contents, experimental results demonstrated that the proposed method achieves a better image quality when compared to the currently published two methods, by Chang et al. and Meşe and Vaidyanathan.", "title": "" }, { "docid": "b7ee04e61d8666b6d865e69e24f69a6f", "text": "CONTEXT\nThis article presents the main results from a large-scale analytical systematic review on knowledge exchange interventions at the organizational and policymaking levels. The review integrated two broad traditions, one roughly focused on the use of social science research results and the other focused on policymaking and lobbying processes.\n\n\nMETHODS\nData collection was done using systematic snowball sampling. First, we used prospective snowballing to identify all documents citing any of a set of thirty-three seminal papers. This process identified 4,102 documents, 102 of which were retained for in-depth analysis. The bibliographies of these 102 documents were merged and used to identify retrospectively all articles cited five times or more and all books cited seven times or more. All together, 205 documents were analyzed. To develop an integrated model, the data were synthesized using an analytical approach.\n\n\nFINDINGS\nThis article developed integrated conceptualizations of the forms of collective knowledge exchange systems, the nature of the knowledge exchanged, and the definition of collective-level use. This literature synthesis is organized around three dimensions of context: level of polarization (politics), cost-sharing equilibrium (economics), and institutionalized structures of communication (social structuring).\n\n\nCONCLUSIONS\nThe model developed here suggests that research is unlikely to provide context-independent evidence for the intrinsic efficacy of knowledge exchange strategies. To design a knowledge exchange intervention to maximize knowledge use, a detailed analysis of the context could use the kind of framework developed here.", "title": "" }, { "docid": "916767707946aaa4ade639a56e01d8be", "text": "Copyright © 2017 Massachusetts Medical Society. It is estimated that 470,000 patients receive radiotherapy each year in the United States.1 As many as half of patients with cancer will receive radiotherapy.2 Improvements in diagnosis, therapy, and supportive care have led to increasing numbers of cancer survivors.3 In response, the emphasis of radiation oncology has expanded beyond cure to include reducing side effects, particularly late effects, which may substantially affect a patient’s quality of life. Radiotherapy is used to treat benign and malignant diseases and can be used alone or in combination with chemotherapy, surgery, or both. For primary tumors or metastatic deposits, palliative radiotherapy is often used to reduce pain or mass effect (due to spinal cord compression, brain metastases, or airway obstruction). Therapeutic radiation can be delivered from outside the patient, known as external-beam radiation therapy, or EBRT (see the Glossary in the Supplementary Appendix, available with the full text of this article at NEJM.org), by implanting radioactive sources in cavities or tissues (brachytherapy), or through systemic administration of radiopharmaceutical agents.
Multiple technological and biologic advances have fundamentally altered the field of radiation oncology since it was last reviewed in the Journal.4", "title": "" }, { "docid": "d5019a5536950482e166d68dc3a7cac7", "text": "Co-contamination of the environment with toxic chlorinated organic and heavy metal pollutants is one of the major problems facing industrialized nations today. Heavy metals may inhibit biodegradation of chlorinated organics by interacting with enzymes directly involved in biodegradation or those involved in general metabolism. Predictions of metal toxicity effects on organic pollutant biodegradation in co-contaminated soil and water environments is difficult since heavy metals may be present in a variety of chemical and physical forms. Recent advances in bioremediation of co-contaminated environments have focussed on the use of metal-resistant bacteria (cell and gene bioaugmentation), treatment amendments, clay minerals and chelating agents to reduce bioavailable heavy metal concentrations. Phytoremediation has also shown promise as an emerging alternative clean-up technology for co-contaminated environments. However, despite various investigations, in both aerobic and anaerobic systems, demonstrating that metal toxicity hampers the biodegradation of the organic component, a paucity of information exists in this area of research. Therefore, in this review, we discuss the problems associated with the degradation of chlorinated organics in co-contaminated environments, owing to metal toxicity and shed light on possible improvement strategies for effective bioremediation of sites co-contaminated with chlorinated organic compounds and heavy metals.", "title": "" }, { "docid": "4bee0074e303cf696a40b8bd244be040", "text": "Countering cyber threats, especially attack detection, is a challenging area of research in the field of information assurance. Intruders use polymorphic mechanisms to masquerade the attack payload and evade the detection techniques. Many supervised and unsupervised learning approaches from the field of machine learning and pattern recognition have been used to increase the efficacy of intrusion detection systems (IDSs). Supervised learning approaches use only labeled samples to train a classifier, but obtaining sufficient labeled samples is cumbersome, and requires the efforts of domain experts. However, unlabeled samples can easily be obtained in many real world problems. Compared to supervised learning approaches, semi-supervised learning (SSL) addresses this issue by considering large amount of unlabeled samples together with the labeled samples to build a better classifier. This paper proposes a novel fuzziness based semi-supervised learning approach by utilizing unlabeled samples assisted with supervised learning algorithm to improve the classifier’s performance for the IDSs. A single hidden layer feed-forward neural network (SLFN) is trained to output a fuzzy membership vector, and the sample categorization (low, mid, and high fuzziness categories) on unlabeled samples is performed using the fuzzy quantity. The classifier is retrained after incorporating each category separately into the original training set. The experimental results using this technique of intrusion detection on the NSL-KDD dataset show that unlabeled samples belonging to low and high fuzziness groups make major contributions to improve the classifier’s performance compared to existing classifiers e.g., naive bayes, support vector machine, random forests, etc. 
© 2016 Published by Elsevier Inc.", "title": "" }, { "docid": "05bbbaf76ec39e22369806c3008a93b5", "text": "In the elderly, Alzheimer’s disease (AD) is the most common form of dementia (Hebert et al., 2003). The two pathologies that characterize the disease are the presence of large numbers of intracellular neurofibrillary tangles (NFTs) and extracellular neuritic plaques in the brain (e.g., Braak and Braak, 1991; 1998; Selkoe, 2001). Neurofibrillary tangles consist of hyperphosphorylated, twisted filaments of the cytoskeletal protein tau (e.g., Duff, 2006), whereas plaques are primarily made up of amyloid β (Aβ [Selkoe, 2001; Dickson and Vickers, 2002]), a 39-43 amino acid long peptide derived from the proteolytic processing of the amyloid precursor protein (APP [Selkoe, 2001; Vetrivel and Thinakaran, 2006]). When APP is sequentially cleaved by the β-secretase and γ-secretase, one of the resulting breakdown product is Aβ, in contrast, initial cleavage by α-secretase (in the middle of the Aβ sequence) leads to production of APPsα and the C83 peptide (Selkoe, 2001). Most cases of AD are sporadic, however approximately 5 % of AD cases are familial (Price and Sisodia, 1995; Selkoe, 2001), these cases are related to mutations in the genes for APP, and presenilin 1 and 2 (PS1 and PS2 [Price and Sisodia, 1995; Hardy, 1997; Selkoe, 2001]). Transgenic mice expressing mutated human AD genes offer a powerful model to study the role of Aβ in the development of pathology (e.g., Duff and Suleman, 2004; McGowan et al, 2006). The present study employs three lines of transgenic mice expressing both human APPswe and/or PS1 mutations. These lines of mice develop elevated levels of Aβ42 at different ages, and at different locations (Van Groen et al., 2005; Wang et al., 2003).", "title": "" }, { "docid": "88857022b84ad66a904a5800a1228bd8", "text": "A number of domain adaptation techniques have been developed in classical supervised machine learning to solve the problem of learning under domain shift. Unfortunately, almost no attempt has been made to exploit their potential use for causal inference. In this paper, we present a conceptual framework which leverages on one of these successes by adopting Domain Adversarial of Neural Network (DANN) [1] and make it work within the context of causal inference. More specifically, we attempt to modify an existing counterfactual prediction framework based on domain adaptation (DA) [2] with a deep neural network and discrepancy distance measure to enable us instead to perform causal inference in an adversarial way. We achieved this by jointly minimizing the risk in the feature reconstruction and the counterfactual prediction while maximizing the error in discriminating between the distribution of subjects that received intervention and those that received no intervention. This way, a domain invariant yet discriminative model that will generalize well on the counterfactual target distribution is produced.", "title": "" }, { "docid": "3e8bffdcf0df0a34b95ecc5432984777", "text": "We focus on grounding (i.e., localizing or linking) referring expressions in images, e.g., \"largest elephant standing behind baby elephant\". This is a general yet challenging vision-language task since it does not only require the localization of objects, but also the multimodal comprehension of context - visual attributes (e.g., \"largest\", \"baby\") and relationships (e.g., \"behind\") that help to distinguish the referent from other objects, especially those of the same category.
Due to the exponential complexity involved in modeling the context associated with multiple image regions, existing work oversimplifies this task to pairwise region modeling by multiple instance learning. In this paper, we propose a variational Bayesian method, called Variational Context, to solve the problem of complex context modeling in referring expression grounding. Our model exploits the reciprocal relation between the referent and context, i.e., either of them influences estimation of the posterior distribution of the other, and thereby the search space of context can be greatly reduced. We also extend the model to the unsupervised setting where no annotation for the referent is available. Extensive experiments on various benchmarks show consistent improvement over state-of-the-art methods in both supervised and unsupervised settings. The code is available at https://github.com/yuleiniu/vc/.", "title": "" }, { "docid": "859d229a47e19b284f4fddac44b274b4", "text": "The goal of the ontology requirements specification activity is to state why the ontology is being built, what its intended uses are, who the end users are, and which requirements the ontology should fulfill. The novelty of this paper lies in the systematization of the ontology requirements specification activity since the paper proposes detailed methodological guidelines for specifying ontology requirements efficiently. These guidelines will help ontology engineers to capture ontology requirements and produce the ontology requirements specification document (ORSD). The ORSD will play a key role during the ontology development process because it facilitates, among other activities, (1) the search and reuse of existing knowledge-aware resources with the aim of re-engineering them into ontologies, (2) the search and reuse of existing ontological resources (ontologies, ontology modules, ontology statements as well as ontology design patterns), and (3) the verification of the ontology along the ontology development. In parallel to the guidelines, we present the ORSD that resulted from the ontology requirements specification activity within the SEEMP project, and how this document facilitated not only the reuse of existing knowledge-aware resources but also the verification of the SEEMP ontologies. Moreover, we present some use cases in which the methodological guidelines proposed here were applied.", "title": "" }, { "docid": "620642c5437dc26cac546080c4465707", "text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures. For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and converted nouns given in italics: 1", "title": "" }, { "docid": "bd963a55c28304493118028fe5f47bab", "text": "Tables are a common structuring element in many documents, such as PDF files. To reuse such tables, appropriate methods need to be developed, which capture the structure and the content information. We have developed several heuristics which together recognize and decompose tables in PDF files and store the extracted data in a structured data format (XML) for easier reuse. 

Additionally, we implemented a prototype, which gives the user the ability to make adjustments to the extracted data. Our work shows that purely heuristic-based approaches can achieve good results, especially for lucid tables.", "title": "" }, { "docid": "f8b201105e3b92ed4ef2a884cb626c0d", "text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.", "title": "" }, { "docid": "5e0ac4a3957f5eba26790f54678df7fc", "text": "Recent statistics show that in 2015 more than 140 million new malware samples have been found. Among these, a large portion is due to ransomware, the class of malware whose specific goal is to render the victim’s system unusable, in particular by encrypting important files, and then ask the user to pay a ransom to revert the damage. Several ransomware include sophisticated packing techniques, and are hence difficult to statically analyse. We present EldeRan, a machine learning approach for dynamically analysing and classifying ransomware. EldeRan monitors a set of actions performed by applications in their first phases of installation checking for characteristic signs of ransomware. Our tests over a dataset of 582 ransomware belonging to 11 families, and with 942 goodware applications, show that EldeRan achieves an area under the ROC curve of 0.995. Furthermore, EldeRan works without requiring that an entire ransomware family is available beforehand. These results suggest that dynamic analysis can support ransomware detection, since ransomware samples exhibit a set of characteristic features at run-time that are common across families, and that helps the early detection of new variants. 
We also outline some limitations of dynamic analysis for ransomware and propose possible solutions.", "title": "" }, { "docid": "7e44e32b6e19a884f12b2f4b337909ca", "text": "Many computational problems can be solved by multiple algorithms, with different algorithms fastest for different problem sizes, input distributions, and hardware characteristics. We consider the problem ofalgorithm selection: dynamically choose an algorithm to attack an instance of a problem with the goal of minimizing the overall execution time. We formulate the problem as a kind of Markov decision process (MDP), and use ideas from reinforcement learning to solve it. This paper introduces a kind of MDP that models the algorithm selection problem by allowing multiple state transitions. The well known Q-learning algorithm is adapted for this case in a way that combines both Monte-Carlo and Temporal Difference methods. Also, this work uses, and extends in a way to control problems, the Least-Squares Temporal Difference algorithm (LSTD ) of Boyan. The experimental study focuses on the classic problems of order statistic selection and sorting. The encouraging results reveal the potential of applying learning methods to traditional computational problems.", "title": "" }, { "docid": "06681674a5633b2d7c5c397867c1f042", "text": "The increasingly important role that technologies play in today's business success is well known. To ensure proper selection and development of the key technologies, a deliberate technology plan is needed. In this paper, a strategic technology planning framework is proposed. A hierarchical decision model and its sensitivity analysis are presented as two major steps of the framework to provide effective technology assessment and to generate technology scenarios. The hierarchical model links an organization's competitive goals and strategies in evaluating the technology alternativespsila overall contributions to business success; the sensitivity analysis helps to forecast and implement possible future changes in the economic environment, industry policies, and organization strategies. With the proposed framework, organizations can start to implement their technology plans synoptically and follow up with incremental adaptations as necessary. A case study on Taiwan's semiconductor foundry industry is presented to demonstrate the model in detail.", "title": "" }, { "docid": "26429dfbcf0562376b3308882d5efbea", "text": "This review discusses the methodology of the standardized on-the-road driving test and standard operation procedures to conduct the test and analyze the data. The on-the-road driving test has proven to be a sensitive and reliable method to examine driving ability after administration of central nervous system (CNS) drugs. The test is performed on a public highway in normal traffic. Subjects are instructed to drive with a steady lateral position and constant speed. Its primary parameter, the standard deviation of lateral position (SDLP), ie, an index of 'weaving', is a stable measure of driving performance with high test-retest reliability. SDLP differences from placebo are dose-dependent, and do not depend on the subject's baseline driving skills (placebo SDLP). 
It is important that standard operation procedures are applied to conduct the test and analyze the data in order to allow comparisons between studies from different sites.", "title": "" }, { "docid": "93177b2546e8efa1eccad4c81468f9fe", "text": "Online Transaction Processing (OLTP) databases include a suite of features - disk-resident B-trees and heap files, locking-based concurrency control, support for multi-threading - that were optimized for computer technology of the late 1970's. Advances in modern processors, memories, and networks mean that today's computers are vastly different from those of 30 years ago, such that many OLTP databases will now fit in main memory, and most OLTP transactions can be processed in milliseconds or less. Yet database architecture has changed little.\n Based on this observation, we look at some interesting variants of conventional database systems that one might build that exploit recent hardware trends, and speculate on their performance through a detailed instruction-level breakdown of the major components involved in a transaction processing database system (Shore) running a subset of TPC-C. Rather than simply profiling Shore, we progressively modified it so that after every feature removal or optimization, we had a (faster) working system that fully ran our workload. Overall, we identify overheads and optimizations that explain a total difference of about a factor of 20x in raw performance. We also show that there is no single \"high pole in the tent\" in modern (memory resident) database systems, but that substantial time is spent in logging, latching, locking, B-tree, and buffer management operations.", "title": "" } ]
scidocsrr
1a8df535ad1388fc0d355b9165aa32a4
SHREC ’16: Partial Matching of Deformable Shapes
[ { "docid": "9af22f6a1bbb4cbb13508b654e5fd7a5", "text": "We present a 3-D correspondence method to match the geometric extremities of two shapes which are partially isometric. We consider the most general setting of the isometric partial shape correspondence problem, in which shapes to be matched may have multiple common parts at arbitrary scales as well as parts that are not similar. Our rank-and-vote-and-combine algorithm identifies and ranks potentially correct matches by exploring the space of all possible partial maps between coarsely sampled extremities. The qualified top-ranked matchings are then subjected to a more detailed analysis at a denser resolution and assigned with confidence values that accumulate into a vote matrix. A minimum weight perfect matching algorithm is finally iterated to combine the accumulated votes into an optimal (partial) mapping between shape extremities, which can further be extended to a denser map. We test the performance of our method on several data sets and benchmarks in comparison with state of the art.", "title": "" } ]
[ { "docid": "5b08a93afae9cf64b5300c586bfb3fdc", "text": "Social interactions are characterized by distinct forms of interdependence, each of which has unique effects on how behavior unfolds within the interaction. Despite this, little is known about the psychological mechanisms that allow people to detect and respond to the nature of interdependence in any given interaction. We propose that interdependence theory provides clues regarding the structure of interdependence in the human ancestral past. In turn, evolutionary psychology offers a framework for understanding the types of information processing mechanisms that could have been shaped under these recurring conditions. We synthesize and extend these two perspectives to introduce a new theory: functional interdependence theory (FIT). FIT can generate testable hypotheses about the function and structure of the psychological mechanisms for inferring interdependence. This new perspective offers insight into how people initiate and maintain cooperative relationships, select social partners and allies, and identify opportunities to signal social motives.", "title": "" }, { "docid": "87e0bec51e1188b7c8ae88c2e111b2b5", "text": "For the last few years, the EC Commission has been reviewing its application of Article 82EC which prohibits the abuse of a dominant position on the Common Market. The review has resulted in a Communication from the EC Commission which for the first time sets out its enforcement priorities under Article 82EC. The review had been limited to the so-called ‘exclusionary’ abuses and excluded ‘exploitative’ abuses; the enforcement priorities of the EC Commission set out in the Guidance (2008) are also limited to ‘exclusionary’ abuses. This is, however, odd since the EC Commission expresses the objective of Article 82EC as enhancing consumer welfare: exploitative abuses can directly harm consumers unlike exclusionary abuses which can only indirectly harm consumers as the result of exclusion of competitors. This paper questions whether and under which circumstances exploitation can and/or should be found ‘abusive’. It argues that ‘exploitative’ abuse can and should be used as the test of anticompetitive effects on the market under an effects-based approach and thus conduct should only be found abusive if it is ‘exploitative’. Similarly, mere exploitation does not demonstrate harm to competition and without the latter, exploitation on its own should not be found abusive. December 2008", "title": "" }, { "docid": "28ec8c6e9166ae838cff90d776dbb102", "text": "During the past few years, many publications about computer applications in the field of drawing, classification and analysis of archaeological pottery have been presented at various congresses by various researchers. This paper will review and analyze the most relevant works published so far. It focuses on computer applications oriented towards the graphical visualization and analysis of data relevant to archaeological pottery. The intention is to order and systematize these data and to review those publications that are most relevant to computerized systems of archaeological pottery. This review and analysis will introduce the methodology used in the CATA project (Archaeological Wheel­made Pottery of Andalusia), the procedures used in the CATA project for the representation, archiving, analysis and retrieval of data concerning pottery vessels and their fragments. 
The main aim of the CATA project is to provide a scientific tool for the analysis of pottery finds in eastern Andalusia. These findings will be introduced into a database with documentational and graphical capabilities for visualizing pottery fragments and vessels. The objective is to create a general tool that can be applied to any kind of ceramic material found in any geographical location.", "title": "" }, { "docid": "543a4aacf3d0f3c33071b0543b699d3c", "text": "This paper describes a buffer sharing technique that strikes a balance between the use of disk bandwidth and memory in order to maximize the performance of a video-on-demand server. We make the key observation that the configuration parameters of the system should be independent of the physical characteristics of the data (e.g., popularity of a clip). Instead, the configuration parameters are fixed and our strategy adjusts itself dynamically at run-time to support a pattern of access to the video clips.", "title": "" }, { "docid": "8bd44a21a890e7c44fec4e56ddd39af2", "text": "This paper focuses on the problem of discovering users' topics of interest on Twitter. While previous efforts in modeling users' topics of interest on Twitter have focused on building a \"bag-of-words\" profile for each user based on his tweets, they overlooked the fact that Twitter users usually publish noisy posts about their lives or create conversation with their friends, which do not relate to their topics of interest. In this paper, we propose a novel framework to address this problem by introducing a modified author-topic model named twitter-user model. For each single tweet, our model uses a latent variable to indicate whether it is related to its author's interest. Experiments on a large dataset we crawled using Twitter API demonstrate that our model outperforms traditional methods in discovering user interest on Twitter.", "title": "" }, { "docid": "ae167d6e1ff2b1ee3bd23e3e02800fab", "text": "The aim of this paper is to improve the classification performance based on the multiclass imbalanced datasets. In this paper, we introduce a new resampling approach based on Clustering with sampling for Multiclass Imbalanced classification using Ensemble (C-MIEN). C-MIEN uses the clustering approach to create a new training set for each cluster. The new training sets consist of the new label of instances with similar characteristics. This step is applied to reduce the number of classes then the complexity problem can be easily solved by C-MIEN. After that, we apply two resampling techniques (oversampling and undersampling) to rebalance the class distribution. Finally, the class distribution of each training set is balanced and ensemble approaches are used to combine the models obtained with the proposed method through majority vote. Moreover, we carefully design the experiments and analyze the behavior of C-MIEN with different parameters (imbalance ratio and number of classifiers). The experimental results show that C-MIEN achieved higher performance than state-of-the-art methods.", "title": "" }, { "docid": "c7afa12d10877eb7397176f2c4ab143e", "text": "Software-defined networking (SDN) has received a great deal of attention from both academia and industry in recent years. Studies on SDN have brought a number of interesting technical discussions on network architecture design, along with scientific contributions. 
Researchers, network operators, and vendors are trying to establish new standards and provide guidelines for proper implementation and deployment of such novel approach. It is clear that many of these research efforts have been made in the southbound of the SDN architecture, while the northbound interface still needs improvements. By focusing in the SDN northbound, this paper surveys the body of knowledge and discusses the challenges for developing SDN software. We investigate the existing solutions and identify trends and challenges on programming for SDN environments. We also discuss future developments on techniques, specifications, and methodologies for programmable networks, with the orthogonal view from the software engineering discipline.", "title": "" }, { "docid": "f2c9e07a2a083c4c766291bb77677330", "text": "The development of mobile cloud computing technology has made location-based service (LBS) increasingly more popular. Given the continuous requests to cloud LBS servers, the amounts of location and trajectory information collected by LBS servers are continuously increasing. Privacy awareness for LBS has been extensively studied in recent years. Among the privacy concerns about LBS, trajectory privacy preservation is particularly important. Based on privacy preservation models, previous work have mainly focused on peer-to-peer and centralized architectures. However, the burden on users is heavy in peer-to-peer architectures, because user devices need to communicate with LBS servers directly. In centralized architectures, a trusted third party (TTP) is introduced, and acts as a bridge between users and the LBS server. Anonymity technologies, such as k-anonymity, mix-zone, and dummy technologies, are usually implemented by the TTP to ensure safety. There are certain drawbacks in TTP architectures: Users have no physical control of the TTP. Moreover, the TTP is more attractive to adversaries, because substantially more sensitive information is stored by the TTP. To solve the above-mentioned problems, in this paper, we propose a fog structure to store partial important data with the dummy anonymity technology to ensure physical control, which can be considered as absolutely trust. Compared with cloud computing, fog computing is a promising technique that extends the cloud computing to the edge of a network. Moreover, fog computing provides local computation and storage abilities, wide geo-distribution, and support for mobility. Therefore, mobile users’ partial important information can be stored on a fog server to ensure better management. We take the principles of similarity, intersection, practicability, and correlation into consideration and design a dummy rotation algorithm with several properties. The effectiveness of the proposed method is validated through extensive simulations, which show that the proposed method can provide enhanced privacy preservation.", "title": "" }, { "docid": "bdb738a5df12bbd3862f0e5320856473", "text": "The Extended Kalman Filter (EKF) has become a standard technique used in a number of nonlinear estimation and machine learning applications. These include estimating the state of a nonlinear dynamic system, estimating parameters for nonlinear system identification (e.g., learning the weights of a neural network), and dual estimation (e.g., the ExpectationMaximization (EM) algorithm)where both states and parameters are estimated simultaneously. 
This paper points out the flaws in using the EKF, and introduces an improvement, the Unscented Kalman Filter (UKF), proposed by Julier and Uhlman [5]. A central and vital operation performed in the Kalman Filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF, the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF, in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. Our preliminary results were presented in [13]. In this paper, the algorithms are further developed and illustrated with a number of additional examples. This work was sponsored by the NSF under grant grant IRI-9712346", "title": "" }, { "docid": "c3c47c2e0c091916c8b2f4a0ca988f2f", "text": "Four experiments demonstrated implicit self-esteem compensation (ISEC) in response to threats involving gender identity (Experiment 1), implicit racism (Experiment 2), and social rejection (Experiments 3-4). Under conditions in which people might be expected to suffer a blow to self-worth, they instead showed high scores on 2 implicit self-esteem measures. There was no comparable effect on explicit self-esteem. However, ISEC was eliminated following self-affirmation (Experiment 3). Furthermore, threat manipulations increased automatic intergroup bias, but ISEC mediated these relationships (Experiments 2-3). Thus, a process that serves as damage control for the self may have negative social consequences. Finally, pretest anxiety mediated the relationship between threat and ISEC (Experiment 3), whereas ISEC negatively predicted anxiety among high-threat participants (Experiment 4), suggesting that ISEC may function to regulate anxiety. The implications of these findings for automatic emotion regulation, intergroup bias, and implicit self-esteem measures are discussed.", "title": "" }, { "docid": "226c2d8682aca7c8548b4245db519c28", "text": "In visual question answering (VQA), an algorithm must answer text-based questions about images. While multiple datasets for VQA have been created since late 2014, they all have flaws in both their content and the way algorithms are evaluated on them. As a result, evaluation scores are inflated and predominantly determined by answering easier questions, making it difficult to compare different methods. 
In this paper, we analyze existing VQA algorithms using a new dataset called the Task Driven Image Understanding Challenge (TDIUC), which has over 1.6 million questions organized into 12 different categories. We also introduce questions that are meaningless for a given image to force a VQA system to reason about image content. We propose new evaluation schemes that compensate for over-represented question-types and make it easier to study the strengths and weaknesses of algorithms. We analyze the performance of both baseline and state-of-the-art VQA models, including multi-modal compact bilinear pooling (MCB), neural module networks, and recurrent answering units. Our experiments establish how attention helps certain categories more than others, determine which models work better than others, and explain how simple models (e.g. MLP) can surpass more complex models (MCB) by simply learning to answer large, easy question categories.", "title": "" }, { "docid": "9a79af1c226073cc129087695295a4e5", "text": "This paper presents an effective approach for resume information extraction to support automatic resume management and routing. A cascaded information extraction (IE) framework is designed. In the first pass, a resume is segmented into a consecutive blocks attached with labels indicating the information types. Then in the second pass, the detailed information, such as Name and Address, are identified in certain blocks (e.g. blocks labelled with Personal Information), instead of searching globally in the entire resume. The most appropriate model is selected through experiments for each IE task in different passes. The experimental results show that this cascaded hybrid model achieves better F-score than flat models that do not apply the hierarchical structure of resumes. It also shows that applying different IE models in different passes according to the contextual structure is effective.", "title": "" }, { "docid": "d480813d8723b2e81ffc0747e02e32cc", "text": "In practice, multiple types of distortions are associated with an image quality degradation process. The existing machine learning (ML) based image quality assessment (IQA) approaches generally established a unified model for all distortion types, or each model is trained independently for each distortion type by using single-task learning, which lead to the poor generalization ability of the models as applied to practical image processing. There are often the underlying cross relatedness amongst these single-task learnings in IQA, which is ignored by the previous approaches. To solve this problem, we propose a multi-task learning framework to train IQA models simultaneously across individual tasks each of which concerns one distortion type. These relatedness can be therefore exploited to improve the generalization ability of IQA models from single-task learning. In addition, pairwise image quality rank instead of image quality rating is optimized in learning task. By mapping image quality rank to image quality rating, a novel no-reference (NR) IQA approach can be derived. 
The experimental results confirm that the proposed Multi-task Rank Learning based IQA (MRLIQ) approach is prominent among all state-of-the-art NR-IQA approaches.", "title": "" }, { "docid": "adad5599122e63cde59322b7ba46461b", "text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.", "title": "" }, { "docid": "e6e6eb1f1c0613a291c62064144ff0ba", "text": "Mobile phones have become the most popular way to communicate with other individuals. While cell phones have become less of a status symbol and more of a fashion statement, they have created an unspoken social dependency. Adolescents and young adults are more likely to engage in SMS messing, making phone calls, accessing the internet from their phone or playing a mobile driven game. Once pervaded by boredom, teenagers resort to instant connection, to someone, somewhere. Sensation seeking behavior has also linked adolescents and young adults to have the desire to take risks with relationships, rules and roles. Individuals seek out entertainment and avoid boredom at all times be it appropriate or inappropriate. Cell phones are used for entertainment, information and social connectivity. It has been demonstrated that individuals with low self – esteem use cell phones to form and maintain social relationships. They form an attachment with cell phone which molded their mind that they cannot function without their cell phone on a day-to-day basis. In this context, the study attempts to examine the extent of use of mobile phone and its influence on the academic performance of the students. A face to face survey using structured questionnaire was the method used to elicit the opinions of students between the age group of 18-25 years in three cities covering all the three regions the State of Andhra Pradesh in India. The survey was administered among 1200 young adults through two stage random sampling to select the colleges and respondents from the selected colleges, with 400 from each city. In Hyderabad, 201 males and 199 females participated in the survey. In Visakhapatnam, 192 males and 208 females participated. In Tirupati, 220 males and 180 females completed the survey. Two criteria were taken into consideration while choosing the participants for the survey. The participants are college-going and were mobile phone users. Each of the survey responses was entered and analyzed using SPSS software. 
The Statistical Package for Social Sciences (SPSS 16) had been used to work out the distribution of samples in terms of percentages for each specified parameter.", "title": "" }, { "docid": "247c8cd5e076809a208849abe4dce3e5", "text": "This paper deals with the application of a novel neural network technique, support vector machine (SVM), in !nancial time series forecasting. The objective of this paper is to examine the feasibility of SVM in !nancial time series forecasting by comparing it with a multi-layer back-propagation (BP) neural network. Five real futures contracts that are collated from the Chicago Mercantile Market are used as the data sets. The experiment shows that SVM outperforms the BP neural network based on the criteria of normalized mean square error (NMSE), mean absolute error (MAE), directional symmetry (DS) and weighted directional symmetry (WDS). Since there is no structured way to choose the free parameters of SVMs, the variability in performance with respect to the free parameters is investigated in this study. Analysis of the experimental results proved that it is advantageous to apply SVMs to forecast !nancial time series. ? 2001 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "04a2f0eb4ae1b86271186aeec5f34cba", "text": "The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.", "title": "" }, { "docid": "103b784d7cc23663584486fa3ca396bb", "text": "A single, stationary topic model such as latent Dirichlet allocation is inappropriate for modeling corpora that span long time periods, as the popularity of topics is likely to change over time. A number of models that incorporate time have been proposed, but in general they either exhibit limited forms of temporal variation, or require computationally expensive inference methods. In this paper we propose non-parametric Topics over Time (npTOT), a model for time-varying topics that allows an unbounded number of topics and flexible distribution over the temporal variations in those topics’ popularity. We develop a collapsed Gibbs sampler for the proposed model and compare against existing models on synthetic and real document sets.", "title": "" }, { "docid": "e6e34a487a006aa38a98573b34b9c437", "text": "In this paper, we study the problem of training largescale face identification model with imbalanced training data. 
This problem naturally exists in many real scenarios including large-scale celebrity recognition, movie actor annotation, etc. Our solution contains two components. First, we build a face feature extraction model, and improve its performance, especially for the persons with very limited training samples, by introducing a regularizer to the cross entropy loss for the multinomial logistic regression (MLR) learning. This regularizer encourages the directions of the face features from the same class to be close to the direction of their corresponding classification weight vector in the logistic regression. Second, we build a multiclass classifier using MLR on top of the learned face feature extraction model. Since the standard MLR has poor generalization capability for the one-shot classes even if these classes have been oversampled, we propose a novel supervision signal called underrepresented-classes promotion loss, which aligns the norms of the weight vectors of the one-shot classes (a.k.a. underrepresented-classes) to those of the normal classes. In addition to the original cross entropy loss, this new loss term effectively promotes the underrepresented classes in the learned model and leads to a remarkable improvement in face recognition performance. We test our solution on the MS-Celeb-1M low-shot learning benchmark task. Our solution recognizes 94.89% of the test images at the precision of 99% for the one-shot classes. To the best of our knowledge, this is the best performance among all the published methods using this benchmark task with the same setup, including all the participants in the recent MS-Celeb-1M challenge at ICCV 2017.", "title": "" }, { "docid": "ebca43d1e96ead6d708327d807b9e72f", "text": "Weakly supervised semantic segmentation has been a subject of increased interest due to the scarcity of fully annotated images. We introduce a new approach for solving weakly supervised semantic segmentation with deep Convolutional Neural Networks (CNNs). The method introduces a novel layer which applies simplex projection on the output of a neural network using area constraints of class objects. The proposed method is general and can be seamlessly integrated into any CNN architecture. Moreover, the projection layer allows strongly supervised models to be adapted to weakly supervised models effortlessly by substituting ground truth labels. Our experiments have shown that applying such an operation on the output of a CNN improves the accuracy of semantic segmentation in a weakly supervised setting with image-level labels.", "title": "" } ]
scidocsrr
70012c0bf61c771762c481d738cebde1
Automatic Event Detection for Signal-based Surveillance
[ { "docid": "ea84c28e02a38caff14683681ea264d7", "text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression. While local anomaly is typically detected as a 3D pattern matching problem, we are more interested in global anomaly that involves multiple normal events interacting in an unusual manner such as car accident. To simultaneously detect local and global anomalies, we formulate the extraction of normal interactions from training video as the problem of efficiently finding the frequent geometric relations of the nearby sparse spatio-temporal interest points. A codebook of interaction templates is then constructed and modeled using Gaussian process regression. A novel inference method for computing the likelihood of an observed interaction is also proposed. As such, our model is robust to slight topological deformations and can handle the noise and data unbalance problems in the training data. Simulations show that our system outperforms the main state-of-the-art methods on this topic and achieves at least 80% detection rates based on three challenging datasets.", "title": "" } ]
[ { "docid": "56525ce9536c3c8ea03ab6852b854e95", "text": "The Distributed Denial of Service (DDoS) attacks are a serious threat in today's Internet where packets from large number of compromised hosts block the path to the victim nodes and overload the victim servers. In the newly proposed future Internet Architecture, Named Data Networking (NDN), the architecture itself has prevention measures to reduce the overload to the servers. This on the other hand increases the work and security threats to the intermediate routers. Our project aims at identifying the DDoS attack in NDN which is known as Interest flooding attack, mitigate the consequence of it and provide service to the legitimate users. We have developed a game model for the DDoS attacks and provide possible countermeasures to stop the flooding of interests. Through this game theory model, we either forward or redirect or drop the incoming interest packets thereby reducing the PIT table consumption. This helps in identifying the nodes that send malicious interest packets and eradicate their actions of sending malicious interests further. The main highlight of this work is that we have implemented the Game Theory model in the NDN architecture. It was primarily imposed for the IP internet architecture.", "title": "" }, { "docid": "1cc586730cf0c1fd57cf6ff7548abe24", "text": "Researchers have proposed various methods to extract 3D keypoints from the surface of 3D mesh models over the last decades, but most of them are based on geometric methods, which lack enough flexibility to meet the requirements for various applications. In this paper, we propose a new method on the basis of deep learning by formulating the 3D keypoint detection as a regression problem using deep neural network (DNN) with sparse autoencoder (SAE) as our regression model. Both local information and global information of a 3D mesh model in multi-scale space are fully utilized to detect whether a vertex is a keypoint or not. SAE can effectively extract the internal structure of these two kinds of information and formulate highlevel features for them, which is beneficial to the regression model. Three SAEs are used to formulate the hidden layers of the DNN and then a logistic regression layer is trained to process the high-level features produced in the third SAE. Numerical experiments show that the proposed DNN based 3D keypoint detection algorithm outperforms current five state-of-the-art methods for various 3D mesh models.", "title": "" }, { "docid": "3e5041c6883ce6ab59234ed2c8c995b7", "text": "Self-amputation of the penis treated immediately: case report and review of the literature. Self-amputation of the penis is rare in urological practice. It occurs more often in a context psychotic disease. It can also be secondary to alcohol or drugs abuse. Treatment and care vary according on the severity of the injury, the delay of consultation and the patient's mental state. The authors report a case of self-amputation of the penis in an alcoholic context. The authors analyze the etiological and urological aspects of this trauma.", "title": "" }, { "docid": "fcfebde52c63b9286791476673dc4b70", "text": "A chat dialogue system, a chatbot, or a conversational agent is a computer program designed to hold a conversation using natural language. Many popular chat dialogue systems are based on handcrafted rules, written in Artificial Intelligence Markup Language (AIML). 
However, a manual design of rules requires significant efforts, as in practice most chatbots require hundreds if not thousands of rules. This paper presents the method of automated extraction of AIML rules from real Twitter conversation data. Our preliminary experimental results show the possibility of obtaining natural-language conversation between the user and a dialogue system without the necessity of handcrafting its knowledgebase.", "title": "" }, { "docid": "139cd2b11e4126bfaa2522fdc812e066", "text": "We consider aspects pertinent to evaluating creativity to b e input, output and the process by which the output is achieved. These issues may be further divided, and we highlight associated justifications and controversies. Appropriate meth ods of measuring these aspects are suggested and discussed.", "title": "" }, { "docid": "d3e65fbcc3484f304f78039731f2ba30", "text": "Rademacher complexity is often used to characterize the learnability of a hypothesis class and is known to be related to the class size. We leverage this observation and introduce a new technique for estimating the size of an arbitrary weighted set, defined as the sum of weights of all elements in the set. Our technique provides upper and lower bounds on a novel generalization of Rademacher complexity to the weighted setting in terms of the weighted set size. This generalizes Massart’s Lemma, a known upper bound on the Rademacher complexity in terms of the unweighted set size. We show that the weighted Rademacher complexity can be estimated by solving a randomly perturbed optimization problem, allowing us to derive high-probability bounds on the size of any weighted set. We apply our method to the problems of calculating the partition function of an Ising model and computing propositional model counts (#SAT). Our experiments demonstrate that we can produce tighter bounds than competing methods in both the weighted and unweighted settings.", "title": "" }, { "docid": "dd6059d3317348863fdc46eef6142e83", "text": "CHiME-3 is a research community challenge organised in 2015 to evaluate speech recognition systems for mobile multi-microphone devices used in noisy daily environments. This paper describes NTT's CHiME-3 system, which integrates advanced speech enhancement and recognition techniques. Newly developed techniques include the use of spectral masks for acoustic beam-steering vector estimation and acoustic modelling with deep convolutional neural networks based on the \"network in network\" concept. In addition to these improvements, our system has several key differences from the official baseline system. The differences include multi-microphone training, dereverberation, and cross adaptation of neural networks with different architectures. The impacts that these techniques have on recognition performance are investigated. By combining these advanced techniques, our system achieves a 3.45% development error rate and a 5.83% evaluation error rate. Three simpler systems are also developed to perform evaluations with constrained set-ups.", "title": "" }, { "docid": "a14c840ec650a0760be382e1baa3cc12", "text": "As the smart grid initiative is pushing for smarter controls, the need for artificial intelligence-based decision-making tools, such as agent-based programs, is becoming more prevalent. However, these tools alone are not sufficient to study the behavior of the algorithms in simulations. They need to be interfaced with power systems analysis tools such as PowerWorld Simulator. 
This paper proposes a framework for co-simulation with two types of tools: a) MathWorks Matlab and PowerWorld, and b) the multiagent middleware JADE and PowerWorld. The source code for this framework has been made available in the public domain for the research community.", "title": "" }, { "docid": "36347412c7d30ae6fde3742bbc4f21b9", "text": "iii", "title": "" }, { "docid": "21c3f6d61eeeb4df1bdb500f388f71f3", "text": "Status of This Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract The Extensible Authentication Protocol (EAP), defined in RFC 3748, enables extensible network access authentication. This document specifies the EAP key hierarchy and provides a framework for the transport and usage of keying material and parameters generated by EAP authentication algorithms, known as \"methods\". It also provides a detailed system-level security analysis, describing the conditions under which the key management guidelines described in RFC 4962 can be satisfied.", "title": "" }, { "docid": "38dfeb7a0b906ec9894d2e03b56ad6e2", "text": "This article reviews recent research into the use of hierarchic agglomerative clustering methods for document retrieval. After an introduction to the calculation of interdocument similarities and to clustering methods that are appropriate for document clustering, the article discusses algorithms that can be used to allow the implementation of these methods on databases of nontrivial size. The validation of document hierarchies is described using tests based on the theory of random graphs and on empirical characteristics of document collections that are to be clustered. A range of search strategies is available for retrieval from document hierarchies and the results are presented of a series of research projects that have used these strategies to search the clusters resulting from several different types of hierarchic agglomerative clustering method. It is suggested that the complete linkage method is probably the most effective method in terms of retrieval performance; however, it is also difficult to implement in an efficient manner. Other applications of document clustering techniques are discussed briefly; experimental evidence suggests that nearest neighbor clusters, possibly represented as a network model, provide a reasonably efficient and effective means of including interdocument similarity information in document retrieval systems.", "title": "" }, { "docid": "43977abf063f974689065fe29945297a", "text": "In this short paper we propose several objective and subjective metrics and present a comparison between two “commodity” VR systems: HTC Vive and Oculus Rift. Objective assessment focuses on frame rate, impact of ambiance light, and impact of sensors' line of sight obstruction. Subjective study aims at evaluating and comparing the pick-and-place task performance in a virtual world. 
We collected user ratings of overall quality, perceived ease of use, and perceived intuitiveness, with results indicating that HTC Vive slightly outperforms the Oculus Rift for the pick-and-place task under test.", "title": "" }, { "docid": "172a1c752333b4b87136bb6323cd0373", "text": "We present a highly-flexible UIMA-based pipeline for developing structural kernelbased systems for relational learning from text, i.e., for generating training and test data for ranking, classifying short text pairs or measuring similarity between pieces of text. For example, the proposed pipeline can represent an input question and answer sentence pairs as syntacticsemantic structures, enriching them with relational information, e.g., links between question class, focus and named entities, and serializes them as training and test files for the tree kernel-based reranking framework. The pipeline generates a number of dependency and shallow chunkbased representations shown to achieve competitive results in previous work. It also enables easy evaluation of the models thanks to cross-validation facilities.", "title": "" }, { "docid": "aea6549564d08f383b5a3526173ad448", "text": "We demonstrated an excellent output power (Pout) density performance using a novel InAlGaN/GaN-HEMT with an 80-nm gate for a W-band amplifier. To eliminate current collapse, a unique double-layer silicon nitride (SiN) passivation film with oxidation resistance was adopted. The developed discrete GaN-HEMT achieved a Pout density of 3.0 W/mm at 96 GHz, and we fabricated W-band amplifier MMIC using the air-bridge wiring technology. The Pout density of the MMIC reached 3.6 W/mm at 86 GHz. We proved the potential of the developed InAlGaN/GaN-HEMT experimentally using our unique device technology. With the aim of future applications, we developed a novel wiring-inter-layer technology. It consists of a cavity structure and a moisture-resistant dielectric film technology. We demonstrated excellent high-frequency performances and low current collapse originating in humidity-degradation using AlGaN/GaN-HEMT. This is also a valuable technology for InAlGaN/GaN-HEMT.", "title": "" }, { "docid": "1decfffb283be978ff7c22e69f28cecc", "text": "Music Information Retrieval (MIR) is an interdisciplinary research area that has grown out of the need to manage burgeoning collections of music in digital form. Its diverse disciplinary communities, exemplified by the recently established ISMIR conference series, have yet to articulate a common research agenda or agree on methodological principles and metrics of success. In order for MIR to succeed, researchers need to work with real user communities and develop research resources such as reference music collections , so that the wide variety of techniques being developed in MIR can be meaningfully compared with one another. Out of these efforts, a common MIR practice can emerge.", "title": "" }, { "docid": "c16a6e967bec774cdefacc110753743e", "text": "In this letter, a top-gated field-effect device (FED) manufactured from monolayer graphene is investigated. Except for graphene deposition, a conventional top-down CMOS-compatible process flow is applied. Carrier mobilities in graphene pseudo-MOS structures are compared to those obtained from the top-gated Graphene-FEDs. The extracted values exceed the universal mobility of silicon and silicon-on-insulator MOSFETs", "title": "" }, { "docid": "5fc3cbcca7aba6f48da7df299de4abe2", "text": "1. 
We studied the responses of 103 neurons in visual area V4 of anesthetized macaque monkeys to two novel classes of visual stimuli, polar and hyperbolic sinusoidal gratings. We suspected on both theoretical and experimental grounds that these stimuli would be useful for characterizing cells involved in intermediate stages of form analysis. Responses were compared with those obtained with conventional Cartesian sinusoidal gratings. Five independent, quantitative analyses of neural responses were carried out on the entire population of cells. 2. For each cell, responses to the most effective Cartesian, polar, and hyperbolic grating were compared directly. In 18 of 103 cells, the peak response evoked by one stimulus class was significantly different from the peak response evoked by the remaining two classes. Of the remaining 85 cells, 74 had response peaks for the three stimulus classes that were all within a factor of 2 of one another. 3. An information-theoretic analysis of the trial-by-trial responses to each stimulus showed that all but two cells transmitted significant information about the stimulus set as a whole. Comparison of the information transmitted about each stimulus class showed that 23 of 103 cells transmitted a significantly different amount of information about one class than about the remaining two classes. Of the remaining 80 cells, 55 had information transmission rates for the three stimulus classes that were all within a factor of 2 of one another. 4. To identify cells that had orderly tuning profiles in the various stimulus spaces, responses to each stimulus class were fit with a simple Gaussian model. Tuning curves were successfully fit to the data from at least one stimulus class in 98 of 103 cells, and such fits were obtained for at least two classes in 87 cells. Individual neurons showed a wide range of tuning profiles, with response peaks scattered throughout the various stimulus spaces; there were no major differences in the distributions of the widths or positions of tuning curves obtained for the different stimulus classes. 5. Neurons were classified according to their response profiles across the stimulus set with two objective methods, hierarchical cluster analysis and multidimensional scaling. These two analyses produced qualitatively similar results. The most distinct group of cells was highly selective for hyperbolic gratings. The majority of cells fell into one of two groups that were selective for polar gratings: one selective for radial gratings and one selective for concentric or spiral gratings. There was no group whose primary selectivity was for Cartesian gratings. 6. To determine whether cells belonging to identified classes were anatomically clustered, we compared the distribution of classified cells across electrode penetrations with the distribution that would be expected if the cells were distributed randomly. Cells with similar response profiles were often anatomically clustered. 7. A position test was used to determine whether response profiles were sensitive to precise stimulus placement. A subset of Cartesian and non-Cartesian gratings was presented at several positions in and near the receptive field. The test was run on 13 cells from the present study and 28 cells from an earlier study. All cells showed a significant degree of invariance in their selectivity across changes in stimulus position of up to 0.5 classical receptive field diameters. 8. 
A length and width test was used to determine whether cells preferring non-Cartesian gratings were selective for Cartesian grating length or width. Responses to Cartesian gratings shorter or narrower than the classical receptive field were compared with those obtained with full-field Cartesian and non-Cartesian gratings in 29 cells. Of the four cells that had shown significant preferences for non-Cartesian gratings in the main test, none showed tuning for Cartesian grating length or width that would account for their non-Cartesian res", "title": "" }, { "docid": "0b6846c4dd89be21af70b144c93f7a7b", "text": "Most existing collaborative filtering models only consider the use of user feedback (e.g., ratings) and meta data (e.g., content, demographics). However, in most real world recommender systems, context information, such as time and social networks, are also very important factors that could be considered in order to produce more accurate recommendations. In this work, we address several challenges for the context aware movie recommendation tasks in CAMRa 2010: (1) how to combine multiple heterogeneous forms of user feedback? (2) how to cope with dynamic user and item characteristics? (3) how to capture and utilize social connections among users? For the first challenge, we propose a novel ranking based matrix factorization model to aggregate explicit and implicit user feedback. For the second challenge, we extend this model to a sequential matrix factorization model to enable time-aware parametrization. Finally, we introduce a network regularization function to constrain user parameters based on social connections. To the best of our knowledge, this is the first study that investigates the collective modeling of social and temporal dynamics. Experiments on the CAMRa 2010 dataset demonstrated clear improvements over many baselines.", "title": "" }, { "docid": "cc1876cf1d71be6c32c75bd2ded25e65", "text": "Traditional anomaly detection on social media mostly focuses on individual point anomalies while anomalous phenomena usually occur in groups. Therefore, it is valuable to study the collective behavior of individuals and detect group anomalies. Existing group anomaly detection approaches rely on the assumption that the groups are known, which can hardly be true in real world social media applications. In this article, we take a generative approach by proposing a hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD takes both pairwise and point-wise data as input, automatically infers the groups and detects group anomalies simultaneously. To account for the dynamic properties of the social media data, we further generalize GLAD to its dynamic extension d-GLAD. We conduct extensive experiments to evaluate our models on both synthetic and real world datasets. The empirical results demonstrate that our approach is effective and robust in discovering latent groups and detecting group anomalies.", "title": "" }, { "docid": "41c69d2cc40964e54d9ea8a8d4f5f154", "text": "In computer vision, action recognition refers to the act of classifying an action that is present in a given video and action detection involves locating actions of interest in space and/or time. Videos, which contain photometric information (e.g. RGB, intensity values) in a lattice structure, contain information that can assist in identifying the action that has been imaged. 
The process of action recognition and detection often begins with extracting useful features and encoding them to ensure that the features are specific to serve the task of action recognition and detection. Encoded features are then processed through a classifier to identify the action class and their spatial and/or temporal locations. In this report, a thorough review of various action recognition and detection algorithms in computer vision is provided by analyzing the two-step process of a typical action recognition and detection algorithm: (i) extraction and encoding of features, and (ii) classifying features into action classes. In efforts to ensure that computer vision-based algorithms reach the capabilities that humans have of identifying actions irrespective of various nuisance variables that may be present within the field of view, the state-of-the-art methods are reviewed and some remaining problems are addressed in the final chapter.", "title": "" } ]
scidocsrr
9b94ccd15468190ea42e0ba8b5f1241a
Protein Surface Representation and Comparison: New Approaches in Structural Proteomics
[ { "docid": "8c29f90a844a7f38d0b622d7729eaa9e", "text": "One of the challenges in 3D shape matching arises from the fact that in many applications, models should be considered to be the same if they differ by a rotation. Consequently, when comparing two models, a similarity metric implicitly provides the measure of similarity at the optimal alignment. Explicitly solving for the optimal alignment is usually impractical. So, two general methods have been proposed for addressing this issue: (1) Every model is represented using rotation invariant descriptors. (2) Every model is described by a rotation dependent descriptor that is aligned into a canonical coordinate system defined by the model. In this paper, we describe the limitations of canonical alignment and discuss an alternate method, based on spherical harmonics, for obtaining rotation invariant representations. We describe the properties of this tool and show how it can be applied to a number of existing, orientation dependent descriptors to improve their matching performance. The advantages of this tool are two-fold: First, it improves the matching performance of many descriptors. Second, it reduces the dimensionality of the descriptor, providing a more compact representation, which in turn makes comparing two models more efficient.", "title": "" } ]
[ { "docid": "169ea06b2ec47b77d01fe9a4d4f8a265", "text": "One of the main challenges in security today is defending against malware attacks. As trends and anecdotal evidence show, preventing these attacks, regardless of their indiscriminate or targeted nature, has proven difficult: intrusions happen and devices get compromised, even at security-conscious organizations. As a consequence, an alternative line of work has focused on detecting and disrupting the individual steps that follow an initial compromise and are essential for the successful progression of the attack. In particular, several approaches and techniques have been proposed to identify the command and control (C8C) channel that a compromised system establishes to communicate with its controller.\n A major oversight of many of these detection techniques is the design’s resilience to evasion attempts by the well-motivated attacker. C8C detection techniques make widespread use of a machine learning (ML) component. Therefore, to analyze the evasion resilience of these detection techniques, we first systematize works in the field of C8C detection and then, using existing models from the literature, go on to systematize attacks against the ML components used in these approaches.", "title": "" }, { "docid": "612271aa8848349735422395a91ffe7b", "text": "The contamination of groundwater by heavy metal, originating either from natural soil sources or from anthropogenic sources is a matter of utmost concern to the public health. Remediation of contaminated groundwater is of highest priority since billions of people all over the world use it for drinking purpose. In this paper, thirty five approaches for groundwater treatment have been reviewed and classified under three large categories viz chemical, biochemical/biological/biosorption and physico-chemical treatment processes. Comparison tables have been provided at the end of each process for a better understanding of each category. Selection of a suitable technology for contamination remediation at a particular site is one of the most challenging job due to extremely complex soil chemistry and aquifer characteristics and no thumb-rule can be suggested regarding this issue. In the past decade, iron based technologies, microbial remediation, biological sulphate reduction and various adsorbents played versatile and efficient remediation roles. Keeping the sustainability issues and environmental ethics in mind, the technologies encompassing natural chemistry, bioremediation and biosorption are recommended to be adopted in appropriate cases. In many places, two or more techniques can work synergistically for better results. Processes such as chelate extraction and chemical soil washings are advisable only for recovery of valuable metals in highly contaminated industrial sites depending on economical feasibility.", "title": "" }, { "docid": "15dbf1ad05c8219be484c01145c09b6c", "text": "In this paper, we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d dimensional feature vectors, we prove an O ( √ Td ln(KT ln(T )/δ) ) regret bound that holds with probability 1− δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. 
We also prove a lower bound of Ω(√Td) for this setting, matching the upper bound up to logarithmic factors.", "title": "" }, { "docid": "291c0e0936335f8de3b3944a97b47e25", "text": "The k-nearest neighbor (k-NN) is a traditional method and one of the simplest methods for classification problems. Even so, results obtained through k-NN had been promising in many different fields. Therefore, this paper presents the study on blasts classifying in acute leukemia into two major forms which are acute myelogenous leukemia (AML) and acute lymphocytic leukemia (ALL) by using k-NN. 12 main features that represent size, color-based and shape were extracted from acute leukemia blood images. The k values and distance metric of k-NN were tested in order to find suitable parameters to be applied in the method of classifying the blasts. Results show that by having k = 4 and applying cosine distance metric, the accuracy obtained could reach up to 80%. Thus, k-NN is applicable in the classification problem.", "title": "" }, { "docid": "4261e44dad03e8db3c0520126b9c7c4d", "text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a way that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.", "title": "" }, { "docid": "d64179da43db5f5bd15ff7e31e38d391", "text": "Real-world graph applications are typically domain-specific and model complex business processes in the property graph data model. To implement a domain-specific graph algorithm in the context of such a graph application, simply providing a set of built-in graph algorithms is usually not sufficient nor does it allow algorithm customization to the user's needs. To cope with these issues, graph database vendors provide---in addition to their declarative graph query languages---procedural interfaces to write user-defined graph algorithms.\n In this paper, we introduce GraphScript, a domain-specific graph query language tailored to serve advanced graph analysis tasks and the specification of complex graph algorithms. We describe the major language design of GraphScript, discuss graph-specific optimizations, and describe the integration into an enterprise data platform.", "title": "" }, { "docid": "35fbe0b70445a04ae39d345107c89269", "text": "Pycnodysostosis, an autosomal recessive osteochondrodysplasia characterized by osteosclerosis and short stature, maps to chromosome 1q21. Cathepsin K, a cysteine protease gene that is highly expressed in osteoclasts, localized to the pycnodysostosis region. 

Nonsense, missense, and stop codon mutations in the gene encoding cathepsin K were identified in patients. Transient expression of complementary DNA containing the stop codon mutation resulted in messenger RNA but no immunologically detectable protein. Thus, pycnodysostosis results from gene defects in a lysosomal protease with highest expression in osteoclasts. These findings suggest that cathepsin K is a major protease in bone resorption, providing a possible rationale for the treatment of disorders such as osteoporosis and certain forms of arthritis.", "title": "" }, { "docid": "92b4a18334345b55aae40b99adcc3840", "text": "Online social networks (OSNs) are becoming increasingly popular and Identity Clone Attacks (ICAs) that aim at creating fake identities for malicious purposes on OSNs are becoming a significantly growing concern. Such attacks severely affect the trust relationships a victim has built with other users if no active protection is applied. In this paper, we first analyze and characterize the behaviors of ICAs. Then we propose a detection framework that is focused on discovering suspicious identities and then validating them. Towards detecting suspicious identities, we propose two approaches based on attribute similarity and similarity of friend networks. The first approach addresses a simpler scenario where mutual friends in friend networks are considered; and the second one captures the scenario where similar friend identities are involved. We also present experimental results to demonstrate flexibility and effectiveness of the proposed approaches. Finally, we discuss some feasible solutions to validate suspicious identities.", "title": "" }, { "docid": "76454b3376ec556025201a2f694e1f1c", "text": "Recurrent neural networks (RNNs) provide state-of-the-art accuracy for performing analytics on datasets with sequence (e.g., language model). This paper studied a state-of-the-art RNN variant, Gated Recurrent Unit (GRU). We first proposed memoization optimization to avoid 3 out of the 6 dense matrix vector multiplications (SGEMVs) that are the majority of the computation in GRU. Then, we study the opportunities to accelerate the remaining SGEMVs using FPGAs, in comparison to 14-nm ASIC, GPU, and multi-core CPU. Results show that FPGA provides superior performance/Watt over CPU and GPU because FPGA's on-chip BRAMs, hard DSPs, and reconfigurable fabric allow for efficiently extracting fine-grained parallelisms from small/medium size matrices used by GRU. Moreover, newer FPGAs with more DSPs, on-chip BRAMs, and higher frequency have the potential to narrow the FPGA-ASIC efficiency gap.", "title": "" }, { "docid": "32b4b275dc355dff2e3e168fe6355772", "text": "The management of coupon promotions is an important issue for marketing managers since it still is the major promotion medium. However, the distribution of coupons does not go without problems. Although manufacturers and retailers are investing heavily in the attempt to convince as many customers as possible, overall coupon redemption rate is low. This study improves the strategy of retailers and manufacturers concerning their target selection since both parties often end up in a battle for customers. Two separate models are built: one model makes predictions concerning redemption behavior of coupons that are distributed by the retailer while another model does the same for coupons handed out by manufacturers. 
By means of the feature-selection technique ‘Relief-F’ the dimensionality of the models is reduced, since it searches for the variables that are relevant for predicting the outcome. In this way, redundant variables are not used in the model-building process. The model is evaluated on real-life data provided by a retailer in FMCG. The contributions of this study for retailers as well as manufacturers are threefold. First, the possibility to classify customers concerning their coupon usage is shown. In addition, it is demonstrated that retailers and manufacturers can stay clear of each other in their marketing campaigns. Finally, the feature-selection technique ‘Relief-F’ proves to facilitate and optimize the performance of the models.", "title": "" }, { "docid": "72bb2c55ef03969aa89d4d688fc4f43e", "text": "The problem of charge sensitive amplifier and pole-zero cancellation circuit designed in CMOS technology for high rates of input pulses is considered. The continuously sensitive charge amplifier uses a MOS transistor biased in triode region to discharge the integration capacitance. Low noise requirements of the front-end electronics place the feedback CSA resistance in hundreds of the megaohm range. However the high counting rate of input pulses generates a DC voltage shift at the CSA output which could degrade the circuit performance. We analyze two circuit architectures for biasing transistors in feedback of CSA and PZC circuit taking into account the pile-up effects in the signal processing chain.", "title": "" }, { "docid": "7b7776af302df446c7ced33eba386a12", "text": "BACKGROUND\nEmerging research from psychology and the bio-behavioral sciences recognizes the importance of supporting patients to mobilize their personal strengths to live well with chronic illness. Positive technology and positive computing could be used as underlying design approaches to guide design and development of new technology-based interventions for this user group that support mobilizing their personal strengths.\n\n\nOBJECTIVE\nA codesigning workshop was organized with the aim to explore user requirements and ideas for how technology can be used to help people with chronic illness activate their personal strengths in managing their everyday challenges.\n\n\nMETHODS\nThirty-five participants from diverse backgrounds (patients, health care providers, designers, software developers, and researchers) participated. The workshop combined principles of (1) participatory and service design to enable meaningful participation and collaboration of different stakeholders and (2) an appreciative inquiry methodology to shift participants' attention to positive traits, values, and aspects that are meaningful and life-giving and stimulate participants' creativity, engagement, and collaboration. Utilizing these principles, participants were engaged in group activities to develop ideas for strengths-supportive tools. Each group consisted of 3-8 participants with different backgrounds. All group work was analysed using thematic analyses.\n\n\nRESULTS\nParticipants were highly engaged in all activities and reported a wide variety of requirements and ideas, including more than 150 personal strength examples, more than 100 everyday challenges that could be addressed by using personal strengths, and a wide range of functionality requirements (eg, social support, strength awareness and reflection, and coping strategies). 6 concepts for strength-supportive tools were created. 
These included the following: a mobile app to support a person to store, reflect on, and mobilize one's strengths (Strengths treasure chest app); \"empathy glasses\" enabling a person to see a situation from another person's perspective (Empathy Simulator); and a mobile app allowing a person to receive supportive messages from close people in a safe user-controlled environment (Cheering squad app). Suggested design elements for making the tools engaging included: metaphors (eg, trees, treasure island), visualization techniques (eg, dashboards, color coding), and multimedia (eg, graphics). Maintaining a positive focus throughout the tool was an important requirement, especially for feedback and framing of content.\n\n\nCONCLUSIONS\nCombining participatory, service design, and appreciative inquiry methods was highly useful to engage participants in creating innovative ideas. Building on people's core values and positive experiences empowered the participants to expand their horizons from addressing problems and symptoms, which is a very common approach in health care today, to focusing on their capacities and that which is possible, despite their chronic illness. The ideas and user requirements, combined with insights from relevant theories (eg, positive technology, self-management) and evidence from the related literature, are critical to guide the development of future more personalized and strengths-focused self-management tools.", "title": "" }, { "docid": "4f73815cc6bbdfbacee732d8724a3f74", "text": "Networks can be considered as approximation schemes. Multilayer networks of the perceptron type can approximate arbitrarily well continuous functions (Cybenko 1988, 1989; Funahashi 1989; Stinchcombe and White 1989). We prove that networks derived from regularization theory and including Radial Basis Functions (Poggio and Girosi 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property of best approximation. The main result of this paper is that multilayer perceptron networks, of the type used in backpropagation, do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of best approximation.", "title": "" }, { "docid": "bf21e1b7a41e9e3a5ede61a61aed699d", "text": "In this paper classification and association rule mining algorithms are discussed and demonstrated. Particularly, the problem of association rule mining, and the investigation and comparison of popular association rules algorithms. The classic problem of classification in data mining will be also discussed. The paper also considers the use of association rule mining in classification approach in which a recently proposed algorithm is demonstrated for this purpose. Finally, a comprehensive experimental study against 13 UCI data sets is presented to evaluate and compare traditional and association rule based classification techniques with regards to classification accuracy, number of derived rules, rules features and processing time.", "title": "" }, { "docid": "a064ad01edd6a369d939736e04831e50", "text": "Asthma is frequently undertreated, resulting in a relatively high prevalence of patients with uncontrolled disease, characterized by the presence of symptoms and risk of adverse outcomes. 

Patients with uncontrolled asthma have a higher risk of morbidity and mortality, underscoring the importance of identifying uncontrolled disease and modifying management plans to improve control. Several assessment tools exist to evaluate control with various cutoff points and measures, but these tools do not reliably correlate with physiological measures and should be considered a supplement to physiological tests. When attempting to improve control in patients, nonpharmacological interventions should always be attempted before changing or adding pharmacotherapies. Among patients with severe, uncontrolled asthma, individualized treatment based on asthma phenotype and eosinophil presence should be considered. The efficacy of the anti-IgE antibody omalizumab has been well established for patients with allergic asthma, and novel biologic agents targeting IL-5, IL-13, IL-4, and other allergic pathways have been investigated for patients with allergic or eosinophilic asthma. Fevipiprant (a CRTH2 [chemokine receptor homologous molecule expressed on Th2 cells] antagonist) and imatinib (a tyrosine kinase inhibitor) are examples of nonbiologic therapies that may be useful for patients with severe, uncontrolled asthma. Incorporation of new and emerging treatment into therapeutic strategies for patients with severe asthma may improve outcomes for this patient population.", "title": "" }, { "docid": "3ef63d103a27598c2350abdacf56dbcd", "text": "Neuromodulation shows increasing promise in the treatment of psychiatric disorders, particularly obsessive–compulsive disorder (OCD). Development of tools and techniques including deep brain stimulation, transcranial magnetic stimulation, and electroconvulsive therapy may yield additional options for patients who fail to respond to standard treatments. This article reviews the motivation for and use of these treatments in OCD. We begin with a brief description of the illness followed by discussion of the circuit models thought to underlie the disorder. These circuits provide targets for intervention. Basal ganglia and thalamocortical pathophysiology, including cortico-striato-thalamo-cortical loops, is a focus of this discussion. Neuroimaging findings and historical treatments that led to the use of neuromodulation for OCD are presented. We then present evidence from neuromodulation studies using deep brain stimulation, electroconvulsive therapy, and transcranial magnetic stimulation, with targets including nucleus accumbens, subthalamic nucleus, inferior thalamic peduncle, dorsolateral prefrontal cortex, supplementary motor area, and orbitofrontal cortex. Finally, we explore potential future neuromodulation approaches that may further refine and improve treatment.", "title": "" }, { "docid": "8e1a65dd8bf9d8a4b67c46a0067ca42d", "text": "Reading Genetic Programming II: Automatic Discovery of Reusable Programs (GPII) in its entirety is not a task for the weak-willed because the book without appendices is about 650 pages. An entire previous book by the same author [1] is devoted to describing Genetic Programming (GP), while this book is a sequel extolling an extension called Automatically Defined Functions (ADFs). The author, John R. Koza, argues that ADFs can be used in conjunction with GP to improve its efficacy on large problems. 

\"An automatically defined function (ADF) is a function (i.e., subroutine, procedure, module) that is dynamically evolved during a run of genetic programming and which may be called by a calling program (e.g., a main program) that is simultaneously being evolved\" (p. 1). Dr. Koza recommends adding the ADF technique to the \"GP toolkit.\" The book presents evidence that it is possible to interpret GP with ADFs as performing either a top-down process of problem decomposition or a bottom-up process of representational change to exploit identified regularities. This is stated as Main Point 1. Main Point 2 states that ADFs work by exploiting inherent regularities, symmetries, patterns, modularities, and homogeneities within a problem, though perhaps in ways that are very different from the style of programmers. Main Points 3 to 7 are appropriately qualified statements to the effect that, with a variety of problems, ADFs pay off be-", "title": "" }, { "docid": "e0bb1bdcba38bcfbcc7b2da09cd05a3f", "text": "Reconstructing the 3D surface from a set of provided range images – acquired by active or passive sensors – is an important step to generate faithful virtual models of real objects or environments. Since several approaches for high quality fusion of range images are already known, the runtime efficiency of the respective methods are of increased interest. In this paper we propose a highly efficient method for range image fusion resulting in very accurate 3D models. We employ a variational formulation for the surface reconstruction task. The global optimal solution can be found by gradient descent due to the convexity of the underlying energy functional. Further, the gradient descent procedure can be parallelized, and consequently accelerated by graphics processing units. The quality and runtime performance of the proposed method is demonstrated on wellknown multi-view stereo benchmark datasets.", "title": "" }, { "docid": "af9137900cd3fe09d9bea87f38324b80", "text": "The cognitive walkthrough is a technique for evaluating the design of a user interface, with speciaJ attention to how well the interface supports “exploratory learning,” i.e., first-time use without formal training. The evaluation can be performed by the system’s designers in the e,arly stages of design, before empirical user testing is possible. Early versions of the walkthrough method relied on a detailed series of questions, to be answered on paper or electronic forms. This tutorial presents a simpler method, founded in an understanding of the cognitive theory that describes a user’s interactions with a system. The tutorial refines the method on the basis of recent empirical and theoretical studies of exploratory learning with display-based interfaces. The strengths and limitations of the walkthrough method are considered, and it is placed into the context of a more complete design approach.", "title": "" }, { "docid": "48966a0436405a6656feea3ce17e87c3", "text": "Complex regional pain syndrome (CRPS) is a chronic, intensified localized pain condition that can affect children and adolescents as well as adults, but is more common among adolescent girls. Symptoms include limb pain; allodynia; hyperalgesia; swelling and/or changes in skin color of the affected limb; dry, mottled skin; hyperhidrosis and trophic changes of the nails and hair. The exact mechanism of CRPS is unknown, although several different mechanisms have been suggested. The diagnosis is clinical, with the aid of the adult criteria for CRPS. 
Standard care consists of a multidisciplinary approach with the implementation of intensive physical therapy in conjunction with psychological counseling. Pharmacological treatments may aid in reducing pain in order to allow the patient to participate fully in intensive physiotherapy. The prognosis in pediatric CRPS is favorable.", "title": "" } ]
scidocsrr
b64a7f19563176e5ac3229d07a45e985
Three-Phase Dual-Buck Inverter With Unified Pulsewidth Modulation
[ { "docid": "803b3d29c5514865cd8e17971f2dd8d6", "text": "This paper comprehensively analyzes the relationship between space-vector modulation and three-phase carrier-based pulsewidth modualtion (PWM). The relationships involved, such as the relationship between modulation signals (including zero-sequence component and fundamental components) and space vectors, the relationship between the modulation signals and the space-vector sectors, the relationship between the switching pattern of space-vector modulation and the type of carrier, and the relationship between the distribution of zero vectors and different zero-sequence signal are systematically established. All the relationships provide a bidirectional bridge for the transformation between carrier-based PWM modulators and space-vector modulation modulators. It is shown that all the drawn conclusions are independent of the load type. Furthermore, the implementations of both space-vector modulation and carrier-based PWM in a closed-loop feedback converter are discussed.", "title": "" } ]
[ { "docid": "6f3938e2951996d4f41a5fa6e8c71aad", "text": "Online Social Networks (OSNs), such as Facebook and Twitter, have become an integral part of our daily lives. There are hundreds of OSNs, each with its own focus in that each offers particular services and functionalities. Recent studies show that many OSN users create several accounts on multiple OSNs using the same or different personal information. Collecting all the available data of an individual from several OSNs and fusing it into a single profile can be useful for many purposes. In this paper, we introduce novel machine learning based methods for solving Entity Resolution (ER), a problem for matching user profiles across multiple OSNs. The presented methods are able to match between two user profiles from two different OSNs based on supervised learning techniques, which use features extracted from each one of the user profiles. By using the extracted features and supervised learning techniques, we developed classifiers which can perform entity matching between two profiles for the following scenarios: (a) matching entities across two OSNs; (b) searching for a user by similar name; and (c) de-anonymizing a user’s identity. The constructed classifiers were tested by using data collected from two popular OSNs, Facebook and Xing. We then evaluated the classifiers’ performances using various evaluation measures, such as true and false positive rates, accuracy, and the Area Under the receiver operator Curve (AUC). The constructed classifiers were evaluated and their classification performance measured by AUC was quite remarkable, with an AUC of up to 0.982 and an accuracy of up to 95.9% in identifying user profiles across two OSNs.", "title": "" }, { "docid": "455068ecca4db680a8cd65bf127cfc91", "text": "OBJECTIVES\nLoneliness is common among older persons and has been associated with health and mental health risks. This systematic review examines the utility of loneliness interventions among older persons.\n\n\nDATA SOURCE\nThirty-four intervention studies were used. STUDY INCLUSION CRITERIA: The study was conducted between 1996 and 2011, included a sample of older adults, implemented an intervention affecting loneliness or identified a situation that directly affected loneliness, included in its outcome measures the effects of the intervention or situation on loneliness levels or on loneliness-related measures (e.g., social interaction), and included in its analysis pretest-posttest comparisons.\n\n\nDATA EXTRACTION\nStudies were accessed using the databases PsycINFO, MEDLINE, ScienceDirect, AgeLine, PsycBOOKS, and Google Scholar for the years 1996-2011.\n\n\nDATA SYNTHESIS\nInterventions were classified based on population, format, and content and were evaluated for quality of design and efficacy.\n\n\nRESULTS\nTwelve studies were effective in reducing loneliness according to the review criteria, and 15 were evaluated as potentially effective. The findings suggest that it is possible to reduce loneliness by using educational interventions focused on social networks maintenance and enhancement.\n\n\nCONCLUSIONS\nMultiple approaches show promise, although flawed design often prevents proper evaluation of efficacy. The value of specific therapy techniques in reducing loneliness is highlighted and warrants a wider investigation. 
Studies of special populations, such as the cognitively impaired, are also needed.", "title": "" }, { "docid": "84be70157c6a6707d8c5621c9b7aed82", "text": "Depression is associated with significant disability, mortality and healthcare costs. It is the third leading cause of disability in high-income countries, 1 and affects approximately 840 million people worldwide. 2 Although biological, psychological and environmental theories have been advanced, 3 the underlying pathophysiology of depression remains unknown and it is probable that several different mechanisms are involved. Vitamin D is a unique neurosteroid hormone that may have an important role in the development of depression. Receptors for vitamin D are present on neurons and glia in many areas of the brain including the cingulate cortex and hippocampus, which have been implicated in the pathophysiology of depression. 4 Vitamin D is involved in numerous brain processes including neuroimmuno-modulation, regulation of neurotrophic factors, neuroprotection, neuroplasticity and brain development, 5 making it biologically plausible that this vitamin might be associated with depression and that its supplementation might play an important part in the treatment of depression. Over two-thirds of the populations of the USA and Canada have suboptimal levels of vitamin D. 6,7 Some studies have demonstrated a strong relationship between vitamin D and depression, 8,9 whereas others have shown no relationship. 10,11 To date there have been eight narrative reviews on this topic, 12–19 with the majority of reviews reporting that there is insufficient evidence for an association between vitamin D and depression. None of these reviews used a comprehensive search strategy, provided inclusion or exclusion criteria, assessed risk of bias or combined study findings. In addition, several recent studies were not included in these reviews. 9,10,20,21 Therefore, we undertook a systematic review and meta-analysis to investigate whether vitamin D deficiency is associated with depression in adults in case–control and cross-sectional studies; whether vitamin D deficiency increases the risk of developing depression in cohort studies in adults; and whether vitamin D supplementation improves depressive symptoms in adults with depression compared with placebo, or prevents depression compared with placebo, in healthy adults in randomised controlled trials (RCTs). We searched the databases MEDLINE, EMBASE, PsycINFO, CINAHL, AMED and Cochrane CENTRAL (up to 2 February 2011) using separate comprehensive strategies developed in consultation with an experienced research librarian (see online supplement DS1). A separate search of PubMed identified articles published electronically prior to print publication within 6 months of our search and therefore not available through MEDLINE. The clinical trials registries clinicaltrials.gov and Current Controlled Trials (controlled-trials.com) were searched for unpublished data. The reference lists …", "title": "" }, { "docid": "9c25a2e343e9e259a9881fd13983c150", "text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. 
The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.", "title": "" }, { "docid": "1ee063329b62404e22d73a4f5996332d", "text": "High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that gets pronounced as the signal space dimension gets large (e.g., due to large bandwidth or large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown in the paper that the proposed approach, which is termed as compressed channel sensing (CCS), can potentially achieve a target reconstruction error using far less energy and, in many instances, latency and bandwidth than that dictated by the traditional least-squares-based training methods.", "title": "" }, { "docid": "76d59eaa0e2862438492b55f893ceea3", "text": "The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on‐site and on‐time. At this point, the use of smart cameras ‐ of which the popularity has been increasing ‐ is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image‐processing algorithms. Since the images are not transmitted to a distance processing unit but rather are processed inside the camera, it does not necessitate high‐ bandwidth networks or high processor powered systems; it can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, one of the most important is for face detection and recognition. A number of face detection and recognition methods have been proposed recently and many of these methods have been tested on general‐purpose processors. In smart cameras ‐ which are real‐life applications of such methods ‐ the widest use is on DSPs. In the present study, the Viola‐Jones face detection method ‐ which was reported to run faster on PCs ‐ was optimized for DSPs; the face recognition method was combined with the developed sub‐region and mask‐based DCT (Discrete Cosine Transform). As the employed DSP is a fixed‐point processor, the processes were performed with integers insofar as it was possible. 
To enable face recognition, the image was divided into sub‐regions and from each sub‐region the robust coefficients against disruptive elements ‐ like face expression, illumination, etc. ‐ were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its operational convenience, codes that were optimized for a DSP received a functional test after the computer simulation. In these functional tests, the face recognition system attained a 97.4% success rate on the most popular face database: the FRGC.", "title": "" }, { "docid": "619f38266a35e76a77fb4141879e1e68", "text": "In this article, various approaches to measuring the efficiency of innovations, and the problems arising in their measurement, are considered; a fuzzy inference system is proposed for obtaining recommendations about measuring the efficiency of innovations.", "title": "" }, { "docid": "53d04c06efb468e14e2ee0b485caf66f", "text": "The analysis of time-oriented data is an important task in many application scenarios. In recent years, a variety of techniques for visualizing such data have been published. This variety makes it difficult for prospective users to select methods or tools that are useful for their particular task at hand. In this article, we develop and discuss a systematic view on the diversity of methods for visualizing time-oriented data. With the proposed categorization we try to untangle the visualization of time-oriented data, which is such an important concern in Visual Analytics. The categorization is not only helpful for users, but also for researchers to identify future tasks in Visual Analytics.", "title": "" }, { "docid": "d3b248232b7a01bba1d165908f55a316", "text": "Two views of bilingualism are presented--the monolingual or fractional view which holds that the bilingual is (or should be) two monolinguals in one person, and the bilingual or wholistic view which states that the coexistence of two languages in the bilingual has produced a unique and specific speaker-hearer. These views affect how we compare monolinguals and bilinguals, study language learning and language forgetting, and examine the speech modes--monolingual and bilingual--that characterize the bilingual's everyday interactions. The implications of the wholistic view on the neurolinguistics of bilingualism, and in particular bilingual aphasia, are discussed.", "title": "" }, { "docid": "7078d24d78abf6c46a6bc8c2213561c4", "text": "In the past two decades, a new form of scholarship has appeared in which researchers present an overview of previously conducted research syntheses on the same topic. In these efforts, research syntheses are the principal units of evidence. Overviews of reviews introduce unique problems that require unique solutions. This article describes what methods overviewers have developed or have adopted from other forms of scholarship. These methods concern how to (a) define the broader problem space of an overview, (b) conduct literature searches that specifically look for research syntheses, (c) address the overlap in evidence in related reviews, (d) evaluate the quality of both primary research and research syntheses, (e) integrate the outcomes of research syntheses, especially when they produce discordant results, (f) conduct a second-order meta-analysis, and (g) present findings. 

The limitations of overviews are also discussed, especially with regard to the age of the included evidence.", "title": "" }, { "docid": "53049f1514bc03368b8c2a0b18518100", "text": "The Protein Data Bank (PDB; http://www.rcsb.org/pdb/ ) is the single worldwide archive of structural data of biological macromolecules. This paper describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and near-term plans for the future development of the resource.", "title": "" }, { "docid": "2427019698358950791ee46506a28e7b", "text": "This article describes a novel way of combining data mining techniques on Internet data in order to discover actionable marketing intelligence in electronic commerce scenarios. The data that is considered not only covers various types of server and web meta information, but also marketing data and knowledge. Furthermore, heterogeneity resolution thereof and Internet- and electronic commerce-specific pre-processing activities are embedded. A generic web log data hypercube is formally defined and schematic designs for analytical and predictive activities are given. From these materialised views, various online analytical web usage data mining techniques are shown, which include marketing expertise as domain knowledge and are specifically designed for electronic commerce purposes.", "title": "" }, { "docid": "289694f2395a6a2afc7d86d475b9c02d", "text": "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.", "title": "" }, { "docid": "e9782d003112c64c3dc41c1f2a5c641e", "text": "Osgood-Schlatter's disease is a well known entity affecting the adolescent knee. Radiologic examination of the knee has been an integral part of the diagnosis of this condition for decades. However, the soft tissue changes have not been appreciated sufficiently. Emphasis is placed on the use of optimum radiographic technique and xeroradiography in the examination of the soft tissues of the knee.", "title": "" }, { "docid": "38003dcff86b5683d0793c21f3e1eccb", "text": "— In this paper a new location indoor system is presented, which shows the position and orientation of the user in closed environments, as well as the optimal route to his destination through location tags. 
This system is called Labelee, and it makes the interaction between users and devices easier through QR code scanning or NFC tag reading, because this technology is increasingly common in the latest smartphones. With this system, users could locate themselves within an enclosure with less interaction.", "title": "" }, { "docid": "0f6aaec52e4f7f299a711296992b5dba", "text": "This paper presents a simple and efficient method for online signature verification. The technique is based on a feature set comprising several histograms that can be computed efficiently given a raw data sequence of an online signature. The features which are represented by a fixed-length vector can not be used to reconstruct the original signature, thereby providing privacy to the user's biometric trait in case the stored template is compromised. To test the verification performance of the proposed technique, several experiments were conducted on the well known MCYT-100 and SUSIG datasets including both skilled forgeries and random forgeries. Experimental results demonstrate that the performance of the proposed technique is comparable to state-of-the-art algorithms despite its simplicity and efficiency.", "title": "" }, { "docid": "78377f1321d961445a388b049606728c", "text": "This paper presents a survey on techniques used in Autonomous Maze Solving Robot aka Micromouse. Autonomous movement is an important feature which allows a robot to move freely from one point to another without mediation from a human being. The micromouse is required to solve any kind of maze in the shortest interval of time. Autonomous movement within the unknown area requires the robot to investigate, situate and plan the outlying area. By solving a maze, the referenced algorithms and their pros and cons can be studied and analyzed.", "title": "" }, { "docid": "73ded3dd5e6b5abe5e882beb12312ea9", "text": "As deep learning methods form a critical part in commercially important applications such as autonomous driving and medical diagnostics, it is important to reliably detect out-of-distribution (OOD) inputs while employing these algorithms. In this work, we propose an OOD detection algorithm which comprises an ensemble of classifiers. We train each classifier in a self-supervised manner by leaving out a random subset of training data as OOD data and the rest as in-distribution (ID) data. We propose a novel margin-based loss over the softmax output which seeks to maintain at least a margin m between the average entropy of the OOD and in-distribution samples. In conjunction with the standard cross-entropy loss, we minimize the novel loss to train an ensemble of classifiers. We also propose a novel method to combine the outputs of the ensemble of classifiers to obtain OOD detection score and class prediction. Overall, our method convincingly outperforms Hendrycks et al. [7] and the current state-of-the-art ODIN [13] on several OOD detection benchmarks.", "title": "" }, { "docid": "229a541fa4b8e9157c8cc057ae028676", "text": "The proposed system introduces a new genetic algorithm for prediction of financial performance with input data sets from a financial domain. The goal is to produce a GA-based methodology for prediction of stock market performance along with an associative classifier from numerical data. This work restricts the numerical data to stock trading data. Stock trading data contains the quotes of the stock market. 

From this information, many technical indicators can be extracted, and by investigating the relations between these indicators trading signals can be discovered. A genetic algorithm is used to generate all the optimized relations among the technical indicators and their values. Along with the genetic algorithm, an association rule mining algorithm is used for generation of association rules among the various Technical Indicators. Associative rules are generated whose left side contains a set of trading signals, expressed by relations among the technical indicators, and whose right side indicates whether there is a positive, negative or no change. The rules are then given to the classification process which will be able to classify the new data making use of the previously generated rules. The proposed idea in the paper is to offer an efficient genetic algorithm in combination with the association rule mining algorithm which predicts stock market performance. Keywords— Genetic Algorithm, Associative Rule Mining, Technical Indicators, Associative rules, Stock Market, Numerical Data, Rules INTRODUCTION Over the last decades, there has been much research interest directed at understanding and predicting the future. Among them, forecasting price movements in stock markets is a major challenge confronting investors, speculators and businesses. How to make a right decision in stock trading attracts much attention from many financial and technical fields. Many technologies such as evolutionary optimization methods have been studied to help people find better ways to earn more profit from the stock market. And the data mining method shows its power to improve the accuracy of stock movement prediction, with which more profit can be obtained with less risk. Applications of data mining techniques for stock investment include clustering, decision tree etc. Moreover, research on stock markets discovers trading signals and timings from financial data. Because of the numerical attributes used, data mining techniques, such as decision tree, have weaker capabilities to handle this kind of numerical data and there are infinitely many possible ways to enumerate relations among data. Stock prices depend on various factors, the important ones being the market sentiment, performance of the industry, earning results and projected earnings, takeover or merger, introduction of a new product or introduction of an existing product into new markets, share buy-back, announcements of dividends/bonuses, addition or removal from the index and such other factors leading to a positive or negative impact on the share price and the associated volumes. Apart from the basic technical and fundamental analysis techniques used in stock market analysis and prediction, soft computing methods based on Association Rule Mining, fuzzy logic, neural networks, genetic algorithms etc. are increasingly finding their place in understanding and predicting the financial markets. The genetic algorithm has a great capability to discover good solutions rapidly for difficult high dimensional problems. It also has a good capability to deal with numerical data and relations between numerical data. Genetic algorithms have emerged as a powerful general purpose search and optimization technique and have found applications in widespread areas. Associative classification, one of the most important tasks in data mining and knowledge discovery, builds a classification system based on associative classification rules. 

Association rules are learned and extracted from the available training dataset and the most suitable rules are selected to build an associative classification model. Association rule discovery has been used with great success.", "title": "" }, { "docid": "8103f137c6bebb5c75b8dad4dac4ade0", "text": "Lactate (La−) has long been at the center of controversy in research, clinical, and athletic settings. Since its discovery in 1780, La− has often been erroneously viewed as simply a hypoxic waste product with multiple deleterious effects. Not until the 1980s, with the introduction of the cell-to-cell lactate shuttle did a paradigm shift in our understanding of the role of La− in metabolism begin. The evidence for La− as a major player in the coordination of whole-body metabolism has since grown rapidly. La− is a readily combusted fuel that is shuttled throughout the body, and it is a potent signal for angiogenesis irrespective of oxygen tension. Despite this, many fundamental discoveries about La− are still working their way into mainstream research, clinical care, and practice. The purpose of this review is to synthesize current understanding of La− metabolism via an appraisal of its robust experimental history, particularly in exercise physiology. That La− production increases during dysoxia is beyond debate, but this condition is the exception rather than the rule. Fluctuations in blood [La−] in health and disease are not typically due to low oxygen tension, a principle first demonstrated with exercise and now understood to varying degrees across disciplines. From its role in coordinating whole-body metabolism as a fuel to its role as a signaling molecule in tumors, the study of La− metabolism continues to expand and holds potential for multiple clinical applications. This review highlights La−'s central role in metabolism and amplifies our understanding of past research.", "title": "" } ]
scidocsrr
be4d60724cfd2bda81bb94c8c6270cef
Design and Implementation of PCB Inductors With Litz-Wire Structure for Conventional-Size Large-Signal Domestic Induction Heating Applications
[ { "docid": "dedef832d8b54cac137277afe9cd27eb", "text": "The number of strands to minimize loss in a litz-wire transformer winding is determined. With fine stranding, the ac resistance factor decreases, but dc resistance increases because insulation occupies more of the window area. A power law to model insulation thickness is combined with standard analysis of proximity-effect losses.", "title": "" }, { "docid": "79c80b3aea50ab971f405b8b58da38de", "text": "In this paper, the design and implementation of small inductors in printed circuit board (PCB) for domestic induction heating applications is presented. With this purpose, we have developed both a manufacturing technique and an electromagnetic model of the system based on finite-element method (FEM) simulations. The inductor arrangement consists of a stack of printed circuit boards in which a planar litz wire structure is implemented. The developed PCB litz wire structure minimizes the losses in a similar way to the conventional multi-stranded litz wires; whereas the stack of PCBs allows increasing the power transferred to the pot. Different prototypes of the proposed PCB inductor have been measured at low signal levels. Finally, a PCB inductor has been integrated in an electronic stage to test at high signal levels, i.e. in the similar working conditions to the commercial application.", "title": "" } ]
[ { "docid": "c027f0821c0a8762e90f5f83f82b7d8e", "text": "Digital data explosion mandates the development of scalable tools to organize the data in a meaningful and easily accessible form. Clustering is a commonly used tool for data organization. However, many clustering algorithms designed to handle large data sets assume linear separability of data and hence do not perform well on real world data sets. While kernel-based clustering algorithms can capture the non-linear structure in data, they do not scale well in terms of speed and memory requirements when the number of objects to be clustered exceeds tens of thousands. We propose an approximation scheme for kernel k-means, termed approximate kernel k-means, that reduces both the computational complexity and the memory requirements by employing a randomized approach. We show both analytically and empirically that the performance of approximate kernel k-means is similar to that of the kernel k-means algorithm, but with dramatically reduced run-time complexity and memory requirements.", "title": "" }, { "docid": "204f7f8282954de4d6b725f5cce0b00f", "text": "Traffic classification plays an important and basic role in network management and cyberspace security. With the widespread use of encryption techniques in network applications, encrypted traffic has recently become a great challenge for the traditional traffic classification methods. In this paper we proposed an end-to-end encrypted traffic classification method with one-dimensional convolution neural networks. This method integrates feature extraction, feature selection and classifier into a unified end-to-end framework, intending to automatically learning nonlinear relationship between raw input and expected output. To the best of our knowledge, it is the first time to apply an end-to-end method to the encrypted traffic classification domain. The method is validated with the public ISCX VPN-nonVPN traffic dataset. Among all of the four experiments, with the best traffic representation and the fine-tuned model, 11 of 12 evaluation metrics of the experiment results outperform the state-of-the-art method, which indicates the effectiveness of the proposed method.", "title": "" }, { "docid": "703696ca3af2a485ac34f88494210007", "text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.", "title": "" }, { "docid": "29e5d267bebdeb2aa22b137219b4407e", "text": "Social networks are popular platforms for interaction, communication and collaboration between friends. 
Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.", "title": "" }, { "docid": "8318d49318f442749bfe3a33a3394f42", "text": "Driving Scene understanding is a key ingredient for intelligent transportation systems. To achieve systems that can operate in a complex physical and social environment, they need to understand and learn how humans drive and interact with traffic scenes. We present the Honda Research Institute Driving Dataset (HDD), a challenging dataset to enable research on learning driver behavior in real-life environments. The dataset includes 104 hours of real human driving in the San Francisco Bay Area collected using an instrumented vehicle equipped with different sensors. We provide a detailed analysis of HDD with a comparison to other driving datasets. A novel annotation methodology is introduced to enable research on driver behavior understanding from untrimmed data sequences. As the first step, baseline algorithms for driver behavior detection are trained and tested to demonstrate the feasibility of the proposed task.", "title": "" }, { "docid": "9297a6eaaf5ba6c1ebec8f96243d39ac", "text": "Editor: T.M. Harrison Non-arc basalts of Archean and Proterozoic age have model primary magmas that exhibit mantle potential temperatures TP that increase from 1350 °C at the present to a maximum of ∼1500–1600 °C at 2.5–3.0 Ga. The overall trend of these temperatures converges smoothly to that of the present-day MORB source, supporting the interpretation that the non-arc basalts formed by the melting of hot ambient mantle, not mantle plumes, and that they can constrain the thermal history of the Earth. These petrological results are very similar to those predicted by thermal models characterized by a low Urey ratio and more sluggish mantle convection in the past. We infer that the mantle was warming in deep Archean–Hadean time because internal heating exceeded surface heat loss, and it has been cooling from 2.5 to 3.0 Ga to the present. 
Non-arc Precambrian basalts are likely to be similar to those that formed oceanic crust and erupted on continents. It is estimated that ∼25–35 km of oceanic crust formed in the ancient Earth by about 30% melting of hot ambient mantle. In contrast, komatiite parental magmas reveal TP that are higher than those of non-arc basalts, consistent with the hot plume model. However, the associated excess magmatism was minor and oceanic plateaus, if they existed, would have had subtle bathymetric variations, unlike those of Phanerozoic oceanic plateaus. Primary magmas of Precambrian ambient mantle had 18–24% MgO, and they left behind residues of harzburgite that are now found as xenoliths of cratonic mantle. We infer that primary basaltic partial melts having 10–13% MgO are a feature of Phanerozoic magmatism, not of the early Earth, which may be why modern-day analogs of oceanic crust have not been reported in Archean greenstone belts.", "title": "" }, { "docid": "3f2aa3cde019d56240efba61d52592a4", "text": "Drivers like global competition, advances in technology, and new attractive market opportunities foster a process of servitization and thus the search for innovative service business models. To facilitate this process, different methods and tools for the development of new business models have emerged. Nevertheless, business model approaches are missing that enable the representation of cocreation as one of the most important service-characteristics. Rooted in a cumulative research design that seeks to advance extant business model representations, this goal is to be closed by the Service Business Model Canvas (SBMC). This contribution comprises the application of thinking-aloud protocols for the formative evaluation of the SBMC. With help of industry experts and academics with experience in the service sector and business models, the usability is tested and implications for its further development derived. Furthermore, this study provides empirically based insights for the design of service business model representation that can facilitate the development of future business models.", "title": "" }, { "docid": "1b7fb04cd80a016ddd53d8481f6da8bd", "text": "The classification of retinal vessels into artery/vein (A/V) is an important phase for automating the detection of vascular changes, and for the calculation of characteristic signs associated with several systemic diseases such as diabetes, hypertension, and other cardiovascular conditions. This paper presents an automatic approach for A/V classification based on the analysis of a graph extracted from the retinal vasculature. The proposed method classifies the entire vascular tree deciding on the type of each intersection point (graph nodes) and assigning one of two labels to each vessel segment (graph links). Final classification of a vessel segment as A/V is performed through the combination of the graph-based labeling results with a set of intensity features. The results of this proposed method are compared with manual labeling for three public databases. Accuracy values of 88.3%, 87.4%, and 89.8% are obtained for the images of the INSPIRE-AVR, DRIVE, and VICAVR databases, respectively. 
These results demonstrate that our method outperforms recent approaches for A/V classification.", "title": "" }, { "docid": "d47fe2f028b03b9b10a81d1a71c466ab", "text": "This paper investigates the system-level performance of downlink non-orthogonal multiple access (NOMA) with power-domain user multiplexing at the transmitter side and successive interference canceller (SIC) on the receiver side. The goal is to clarify the performance gains of NOMA for future LTE (Long-Term Evolution) enhancements, taking into account design aspects related to the LTE radio interface such as, frequency-domain scheduling with adaptive modulation and coding (AMC), and NOMA specific functionalities such as error propagation of SIC receiver, multi-user pairing and transmit power allocation. In particular, a pre-defined user grouping and fixed per-group power allocation are proposed to reduce the overhead associated with power allocation signalling. Based on computer simulations, we show that for both wideband and subband scheduling and both low and high mobility scenarios, NOMA can still provide a hefty portion of its expected gains even with error propagation, and also when the proposed simplified user grouping and power allocation are used.", "title": "" }, { "docid": "5ed409feee70554257e4974ab99674e0", "text": "Text mining and information retrieval in large collections of scientific literature require automated processing systems that analyse the documents’ content. However, the layout of scientific articles is highly varying across publishers, and common digital document formats are optimised for presentation, but lack structural information. To overcome these challenges, we have developed a processing pipeline that analyses the structure a PDF document using a number of unsupervised machine learning techniques and heuristics. Apart from the meta-data extraction, which we reused from previous work, our system uses only information available from the current document and does not require any pre-trained model. First, contiguous text blocks are extracted from the raw character stream. Next, we determine geometrical relations between these blocks, which, together with geometrical and font information, are then used categorize the blocks into different classes. Based on this resulting logical structure we finally extract the body text and the table of contents of a scientific article. We separately evaluate the individual stages of our pipeline on a number of different datasets and compare it with other document structure analysis approaches. We show that it outperforms a state-of-the-art system in terms of the quality of the extracted body text and table of contents. Our unsupervised approach could provide a basis for advanced digital library scenarios that involve diverse and dynamic corpora.", "title": "" }, { "docid": "6ebc608df8be2f7a4a8de29fba69c052", "text": "Cloud Computing is the fastest growing technology in the IT industry. It helps in providing services and resources with the help of internet. Resources are always provided by the Cloud Service Provider. Resources may be servers, storage, applications and networks. Keeping Resources in the cloud environment can be helpful in saving infrastructure cost and time for the user. Transferring the entire information of the enterprise onto the cloud contains lots of security issues and threats. 
This paper focuses on the various concepts related to cloud computing, its various business and service models and its entities along with several issues and challenges related to it.", "title": "" }, { "docid": "cc01308f475be40bd68b2fcbb385c025", "text": "This paper presents design and implementation of a bi-directional inverter, including a high frequency transformer, a push-pull switch configuration at the dc side, a cycloconverter at the ac side, and a dsPIC controller. The dc/ac conversion is achieved with a phase-shifted control strategy. In addition, this topology also can achieve an ac/dc conversion with the PFC function. In this circuit, the dsPIC realizes almost all functions, including generation of PWM signals, A/D conversion, phase shift, circuit protection, and PFC during ac/dc conversion. The proposed bi-directional inverter can reduce weight, size and volume significantly as compared to a conventional low frequency transformer approach. Experimental results obtained from a prototype with a dc side voltage of 48V, an ac side voltage of 110Vrms, and power rating of 500W have verified its feasibility.", "title": "" }, { "docid": "96e10f0858818ce150dba83882557aee", "text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem since such visualization can reveal deep insights out of complex data. Most of the existing embedding approaches, however, run on an excessively high precision, ignoring the fact that at the end, embedding outputs are converted into coarsegrained discrete pixel coordinates in a screen space. Motivated by such an observation and directly considering pixel coordinates in an embedding optimization process, we accelerate Barnes-Hut tree-based t-distributed stochastic neighbor embedding (BH-SNE), known as a state-of-the-art 2D embedding method, and propose a novel method called PixelSNE, a highly-efficient, screen resolution-driven 2D embedding method with a linear computational complexity in terms of the number of data items. Our experimental results show the significantly fast running time of PixelSNE by a large margin against BH-SNE, while maintaining the minimal degradation in the embedding quality. Finally, the source code of our method is publicly available at https: //github.com/awesome-davian/sasne.", "title": "" }, { "docid": "29e1ecb7b1dfbf4ca2a229726dcab12e", "text": "The recently developed depth sensors, e.g., the Kinect sensor, have provided new opportunities for human-computer interaction (HCI). Although great progress has been made by leveraging the Kinect sensor, e.g., in human body tracking, face recognition and human action recognition, robust hand gesture recognition remains an open problem. Compared to the entire human body, the hand is a smaller object with more complex articulations and more easily affected by segmentation errors. It is thus a very challenging problem to recognize hand gestures. This paper focuses on building a robust part-based hand gesture recognition system using Kinect sensor. To handle the noisy hand shapes obtained from the Kinect sensor, we propose a novel distance metric, Finger-Earth Mover's Distance (FEMD), to measure the dissimilarity between hand shapes. As it only matches the finger parts while not the whole hand, it can better distinguish the hand gestures of slight differences. 
The extensive experiments demonstrate that our hand gesture recognition system is accurate (a 93.2% mean accuracy on a challenging 10-gesture dataset), efficient (average 0.0750 s per frame), robust to hand articulations, distortions and orientation or scale changes, and can work in uncontrolled environments (cluttered backgrounds and lighting conditions). The superiority of our system is further demonstrated in two real-life HCI applications.", "title": "" }, { "docid": "cd7b967fd59f37d1feccb7cb74bac816", "text": "A comprehensive security solution is no longer an option, and needs to be designed bottom-up into the car software. The architecture needs to be scalable and tiered, leveraging the proven technologies, processes and policies from the mature industries. The objective is to detect, defend and recover from any attack before harm comes to passengers, data and instrumentation. No matter how hardened security is there is always a need to patch any security vulnerabilities. This paper presents high level framework for security and over the air (OTA) framework.", "title": "" }, { "docid": "85576e6b36757f0a475e7482e4827a91", "text": "Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they are suffering from low parallelizability and thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation — the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property in global but relieves in local and thus is able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and ChineseEnglish translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT’14 English-German translation, the SAT achieves 5.58× speedup while maintains 88% translation quality, significantly better than the previous non-autoregressive methods. When produces two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).", "title": "" }, { "docid": "d1c69dac07439ade32a962134753ab08", "text": "The change history of a software project contains a rich collection of code changes that record previous development experience. Changes that fix bugs are especially interesting, since they record both the old buggy code and the new fixed code. This paper presents a bug finding algorithm using bug fix memories: a project-specific bug and fix knowledge base developed by analyzing the history of bug fixes. A bug finding tool, BugMem, implements the algorithm. The approach is different from bug finding tools based on theorem proving or static model checking such as Bandera, ESC/Java, FindBugs, JLint, and PMD. Since these tools use pre-defined common bug patterns to find bugs, they do not aim to identify project-specific bugs. Bug fix memories use a learning process, so the bug patterns are project-specific, and project-specific bugs can be detected. The algorithm and tool are assessed by evaluating if real bugs and fixes in project histories can be found in the bug fix memories. Analysis of five open source projects shows that, for these projects, 19.3%-40.3% of bugs appear repeatedly in the memories, and 7.9%-15.5% of bug and fix pairs are found in memories. The results demonstrate that project-specific bug fix patterns occur frequently enough to be useful as a bug detection technique. 
Furthermore, for the bug and fix pairs, it is possible to both detect the bug and provide a strong suggestion for the fix. However, there is also a high false positive rate, with 20.8%-32.5% of non-bug containing changes also having patterns found in the memories. A comparison of BugMem with a bug finding tool, PMD, shows that the bug sets identified by both tools are mostly exclusive, indicating that BugMem complements other bug finding tools.", "title": "" }, { "docid": "b4586447ef1536f23793651fcd9d71b8", "text": "State monitoring is widely used for detecting critical events and abnormalities of distributed systems. As the scale of such systems grows and the degree of workload consolidation increases in Cloud data centers, node failures and performance interferences, especially transient ones, become the norm rather than the exception. Hence, distributed state monitoring tasks are often exposed to impaired communication caused by such dynamics on different nodes. Unfortunately, existing distributed state monitoring approaches are often designed under the assumption of always-online distributed monitoring nodes and reliable inter-node communication. As a result, these approaches often produce misleading results which in turn introduce various problems to Cloud users who rely on state monitoring results to perform automatic management tasks such as auto-scaling. This paper introduces a new state monitoring approach that tackles this challenge by exposing and handling communication dynamics such as message delay and loss in Cloud monitoring environments. Our approach delivers two distinct features. First, it quantitatively estimates the accuracy of monitoring results to capture uncertainties introduced by messaging dynamics. This feature helps users to distinguish trustworthy monitoring results from ones heavily deviated from the truth, yet significantly improves monitoring utility compared with simple techniques that invalidate all monitoring results generated with the presence of messaging dynamics. Second, our approach also adapts to non-transient messaging issues by reconfiguring distributed monitoring algorithms to minimize monitoring errors. Our experimental results show that, even under severe message loss and delay, our approach consistently improves monitoring accuracy, and when applied to Cloud application auto-scaling, outperforms existing state monitoring techniques in terms of the ability to correctly trigger dynamic provisioning.", "title": "" }, { "docid": "6e8a9c37672ec575821da5c9c3145500", "text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. 
Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ed5fd1bf16317256b56f4fa0db37a0f9", "text": "In this paper we pursue an approach to scaling life-long learning using parallel off-policy reinforcement learning algorithms. In life-long learning a robot continually learns from a life-time of experience, slowly acquiring and applying skills and knowledge to new situations. Many of the benefits of life-long learning are a results of scaling the amount of training data, processed by the robot, to long sensorimotor streams. Another dimension of scaling can be added by allowing off-policy sampling from the unending stream of sensorimotor data generated by a long-lived robot. Recent algorithmic developments have made it possible to apply off-policy algorithms to life-long learning, in a sound way, for the first time. We assess the scalability of these off-policy algorithms on a physical robot. We show that hundreds of accurate multi-step predictions can be learned about several policies in parallel and in realtime. We present the first online measures of off-policy learning progress. Finally we demonstrate that our robot, using the new off-policy measures, can learn 8000 predictions about 300 distinct policies, a substantial increase in scale compared to previous simulated and robotic life-long learning systems.", "title": "" } ]
scidocsrr
60494b20bc9a9ca7940d1dce786c7582
PiLoc: A self-calibrating participatory indoor localization system
[ { "docid": "8ff8a8ce2db839767adb8559f6d06721", "text": "Indoor environments present opportunities for a rich set of location-aware applications such as navigation tools for humans and robots, interactive virtual games, resource discovery, asset tracking, location-aware sensor networking etc. Typical indoor applications require better accuracy than what current outdoor location systems provide. Outdoor location technologies such as GPS have poor indoor performance because of the harsh nature of indoor environments. Further, typical indoor applications require different types of location information such as physical space, position and orientation. This dissertation describes the design and implementation of the Cricket indoor location system that provides accurate location in the form of user space, position and orientation to mobile and sensor network applications. Cricket consists of location beacons that are attached to the ceiling of a building, and receivers, called listeners, attached to devices that need location. Each beacon periodically transmits its location information in an RF message. At the same time, the beacon also transmits an ultrasonic pulse. The listeners listen to beacon transmissions and measure distances to nearby beacons, and use these distances to compute their own locations. This active-beacon passive-listener architecture is scalable with respect to the number of users, and enables applications that preserve user privacy. This dissertation describes how Cricket achieves accurate distance measurements between beacons and listeners. Once the beacons are deployed, the MAT and AFL algorithms, described in this dissertation, use measurements taken at a mobile listener to configure the beacons with a coordinate assignment that reflects the beacon layout. This dissertation presents beacon interference avoidance and detection algorithms, as well as outlier rejection algorithms to prevent and filter out outlier distance estimates caused by uncoordinated beacon transmissions. The Cricket listeners can measure distances with an accuracy of 5 cm. The listeners can detect boundaries with an accuracy of 1 cm. Cricket has a position estimation accuracy of 10 cm and an orientation accuracy of 3 degrees. Thesis Supervisor: Hari Balakrishnan Title: Associate Professor of Computer Science and Engineering", "title": "" } ]
[ { "docid": "5b9488755fb3146adf5b6d8d767b7c8f", "text": "This paper presents an overview of our activities for spoken and written language resources for Vietnamese implemented at CLIPSIMAG Laboratory and International Research Center MICA. A new methodology for fast text corpora acquisition for minority languages which has been applied to Vietnamese is proposed. The first results of a process of building a large Vietnamese speech database (VNSpeechCorpus) and a phonetic dictionary, which is used for automatic alignment process, are also presented.", "title": "" }, { "docid": "48c157638090b3168b6fd3cb50780184", "text": "Adverse reactions to drugs are among the most common causes of death in industrialized nations. Expensive clinical trials are not sufficient to uncover all of the adverse reactions a drug may cause, necessitating systems for post-marketing surveillance, or pharmacovigilance. These systems have typically relied on voluntary reporting by health care professionals. However, self-reported patient data has become an increasingly important resource, with efforts such as MedWatch from the FDA allowing reports directly from the consumer. In this paper, we propose mining the relationships between drugs and adverse reactions as reported by the patients themselves in user comments to health-related websites. We evaluate our system on a manually annotated set of user comments, with promising performance. We also report encouraging correlations between the frequency of adverse drug reactions found by our system in unlabeled data and the frequency of documented adverse drug reactions. We conclude that user comments pose a significant natural language processing challenge, but do contain useful extractable information which merits further exploration.", "title": "" }, { "docid": "db6bba69b5bd316da640b03749db1918", "text": "[1] Pore pressure changes are rigorously included in Coulomb stress calculations for fault interaction studies. These are considered changes under undrained conditions for analyzing very short term postseismic response. The assumption that pore pressure is proportional to faultnormal stress leads to the widely used concept of an effective friction coefficient. We provide an exact expression for undrained fault zone pore pressure changes to evaluate the validity of that concept. A narrow fault zone is considered whose poroelastic parameters are different from those in the surrounding medium, which is assumed to be elastically isotropic. We use conditions for mechanical equilibrium of stress and geometric compatibility of strain to express the effective normal stress change within the fault as a weighted linear combination of mean stress and faultnormal stress changes in the surroundings. Pore pressure changes are determined by fault-normal stress changes when the shear modulus within the fault zone is significantly smaller than in the surroundings but by mean stress changes when the elastic mismatch is small. We also consider an anisotropic fault zone, introducing a Skempton tensor for pore pressure changes. If the anisotropy is extreme, such that fluid pressurization under constant stress would cause expansion only in the fault-normal direction, then the effective friction coefficient concept applies exactly. We finally consider moderately longer timescales than those for undrained response. 
A sufficiently permeable fault may come to local pressure equilibrium with its surroundings even while that surrounding region may still be undrained, leading to pore pressure change determined by mean stress changes in those surroundings.", "title": "" }, { "docid": "f472c2ebd6cf1f361fd8c572f8c516e4", "text": "This article discusses the creation of an educational game intended for UK GCSE-level content, called Elemental. Elemental, developed using Microsoft's XNA studio and deployed both on the PC and Xbox 360 platforms, addresses the periodic table of elements, a subject with extensions in chemistry, physics and engineering. Through the development process of the game but also the eventual pilot user study with 15 subjects (using a pre and post test method to measure learning using the medium and self-report questions), examples are given on how an educator can, without expert knowledge, utilize modern programming tools to create and test custom-made content for delivering part of a secondary education curriculum.", "title": "" }, { "docid": "c3f25271d25590bf76b36fee4043d227", "text": "Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, this does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme to be effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.", "title": "" }, { "docid": "204ad3064d559c345caa2c6d1a140582", "text": "In this paper, a face recognition method based on Convolution Neural Network (CNN) is presented. This network consists of three convolution layers, two pooling layers, two full-connected layers and one Softmax regression layer. Stochastic gradient descent algorithm is used to train the feature extractor and the classifier, which can extract the facial features and classify them automatically. The Dropout method is used to solve the over-fitting problem. The Convolution Architecture For Feature Extraction framework (Caffe) is used during the training and testing process. The face recognition rate of the ORL face database and AR face database based on this network is 99.82% and 99.78%.", "title": "" }, { "docid": "d790103c1909778db0b054c5060336ff", "text": "The concept of the neurovascular unit (NVU), formalized at the 2001 Stroke Progress Review Group meeting of the National Institute of Neurological Disorders and Stroke, emphasizes the intimate relationship between the brain and its vessels. 
Since then, the NVU has attracted the interest of the neuroscience community, resulting in considerable advances in the field. Here the current state of knowledge of the NVU will be assessed, focusing on one of its most vital roles: the coupling between neural activity and blood flow. The evidence supports a conceptual shift in the mechanisms of neurovascular coupling, from a unidimensional process involving neuronal-astrocytic signaling to local blood vessels to a multidimensional one in which mediators released from multiple cells engage distinct signaling pathways and effector systems across the entire cerebrovascular network in a highly orchestrated manner. The recently appreciated NVU dysfunction in neurodegenerative diseases, although still poorly understood, supports emerging concepts that maintaining neurovascular health promotes brain health.", "title": "" }, { "docid": "3bdd6168db10b8b195ce88ae9c4a75f9", "text": "Nowadays Intrusion Detection System (IDS) which is increasingly a key element of system security is used to identify the malicious activities in a computer system or network. There are different approaches being employed in intrusion detection systems, but unluckily each of the technique so far is not entirely ideal. The prediction process may produce false alarms in many anomaly based intrusion detection systems. With the concept of fuzzy logic, the false alarm rate in establishing intrusive activities can be reduced. A set of efficient fuzzy rules can be used to define the normal and abnormal behaviors in a computer network. Therefore some strategy is needed for best promising security to monitor the anomalous behavior in computer network. In this paper I present a few research papers regarding the foundations of intrusion detection systems, the methodologies and good fuzzy classifiers using genetic algorithm which are the focus of current development efforts and the solution of the problem of Intrusion Detection System to offer a realworld view of intrusion detection. Ultimately, a discussion of the upcoming technologies and various methodologies which promise to improve the capability of computer systems to detect intrusions is offered.", "title": "" }, { "docid": "f921eccfa5df6b8479489c8851653b14", "text": "Restricted Boltzmann Machines (RBMs) are general unsupervised learning devices to ascertain generative models of data distributions. RBMs are often trained using the Contrastive Divergence learning algorithm (CD), an approximation to the gradient of the data log-likelihood. A simple reconstruction error is often used to decide whether the approximation provided by the CD algorithm is good enough, though several authors (Schulz et al., 2010; Fischer & Igel, 2010) have raised doubts concerning the feasibility of this procedure. However, not many alternatives to the reconstruction error have been used in the literature. In this manuscript we investigate simple alternatives to the reconstruction error in order to detect as soon as possible the decrease in the log-likelihood during learning. Proceedings of the 2 International Conference on Learning Representations, Banff, Canada, 2014. Copyright 2014 by the author(s).", "title": "" }, { "docid": "ffee60d5f6d862115b7d7d2442e1a1b9", "text": "Preventing accidents caused by drowsiness has become a major focus of active safety driving in recent years. It requires an optimal technique to continuously detect drivers' cognitive state related to abilities in perception, recognition, and vehicle control in (near-) real-time. 
The major challenges in developing such a system include: 1) the lack of significant index for detecting drowsiness and 2) complicated and pervasive noise interferences in a realistic and dynamic driving environment. In this paper, we develop a drowsiness-estimation system based on electroencephalogram (EEG) by combining independent component analysis (ICA), power-spectrum analysis, correlation evaluations, and linear regression model to estimate a driver's cognitive state when he/she drives a car in a virtual reality (VR)-based dynamic simulator. The driving error is defined as deviations between the center of the vehicle and the center of the cruising lane in the lane-keeping driving task. Experimental results demonstrate the feasibility of quantitatively estimating drowsiness level using ICA-based multistream EEG spectra. The proposed ICA-based method applied to power spectrum of ICA components can successfully (1) remove most of EEG artifacts, (2) suggest an optimal montage to place EEG electrodes, and estimate the driver's drowsiness fluctuation indexed by the driving performance measure. Finally, we present a benchmark study in which the accuracy of ICA-component-based alertness estimates compares favorably to scalp-EEG based.", "title": "" }, { "docid": "9b11423260c2d3d175892f846cecced3", "text": "Disturbances in fluid and electrolytes are among the most common clinical problems encountered in the intensive care unit (ICU). Recent studies have reported that fluid and electrolyte imbalances are associated with increased morbidity and mortality among critically ill patients. To provide optimal care, health care providers should be familiar with the principles and practice of fluid and electrolyte physiology and pathophysiology. Fluid resuscitation should be aimed at restoration of normal hemodynamics and tissue perfusion. Early goal-directed therapy has been shown to be effective in patients with severe sepsis or septic shock. On the other hand, liberal fluid administration is associated with adverse outcomes such as prolonged stay in the ICU, higher cost of care, and increased mortality. Development of hyponatremia in critically ill patients is associated with disturbances in the renal mechanism of urinary dilution. Removal of nonosmotic stimuli for vasopressin secretion, judicious use of hypertonic saline, and close monitoring of plasma and urine electrolytes are essential components of therapy. Hypernatremia is associated with cellular dehydration and central nervous system damage. Water deficit should be corrected with hypotonic fluid, and ongoing water loss should be taken into account. Cardiac manifestations should be identified and treated before initiating stepwise diagnostic evaluation of dyskalemias. Divalent ion deficiencies such as hypocalcemia, hypomagnesemia and hypophosphatemia should be identified and corrected, since they are associated with increased adverse events among critically ill patients.", "title": "" }, { "docid": "346bedcddf74d56db8b2d5e8b565efef", "text": "Ulric Neisser (Chair) Gwyneth Boodoo Thomas J. Bouchard, Jr. A. Wade Boykin Nathan Brody Stephen J. Ceci Diane E Halpern John C. Loehlin Robert Perloff Robert J. 
Sternberg Susana Urbina Emory University Educational Testing Service, Princeton, New Jersey University of Minnesota, Minneapolis Howard University Wesleyan University Cornell University California State University, San Bernardino University of Texas, Austin University of Pittsburgh Yale University University of North Florida", "title": "" }, { "docid": "c3e8960170cb72f711263e7503a56684", "text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.", "title": "" }, { "docid": "9af1423c296f59683b8e6528ad039d5c", "text": "We present a novel approach to natural language generation (NLG) that applies hierarchical reinforcement learning to text generation in the wayfinding domain. Our approach aims to optimise the integration of NLG tasks that are inherently different in nature, such as decisions of content selection, text structure, user modelling, referring expression generation (REG), and surface realisation. It also aims to capture existing interdependencies between these areas. We apply hierarchical reinforcement learning to learn a generation policy that captures these interdependencies, and that can be transferred to other NLG tasks. 
Our experimental results—in a simulated environment—show that the learnt wayfinding policy outperforms a baseline policy that takes reasonable actions but without optimization.", "title": "" }, { "docid": "b36549a4b16c2c8ab50f1adda99f3120", "text": "Spatial representations of time are a ubiquitous feature of human cognition. Nevertheless, interesting sociolinguistic variations exist with respect to where in space people locate temporal constructs. For instance, while in English time metaphorically flows horizontally, in Mandarin an additional vertical dimension is employed. Noting that the bilingual mind can flexibly accommodate multiple representations, the present work explored whether Mandarin-English bilinguals possess two mental time lines. Across two experiments, we demonstrated that Mandarin-English bilinguals do indeed employ both horizontal and vertical representations of time. Importantly, subtle variations to cultural context were seen to shape how these time lines were deployed.", "title": "" }, { "docid": "0034b7f8160f504bd3de5125cf33fea6", "text": "By taking into account simultaneously the effects of border traps and interface states, the authors model the alternating current capacitance-voltage (C-V) behavior of high-mobility substrate metal-oxide-semiconductor (MOS) capacitors. The results are validated with the experimental In0.53Ga0.47As/ high-κ and InP/high-κ (C-V) curves. The simulated C-V and conductance-voltage (G-V) curves reproduce comprehensively the experimentally measured capacitance and conductance data as a function of bias voltage and measurement frequency, over the full bias range going from accumulation to inversion and full frequency spectra from 100 Hz to 1 MHz. The interface state densities of In0.53Ga0.47As and InP MOS devices with various high-κ dielectrics, together with the corresponding border trap density inside the high-κ oxide, were derived accordingly. The derived interface state densities are consistent to those previously obtained with other measurement methods. The border traps, distributed over the thickness of the high- κ oxide, show a large peak density above the two semiconductor conduction band minima. The total density of border traps extracted is on the order of 1019 cm-3. Interface and border trap distributions for InP and In0.53Ga0.47As interfaces with high-κ oxides show remarkable similarities on an energy scale relative to the vacuum reference.", "title": "" }, { "docid": "8786f3c99d03981a3ef194cb32c23d9a", "text": "This study applies a technique to expand the number of images to a level that allows deep learning. And the applicability of the Sauvegrain method through deep learning with relatively few elbow X-rays is studied. The study was composed of processes similar to the physicians’ bone age assessment procedures. The selected reference images were learned without being included in the evaluation data, and at the same time, the data was extended to accommodate the number of cases. In addition, we adjusted the X-ray images to better images using U-Net and selected the ROI with RPN + so as to be able to perform bone age estimation through CNN. The mean absolute error of the Sauvegrain method based on deep learning is 2.8 months and the Mean Absolute Percentage Error (MAPE) is 0.018. This result shows that X ray analysis using the Sauvegrain method shows higher accuracy than that of the age group of puberty even in the deep learning base. 
This means that, with the image data extension technique, the deep-learning-based Sauvegrain method can measure bone age at a level similar to that of an expert. Finally, we applied the Sauvegrain method to deep learning for accurate measurement of bone age at puberty. As a result, the present study, which is based on deep learning and compared with the evaluation results of experts, confirmed that it is possible to overcome the limitations of machine-learning-based bone age measurement using TW3 or Greulich & Pyle, which were due to a lack of X-ray images. We also presented the Sauvegrain method, which is applicable to adolescents as well.", "title": "" }, { "docid": "2b588e18ff6826bd9b077f539777a27a", "text": "The big data phenomenon arises from the increasing amount of data collected from various sources, including the internet. Big data is not only about size or volume. Big data possesses specific characteristics (volume, variety, velocity, and value - 4V) that make it difficult to manage from a security point of view. The evolution of data into big data raises further important issues about data security and its management. NIST defines a guide for conducting risk assessments on data, including the risk management process and risk assessment. This paper looks at the NIST risk management guidance and determines whether the approach of this standard is applicable to big data by generally defining the threat sources, threat events, vulnerabilities, likelihood of occurrence and impact. The result of this study will be a general framework defining security management for big data.", "title": "" }, { "docid": "1256f0799ed585092e60b50fb41055be", "text": "So far, plant identification has posed challenges for several researchers. Various methods and features have been proposed. However, there are still many approaches that could be investigated to develop robust plant identification systems. This paper reports several experiments in using Zernike moments to build foliage plant identification systems. In this case, Zernike moments were combined with other features: geometric features, color moments and the gray-level co-occurrence matrix (GLCM). To implement the identification systems, two approaches have been investigated. The first approach used a distance measure and the second used Probabilistic Neural Networks (PNN). The results show that Zernike moments have promise as features in leaf identification systems when they are combined with other features.", "title": "" }, { "docid": "e5bea734149b69a05455c5fec2d802e3", "text": "This article introduces a collection of essays on continuity and discontinuity in cognitive development. In his lead essay, J. Kagan (2008) argues that limitations in past research (e.g., on number concepts, physical solidarity, and object permanence) render conclusions about continuity premature. Commentaries respectively (1) argue that longitudinal contexts are essential for interpreting developmental data, (2) illustrate the value of converging measures, (3) identify qualitative change via dynamical systems theory, (4) redirect the focus from states to process, and (5) review epistemological premises of alternative research traditions. 
Following an overview of the essays, this introductory article discusses how the search for developmental structures, continuity, and process differs between mechanistic-contextualist and organismic-contextualist metatheoretical frameworks, and closes by highlighting continuities in Kagan's scholarship over the past half century.", "title": "" } ]
scidocsrr
51670de216112c1276f6f73e06115568
Memory Augmented Neural Networks with Wormhole Connections
[ { "docid": "51048699044d547df7ffd3a0755c76d9", "text": "Many sequential processing tasks require complex nonlinear transition functions from one step to the next. However, recurrent neural networks with “deep\" transition functions remain difficult to train, even when using Long Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of recurrent networks based on Geršgorin’s circle theorem that illuminates several modeling and optimization issues and improves our understanding of the LSTM cell. Based on this analysis we propose Recurrent Highway Networks, which are deep not only in time but also in space, extending the LSTM architecture to larger step-to-step transition depths. Experiments demonstrate that the proposed architecture results in powerful and efficient models benefiting from up to 10 layers in the recurrent transition. On the Penn Treebank language modeling corpus, a single network outperforms all previous ensemble results with a perplexity of 66.0 on the test set. On the larger Hutter Prize Wikipedia dataset, a single network again significantly outperforms all previous results with an entropy of 1.32 bits per character on the test set.", "title": "" }, { "docid": "f0de7977c22d5fa16cae768337794b30", "text": "Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning to enhance learning is known, indiscriminate application of the principle does not necessarily lead to improvement, and it is essential therefore to know which forms of incremental or curriculum learning have a positive effect. This research contributes to that aim by comparing three instantiations of incremental or curriculum learning. We introduce Incremental Sequence Learning, a simple incremental approach to sequence learning. Incremental Sequence Learning starts out by using only the first few steps of each sequence as training data. Each time a performance criterion has been reached, the length of the parts of the sequences used for training is increased. To evaluate Incremental Sequence Learning and comparison methods, we introduce and make available a novel sequence learning task and data set: predicting and classifying MNIST pen stroke sequences, where the familiar handwritten digit images have been transformed to pen stroke sequences representing the skeletons of the digits. We find that Incremental Sequence Learning greatly speeds up sequence learning and reaches the best test performance level of regular sequence learning 20 times faster, reduces the test error by 74%, and in general performs more robustly; it displays lower variance and achieves sustained progress after all three comparison methods have stopped improving. The two other instantiations of curriculum learning do not result in any noticeable improvement. 
A trained sequence prediction model is also used in transfer learning to the task of sequence classification, where it is found that transfer learning realizes improved classification performance compared to methods that learn to classify from scratch.", "title": "" }, { "docid": "7cbe504e03ab802389c48109ed1f1802", "text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.", "title": "" } ]
[ { "docid": "1f62fab7d2d88ab3c048e0c620f3842b", "text": "Being able to locate the origin of a sound is important for our capability to interact with the environment. Humans can locate a sound source in both the horizontal and vertical plane with only two ears, using the head related transfer function HRTF, or more specifically features like interaural time difference ITD, interaural level difference ILD, and notches in the frequency spectra. In robotics notches have been left out since they are considered complex and difficult to use. As they are the main cue for humans' ability to estimate the elevation of the sound source this have to be compensated by adding more microphones or very large and asymmetric ears. In this paper, we present a novel method to extract the notches that makes it possible to accurately estimate the location of a sound source in both the horizontal and vertical plane using only two microphones and human-like ears. We suggest the use of simple spiral-shaped ears that has similar properties to the human ears and make it easy to calculate the position of the notches. Finally we show how the robot can learn its HRTF and build audiomotor maps using supervised learning and how it automatically can update its map using vision and compensate for changes in the HRTF due to changes to the ears or the environment", "title": "" }, { "docid": "2752c235aea735a04b70272deb042ea6", "text": "Psychophysiological studies with music have not examined what exactly in the music might be responsible for the observed physiological phenomena. The authors explored the relationships between 11 structural features of 16 musical excerpts and both self-reports of felt pleasantness and arousal and different physiological measures (respiration, skin conductance, heart rate). Overall, the relationships between musical features and experienced emotions corresponded well with those known between musical structure and perceived emotions. This suggests that the internal structure of the music played a primary role in the induction of the emotions in comparison to extramusical factors. Mode, harmonic complexity, and rhythmic articulation best differentiated between negative and positive valence, whereas tempo, accentuation, and rhythmic articulation best discriminated high arousal from low arousal. Tempo, accentuation, and rhythmic articulation were the features that most strongly correlated with physiological measures. Music that induced faster breathing and higher minute ventilation, skin conductance, and heart rate was fast, accentuated, and staccato. This finding corroborates the contention that rhythmic aspects are the major determinants of physiological responses to music.", "title": "" }, { "docid": "c65f050e911abb4b58b4e4f9b9aec63b", "text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. 
Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.", "title": "" }, { "docid": "f6755a6782631ef5cced358bf75b2c18", "text": "With the rapid growth of cloud adoption in both private and public sectors globally, cloud computing environments have become a new battlefield for cyber crime. In this paper, the researchers present the results and analysis of a survey that had been widely circulated among digital forensic experts and practitioners worldwide on cloud forensics and critical criteria for cloud forensic capability in order to better understand the key fundamental issues of cloud forensics such as its definition, scope, challenges, opportunities as well as missing capabilities based on the 257 collected responses of the survey.", "title": "" }, { "docid": "0f78628c309cc863680d60dd641cb7f0", "text": "A systematic review was conducted to evaluate whether chocolate or its constituents were capable of influencing cognitive function and/or mood. Studies investigating potentially psychoactive fractions of chocolate were also included. Eight studies (in six articles) met the inclusion criteria for assessment of chocolate or its components on mood, of which five showed either an improvement in mood state or an attenuation of negative mood. Regarding cognitive function, eight studies (in six articles) met the criteria for inclusion, of which three revealed clear evidence of cognitive enhancement (following cocoa flavanols and methylxanthine). Two studies failed to demonstrate behavioral benefits but did identify significant alterations in brain activation patterns. It is unclear whether the effects of chocolate on mood are due to the orosensory characteristics of chocolate or to the pharmacological actions of chocolate constituents. Two studies have reported acute cognitive effects of supplementation with cocoa polyphenols. Further exploration of the effect of chocolate on cognitive facilitation is recommended, along with substantiation of functional brain changes associated with the components of cocoa.", "title": "" }, { "docid": "f75a1e5c9268a3a64daa94bb9c7f522d", "text": "Many natural language generation tasks, such as abstractive summarization and text simplification, are paraphrase-orientated. In these tasks, copying and rewriting are two main writing modes. Most previous sequence-to-sequence (Seq2Seq) models use a single decoder and neglect this fact. In this paper, we develop a novel Seq2Seq model to fuse a copying decoder and a restricted generative decoder. The copying decoder finds the position to be copied based on a typical attention model. The generative decoder produces words limited in the source-specific vocabulary. To combine the two decoders and determine the final output, we develop a predictor to predict the mode of copying or rewriting. This predictor can be guided by the actual writing mode in the training data. We conduct extensive experiments on two different paraphrase datasets. 
The result shows that our model outperforms the stateof-the-art approaches in terms of both informativeness and language quality.", "title": "" }, { "docid": "7dfb6a3a619f7062452aa97aaa134c45", "text": "Most companies favour the creation and nurturing of long-term relationships with customers because retaining customers is more profitable than acquiring new ones. Churn prediction is a predictive analytics technique to identify churning customers ahead of their departure and enable customer relationship managers to take action to keep them. This work evaluates the development of an expert system for churn prediction and prevention using a Hidden Markov model (HMM). A HMM is implemented on unique data from a mobile application and its predictive performance is compared to other algorithms that are commonly used for churn prediction: Logistic Regression, Neural Network and Support Vector Machine. Predictive performance of the HMM is not outperformed by the other algorithms. HMM has substantial advantages for use in expert systems though due to low storage and computational requirements and output of highly relevant customer motivational states. Generic session data of the mobile app is used to train and test the models which makes the system very easy to deploy and the findings applicable to the whole ecosystem of mobile apps distributed in Apple's App and Google's Play Store.", "title": "" }, { "docid": "36944ed25994736d1cd05bdc259670b6", "text": "Users on an online social network site generate a large number of heterogeneous activities, ranging from connecting with other users, to sharing content, to updating their profiles. The set of activities within a user's network neighborhood forms a stream of updates for the user's consumption. In this paper, we report our experience with the problem of ranking activities in the LinkedIn homepage feed. In particular, we provide a taxonomy of social network activities, describe a system architecture (with a number of key components open-sourced) that supports fast iteration in model development, demonstrate a number of key factors for effective ranking, and report experimental results from extensive online bucket tests.", "title": "" }, { "docid": "bddea9fd4d14f591e6fb6acc3cc057f1", "text": "We present an analysis of musical influence using intact lyrics of over 550,000 songs, extending existing research on lyrics through a novel approach using directed networks. We form networks of lyrical influence over time at the level of three-word phrases, weighted by tf-idf. An edge reduction analysis of strongly connected components suggests highly central artist, songwriter, and genre network topologies. Visualizations of the genre network based on multidimensional scaling confirm network centrality and provide insight into the most influential genres at the heart of the network. Next, we present metrics for influence and self-referential behavior, examining their interactions with network centrality and with the genre diversity of songwriters. Here, we uncover a negative correlation between songwriters’ genre diversity and the robustness of their connections. By examining trends among the data for top genres, songwriters, and artists, we address questions related to clustering, influence, and isolation of nodes in the networks. We conclude by discussing promising future applications of lyrical influence networks in music information retrieval research. 
The networks constructed in this study are made publicly available for research purposes.", "title": "" }, { "docid": "336b6ed16323cd4d4d46761aae81c548", "text": "Smart interconnected vehicles generate a huge amount of data to be used by a wide range of applications. Although cloud based data management is currently in practice, for many applications serving road safety or traffic regulation, it is utmost important that applications access these data at the site itself for improved quality of service. Road side units (RSUs) play a crucial role in handling these vast amount of vehicular data and serving the running applications in turn. In this current era of edge computing, in-place data access is also proven to be advantageous from cost point of view. As multiple applications from different service providers are interested to access different fragments of these data, a robust access control mechanism is needed to ensure desired level of security as well as reliability for these data. In this paper, we introduce B2VDM, a novel architecture for vehicular data management at RSUs, that provides a seamless access control using Blockchain technology. The proposed B2VDM framework also implements a simple load distribution module, which maintains the reliability by minimizing the number of packet drops at a heavily loaded RSU during peak hours. An extensive evaluation using Etherium Blockchain validates the effectiveness of the proposed architecture.", "title": "" }, { "docid": "0bcff493580d763dbc1dd85421546201", "text": "The development of powerful imaging tools, editing images for changing their data content is becoming a mark to undertake. Tempering image contents by adding, removing, or copying/moving without leaving a trace or unable to be discovered by the investigation is an issue in the computer forensic world. The protection of information shared on the Internet like images and any other confidential information is very significant. Nowadays, forensic image investigation tools and techniques objective is to reveal the tempering strategies and restore the firm belief in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyze these challenges. The experimental investigation focuses on using steganography applications that use same algorithms to hide information exclusively within an image. The research finding denotes that, if a certain steganography tool A is used to hide some information within a picture, and then tool B which uses the same procedure would not be able to recover the embedded image.", "title": "" }, { "docid": "91283606a1737f3076ba6e00a6754fd1", "text": "OBJECTIVE\nTo review the quantitative instruments available to health service researchers who want to measure culture and cultural change.\n\n\nDATA SOURCES\nA literature search was conducted using Medline, Cinahl, Helmis, Psychlit, Dhdata, and the database of the King's Fund in London for articles published up to June 2001, using the phrase \"organizational culture.\" In addition, all citations and the gray literature were reviewed and advice was sought from experts in the field to identify instruments not found on the electronic databases. 
The search focused on instruments used to quantify culture with a track record, or potential for use, in health care settings.\n\n\nDATA EXTRACTION\nFor each instrument we examined the cultural dimensions addressed, the number of items for each questionnaire, the measurement scale adopted, examples of studies that had used the tool, the scientific properties of the instrument, and its strengths and limitations.\n\n\nPRINCIPAL FINDINGS\nThirteen instruments were found that satisfied our inclusion criteria, of which nine have a track record in studies involving health care organizations. The instruments varied considerably in terms of their grounding in theory, format, length, scope, and scientific properties.\n\n\nCONCLUSIONS\nA range of instruments with differing characteristics are available to researchers interested in organizational culture, all of which have limitations in terms of their scope, ease of use, or scientific properties. The choice of instrument should be determined by how organizational culture is conceptualized by the research team, the purpose of the investigation, intended use of the results, and availability of resources.", "title": "" }, { "docid": "97471e2a00910cfd96d84751fa36867c", "text": "FAUST is a software tool that generates formal abstractions of (possibly non-deterministic) discrete-time Markov processes (dtMP) defined over uncountable (continuous) state spaces. A dtMP model (Sec. 1) is specified in MATLAB and abstracted as a finite-state Markov chain or Markov decision processes. The abstraction procedure (Sec. 2) runs in MATLAB and employs parallel computations and fast manipulations based on vector calculus. The abstract model is formally put in relationship with the concrete dtMP via a user-defined maximum threshold on the approximation error introduced by the abstraction procedure. FAUST allows exporting the abstract model to well-known probabilistic model checkers, such as PRISM or MRMC (Sec. 4). Alternatively, it can handle internally the computation of PCTL properties (e.g. safety or reach-avoid) over the abstract model, and refine the outcomes over the concrete dtMP via a quantified error that depends on the abstraction procedure and the given formula (Sec. 3). The toolbox is available at http://sourceforge.net/projects/faust2/ 1 Models: discrete-time Markov processes We consider a discrete-time Markov process (dtMP) s(k), k ∈ N ∪ {0} defined over a general state space, such as a finite-dimensional Euclidean domain [1] or a hybrid state space [2]. The model is denoted by the pair S = (S, Ts). S is a continuous (uncountable) but bounded state space, e.g. S ⊂ R^n, n < ∞. We denote by B(S) the associated sigma algebra and refer the reader to [2,3] for details on measurability and topological considerations. The conditional stochastic kernel Ts : B(S)×S → [0, 1] assigns to each point s ∈ S a probability measure Ts(·|s), so that for any set A ∈ B(S), k ∈ N∪{0}, P(s(k+1) ∈ A|s(k) = s) = ∫_A Ts(dx|s). (Please refer to code or case study for a modelling example.) Implementation: The user interaction with FAUST is enhanced by a Graphical User Interface. A dtMP model is fed into FAUST as follows. Select the Formula free option in the box Problem selection 1 in Figure 1, and enter the bounds on the state space S as a n × 2 matrix in the prompt Domain in box 8 . Alternatively if the user presses the button Select 8 , a pop-up window prompts the user to enter the lower and upper values of the box-shaped
bounds of the state space. The transition kernel Ts can be specified by the user (select User-defined 2 ) in an m-file, entered in the text-box Name of kernel function, or loaded by pressing the button Search for file 7 . Please open the files ./Templates/SymbolicKernel.m for a template and ExampleKernel.m for an instance of kernel Ts. As a special case, the class of affine dynamical systems with additive Gaussian noise is described by the difference equation s(k + 1) = As(k) + B + η(k), where η(·) ∼ N (0, Sigma). (Refer to the Case Study on how to express the difference equation as a stochastic kernel.) For this common instance, the user can select the option Linear Gaussian model in the box Kernel distribution 2 , and input properly-sized matrices A,B,Sigma in the MATLAB workspace. FAUST also handles Gaussian dynamical models s(k + 1) = f(s(k)) + g(s(k))η(k) with nonlinear drift and variance: select the bottom option in box 2 and enter the symbolic function [f g] via box 7 .", "title": "" }, { "docid": "61af083f594aedff3a58f2183165a1ac", "text": "Design is an essential part of all games and narratives, yet designing and implementing believable game strategies can be time consuming and error-prone. This motivates the development and application of systems AI methodologies. Here we demonstrate for the first time the iterative development of agent behaviour for a real-time strategy game (here StarCraft) utilising Behaviour Oriented Design (BOD) [7]. BOD provides focus on the robust creation and easy adjustment of modular and hierarchical cognitive agents. We demonstrate BOD's usage in creating an AI capable of playing the StarCraft character the Zerg hive mind, and document its performance against a variety of opponent AI systems. In describing our tool-driven development process, we also describe the new Abode IDE, provide a brief literature review situating BOD in the AI game literature, and propose possible future work.", "title": "" }, { "docid": "31fc90c66332e52dbdae734083442f4f", "text": "The free-energy principle in recent studies of brain theory and neuroscience models the perception and understanding of the outside scene as an active inference process, in which the brain tries to account for the visual scene with an internal generative model. Specifically, with the internal generative model, the brain yields corresponding predictions for its encountered visual scenes. Then, the discrepancy between the visual input and its brain prediction should be closely related to the quality of perceptions. On the other hand, sparse representation has been evidenced to resemble the strategy of the primary visual cortex in the brain for representing natural images. With the strong neurobiological support for sparse representation, in this paper, we approximate the internal generative model with sparse representation and propose an image quality metric accordingly, which is named FSI (free-energy principle and sparse representation-based index for image quality assessment). In FSI, the reference and distorted images are, respectively, predicted by the sparse representation at first. Then, the difference between the entropies of the prediction discrepancies is defined to measure the image quality. Experimental results on four large-scale image databases confirm the effectiveness of the FSI and its superiority over representative image quality assessment methods. 
The FSI belongs to reduced-reference methods, and it only needs a single number from the reference image for quality estimation.", "title": "" }, { "docid": "adf57fe7ec7ab1481561f7664110a1e8", "text": "This paper presents a scalable 28-GHz phased-array architecture suitable for fifth-generation (5G) communication links based on four-channel ( $2\\times 2$ ) transmit/receive (TRX) quad-core chips in SiGe BiCMOS with flip-chip packaging. Each channel of the quad-core beamformer chip has 4.6-dB noise figure (NF) in the receive (RX) mode and 10.5-dBm output 1-dB compression point (OP1dB) in the transmit (TX) mode with 6-bit phase control and 14-dB gain control. The phase change with gain control is only ±3°, allowing orthogonality between the variable gain amplifier and the phase shifter. The chip has high RX linearity (IP1dB = −22 dBm/channel) and consumes 130 mW in the RX mode and 200 mW in the TX mode at P1dB per channel. Advantages of the scalable all-RF beamforming architecture and circuit design techniques are discussed in detail. 4- and 32-element phased-arrays are demonstrated with detailed data link measurements using a single or eight of the four-channel TRX core chips on a low-cost printed circuit board with microstrip antennas. The 32-element array achieves an effective isotropic radiated power (EIRP) of 43 dBm at P1dB, a 45-dBm saturated EIRP, and a record-level system NF of 5.2 dB when the beamformer loss and transceiver NF are taken into account and can scan to ±50° in azimuth and ±25° in elevation with < −12-dB sidelobes and without any phase or amplitude calibration. A wireless link is demonstrated using two 32-element phased-arrays with a state-of-the-art data rate of 1.0–1.6 Gb/s in a single beam using 16-QAM waveforms over all scan angles at a link distance of 300 m.", "title": "" }, { "docid": "d6959f0cd5ad7a534e99e3df5fa86135", "text": "In the course of the project Virtual Try-On new VR technologies have been developed, which form the basis for a realistic, three dimensional, (real-time) simulation and visualization of individualized garments put on by virtual counterparts of real customers. To provide this cloning and dressing of people in VR, a complete process chain is being build up starting with the touchless 3-dimensional scanning of the human body up to a photo-realistic 3-dimensional presentation of the virtual customer dressed in the chosen pieces of clothing. The emerging platform for interactive selection and configuration of virtual garments, the „virtual shop“, will be accessible in real fashion boutiques as well as over the internet, thereby supplementing the conventional distribution channels.", "title": "" }, { "docid": "0175e289151e90a0edae940f70a03484", "text": "Simultaneously detecting an object and determining its pose has become a popular research topic in recent years. Due to the large variances of the object appearance in images, it is critical to capture the discriminative object parts that can provide key information about the object pose. Recent part-based models have obtained state-of-theart results for this task. However, such models either require manually defined object parts with heavy supervision or a complicated algorithm to find discriminative object parts. In this study, we have designed a novel deep architecture, called Auto-masking Neural Network (ANN), for object detection and viewpoint estimation. ANN can automatically learn to select the most discriminative object parts across different viewpoints from training images. 
We also propose a method of accurate continuous viewpoint estimation based on the output of ANN. Experimental results on related datasets show that ANN outperforms previous methods.", "title": "" }, { "docid": "e65e735636a2641f75a323e3198907db", "text": "Computational creativity is one of the central research topics of Artificial Intelligence and Natural Language Processing today. Irony, a creative use of language, has received very little attention from the computational linguistics research point of view. In this study we investigate the automatic detection of irony casting it as a classification problem. We propose a model capable of detecting irony in the social network Twitter. In cross-domain classification experiments our model based on lexical features outperforms a word-based baseline previously used in opinion mining and achieves state-of-the-art performance. Our features are simple to implement making the approach easily replicable.", "title": "" }, { "docid": "106add4e66ec7f673450c226b86b9b76", "text": "Three different algorithms for finding blood pressure through the oscillometric method were researched and assessed. It is shown that these algorithms are based on two different underlying approaches. The estimated values of systolic and diastolic blood pressure are compared against the nurse readings. The best two approaches turned out to be the linear approximation algorithm and the points of rapidly increasing/decreasing slope algorithm. Future work on combining these two algorithms using algorithm fusion is envisaged.", "title": "" } ]
scidocsrr
46be7ddfb3a53b1acd44ce4135a41676
Active Zero-Shot Learning
[ { "docid": "17ae144806d014bb157c1c9cec5f0fd9", "text": "Given the difficulty of acquiring labeled examples for many fine-grained visual classes, there is an increasing interest in zero-shot image tagging, aiming to tag images with novel labels that have no training examples present. Using a semantic space trained by a neural language model, the current state-of-the-art embeds both images and labels into the space, wherein cross-media similarity is computed. However, for labels of relatively low occurrence, its similarity to images and other labels can be unreliable. This paper proposes Hierarchical Semantic Embedding (HierSE), a simple model that exploits the WordNet hierarchy to improve label embedding and consequently image embedding. Moreover, we identify two good tricks, namely training the neural language model using Flickr tags instead of web documents, and using partial match instead of full match for vectorizing a WordNet node. All this lets us outperform the state-of-the-art. On a test set of over 1,500 visual object classes and 1.3 million images, the proposed model beats the current best results (18.3% versus 9.4% in hit@1).", "title": "" } ]
[ { "docid": "326b1a496a416ec68770391399fc59e2", "text": "Identifying medical persona from a social media post is of paramount importance for drug marketing and pharmacovigilance. In this work, we propose multiple approaches to infer the medical persona associated with a social media post. We pose this as a supervised multi-label text classification problem. The main challenge is to identify the hidden cues in a post that are indicative of a particular persona. We first propose a large set of manually engineered features for this task. Further, we propose multiple neural network based architectures to extract useful features from these posts using pre-trained word embeddings. Our experiments on thousands of blogs and tweets show that the proposed approach results in 7% and 5% gain in F-measure over manual feature engineering based approach for blogs and tweets respectively.", "title": "" }, { "docid": "780bcf4241b412b7ed2fe428bc566120", "text": "Although chatbots have been very popular in recent years, they still have some serious weaknesses which limit the scope of their applications. One major weakness is that they cannot learn new knowledge during the conversation process, i.e., their knowledge is fixed beforehand and cannot be expanded or updated during conversation. In this paper, we propose to build a general knowledge learning engine for chatbots to enable them to continuously and interactively learn new knowledge during conversations. As time goes by, they become more and more knowledgeable and better and better at learning and conversation. We model the task as an open-world knowledge base completion problem and propose a novel technique called lifelong interactive learning and inference (LiLi) to solve it. LiLi works by imitating how humans acquire knowledge and perform inference during an interactive conversation. Our experimental results show LiLi is highly promising.", "title": "" }, { "docid": "37f14a10e08cbb4d4034d19a7d3bf24e", "text": "Development of Mobile handset applications, new standard for cellular networks have been defined. In this Paper, author intend to propose a Novel mobile Antenna that can cover more of LTE (Long Term Evolution) Bands (4G cellular networks). The proposed antenna uses structure of planar monopole antenna. Bandwidth of antenna is 0.87-0.99 GHz, 1.65-3.14 GHz and has high efficiency unlike the previous structures. The dimension of the antenna is 18mm×21mm and has FR4 substrate by 1.5mm thickness that is very compact antenna respect to the other expressed antenna.", "title": "" }, { "docid": "5364dd1ec4afce5ee01ca8bc0e6d9aed", "text": "In this paper we present a fuzzy version of SHOIN (D), the corresponding Description Logic of the ontology description language OWL DL. We show that the representation and reasoning capabilities of fuzzy SHOIN (D) go clearly beyond classical SHOIN (D). Interesting features are: (i) concept constructors are based on t-norm, t-conorm, negation and implication; (ii) concrete domains are fuzzy sets; (iii) fuzzy modifiers are allowed; and (iv) entailment and subsumption relationships may hold to some degree in the unit interval [0, 1].", "title": "" }, { "docid": "ca1eb1dc93f420ba4ca88caca10b7c62", "text": "BACKGROUND\nThe purpose of this study was to describe and evaluate the outcomes of breast reduction in cases of gigantomastia using a posterosuperior pedicle.\n\n\nMETHODS\nFour hundred thirty-one breast reductions were performed between 2004 and 2007. 
Fifty patients of 431 (11.6 percent) responded to the inclusion criteria (>1000 g of tissue removed per breast (100 breasts). The mean age was 33.2 years (range, 17 to 58 years). The average notch-to-nipple distance was 37.9 cm (range, 35 to 46 cm). The mean body mass index was 27 (range, 22 to 35 cm). The technique of the posterosuperior pedicle was used, in which the perforators from fourth anterior intercostal arteries are preserved (posterior pedicle). Results were evaluated by means of self-evaluation at 1 year postoperatively.\n\n\nRESULTS\nThe average weight resected was 1231 g (range, 1000 to 2500 g). The length of hospital stay was 2.3 days (range 2 to 4 days). Thirty seven patients evaluated their results as \"very good\" (74 percent), nine as \"good\" (18 percent), and four as \"acceptable\" (8 percent). There were no \"poor\" results. The chief complaint was insufficient breast reduction (four patients), despite the considerable improvement in their daily life (8 percent). Back pain totally resolved in 46 percent and partially (with significant improvement) in 54 percent of cases. One major and seven minor complications were recorded.\n\n\nCONCLUSIONS\nThe posterosuperior pedicle for breast reduction is a reproducible and versatile technique. The preservation of the anterior intercostal artery perforators enhances the reliability of the vascular supply to the superior pedicle.", "title": "" }, { "docid": "2088be2c5623d7491c5692b6ebd4f698", "text": "Machine learning (ML) is now widespread. Traditional software engineering can be applied to the development ML applications. However, we have to consider specific problems with ML applications in therms of their quality. In this paper, we present a survey of software quality for ML applications to consider the quality of ML applications as an emerging discussion. From this survey, we raised problems with ML applications and discovered software engineering approaches and software testing research areas to solve these problems. We classified survey targets into Academic Conferences, Magazines, and Communities. We targeted 16 academic conferences on artificial intelligence and software engineering, including 78 papers. We targeted 5 Magazines, including 22 papers. The results indicated key areas, such as deep learning, fault localization, and prediction, to be researched with software engineering and testing.", "title": "" }, { "docid": "13a1fc2a026a899379ca4f11ac6fdaf8", "text": "Recognizing objects from the point cloud captured by modern 3D sensors is an important task for robots operating autonomously in real-world environments. However, the existing well-performing approaches typically suffer from a trade-off between resolution of representation and computational efficiency. In this paper, raw point cloud normals are fed into the Point Convolution Network (PCN) without any other representation converts. The point cloud set disordered and unstructured problems are tackled by Kd-tree-based local permutation and spatial commutative pooling strategies proposed in this paper. Experiments on ModelNet illustrate that our method has two orders of magnitude less floating point computation in each non-linear mapping layer while it contributes to significant classification accuracy improvement. 
Compared to some of the state-of-the-art methods using the 3D volumetric image convolution, the PCN method also yields comparable classification accuracy.", "title": "" }, { "docid": "dd15c51d3f5f25d43169c927ac753013", "text": "After completing this article, readers should be able to: 1. List the risk factors for severe hyperbilirubinemia. 2. Distinguish between physiologic jaundice and pathologic jaundice of the newborn. 3. Recognize the clinical manifestations of acute bilirubin encephalopathy and the permanent clinical sequelae of kernicterus.4. Describe the evaluation of hyperbilirubinemia from birth through 3 months of age. 5. Manage neonatal hyperbilirubinemia, including referral to the neonatal intensive care unit for exchange transfusion.", "title": "" }, { "docid": "5b9d26fc8b5c45a26377885f75c0f509", "text": "Background: The objective of this study is to assess the feasibility of aprimary transfistula anorectoplasty (TFARP) in congenital recto-vestibular fistula without a covering colostomy in the north of Iraq. Patients and Methods: Female patients having imperforate anus with congenital rectovestibular fistula presenting to pediatric surgical centres in the north of Iraq (Mosul & Erbil) between 1995 to 2011 were reviewed in a nonrandomized manner, after excluding those with pouch colon, rectovaginal fistula and patients with colostomy. All cases underwent one stage primary (TFARP) anorectoplasty at age between 1-30 months, after on table rectal irrigation with normal saline & povidoneIodine. They were kept nil by mouth until 24 hours postoperatively. Postoperative regular anal dilatation were commenced after 2 weeks of operation when needed. The results were evaluated for need of bowel preparation, duration of surgery,, cosmetic appearance, commencement of feed and hospital stay,postoperative results. Patients were also followed up for assessment of continence and anal dilatation.", "title": "" }, { "docid": "cc10051c413cfb6f87d0759100bc5182", "text": "Social Media Hate Speech has continued to grow both locally and globally due to the increase of Online Social Media web forums like Facebook, Twitter and blogging. This has been propelled even further by smartphones and mobile data penetration locally. Global and Local terrorism has posed a vital question for technologists to investigate, prosecute, predict and prevent Social Media Hate Speech. This study provides a social media digital forensics tool through the design, development and implementation of a software application. The study will develop an application using Linux Apache MySQL PHP and Python. The application will use Scrapy Python page ranking algorithm to perform web crawling and the data will be placed in a MySQL database for data mining. The application used Agile Software development methodology with twenty websites being the subject of interest. The websites will be the sample size to demonstrate how the application", "title": "" }, { "docid": "97c40f796f104587a465f5d719653181", "text": "Although some theory suggests that it is impossible to increase one’s subjective well-being (SWB), our ‘sustainable happiness model’ (Lyubomirsky, Sheldon, & Schkade, 2005) specifies conditions under which this may be accomplished. To illustrate the three classes of predictor in the model, we first review research on the demographic/circumstantial, temperament/personality, and intentional/experiential correlates of SWB. 
We then introduce the sustainable happiness model, which suggests that changing one’s goals and activities in life is the best route to sustainable new SWB. However, the goals and activities must be of certain positive types, must fit one’s personality and needs, must be practiced diligently and successfully, must be varied in their timing and enactment, and must provide a continued stream of fresh positive experiences. Research supporting the model is reviewed, including new research suggesting that happiness intervention effects are not just placebo effects. Everyone wants to be happy. Indeed, happiness may be the ultimate fundamental ‘goal’ that people pursue in their lives (Diener, 2000), a pursuit enshrined as an inalienable right in the US Declaration of Independence. The question of what produces happiness and well-being is the subject of a great deal of contemporary research, much of it falling under the rubric of ‘positive psychology’, an emerging field that also considers issues such as what makes for optimal relationships, optimal group functioning, and optimal communities. In this article, we first review some prominent definitions, theories, and research findings in the well-being literature. We then focus in particular on the question of whether it is possible to become lastingly happier in one’s life, drawing from our recent model of sustainable happiness. Finally, we discuss some recent experimental data suggesting that it is indeed possible to boost one’s happiness level, and to sustain that newfound level. A number of possible definitions of happiness exist. Let us start with the three proposed by Ed Diener in his landmark Psychological Bulletin (1984) article. The first is ‘leading a virtuous life’, in which the person adheres to society’s vision of morality and proper conduct. This definition makes no reference to the person’s feelings or emotions, instead apparently making the implicit assumption that reasonably positive feelings will ensue if the person toes the line. A second definition of happiness involves a cognitive evaluation of life as a whole. Are you content, overall, or would you do things differently given the opportunity? This reflects a person-centered view of happiness, and necessarily taps people’s subjective judgments of whether they are satisfied with their lives. A third definition refers to typical moods. Are you typically in a positive mood (i.e., inspired, pleased, excited) or a negative mood (i.e., anxious, upset, depressed)? In this person-centered view, it is the balance of positive to negative mood that matters (Bradburn, 1969). Although many other conceptions of well-being exist (Lyubomirsky & Lepper, 1999; Ryan & Frederick, 1997; Ryff & Singer, 1996), ratings of life satisfaction and judgments of the frequency of positive and negative affect have received the majority of the research attention, illustrating the dominance of the second and third (person-centered) definitions of happiness in the research literature. Notably, positive affect, negative affect, and life satisfaction are presumed to be somewhat distinct. 
Thus, although life satisfaction typically correlates positively with positive affect and negatively with negative affect, and positive affect typically correlates negatively with negative affect, these correlations are not necessarily strong (and they also vary depending on whether one assesses a particular time or context, or the person’s experience as a whole). The generally modest correlations among the three variables means that an individual high in one indicator is not necessarily high (or low) in any other indicator. For example, a person with many positive moods might also experience many negative moods, and a person with predominantly good moods may or may not be satisfied with his or her life. As a case in point, a college student who has many friends and rewarding social interactions may be experiencing frequent pleasant affect, but, if he doubts that college is the right choice for him, he will be discontent with life. In contrast, a person experiencing many negative moods might nevertheless be satisfied with her life, if she finds her life meaningful or is suffering for a good cause. For example, a frazzled new mother may feel that all her most cherished life goals are being realized, yet she is experiencing a great deal of negative emotions on a daily basis. Still, the three quantities typically go together to an extent such that a comprehensive and reliable subjective well-being (SWB) indicator can be computed by summing positive affect and life satisfaction and subtracting negative affect. Can we trust people’s self-reports of happiness (or unhappiness)? Actually, we must: It would make little sense to claim that a person is happy if he or she does not acknowledge being happy. Still, it is possible to corroborate self-reports of well-being with reports from the respondents’ friends and", "title": "" }, { "docid": "2b6770c329721c71f619ae6da066546a", "text": "Financial Cryptography is substantially complex, requiring skills drawn from diverse and incompatible, or at least, unfriendly, disciplines. Caught between Central Banking and Cryptography, or between accountants and programmers, there is a grave danger that efforts to construct Financial Cryptography systems will simplify or omit critical disciplines. This paper presents a model that seeks to encompass the breadth of Financial Cryptography (at the clear expense of the depth of each area). By placing each discipline into a seven layer model of introductory nature, where the relationship between each adjacent layer is clear, this model should assist project, managerial and requirements people. Whilst this model is presented as efficacious, there are limits to any model. This one does not propose a methodology for design, nor a checklist for protocols. Further, given the young heritage of the model, and of the field itself, it should be taken as a hint of complexity rather than a defining guide.", "title": "" }, { "docid": "c9993b2d046bf0e796014f2a434dc1a0", "text": "Recently, diverse types of chaotic image encryption algorithms have been explored to meet the high demands in realizing secured real time image sharing applications. In this context, to achieve high sensitivity and superior key space, a multiple chaotic map based image encryption algorithm has been proposed. The proposed algorithm employs three-stage permutation and diffusion to withstand several attacks and the same is modelled in reconfigurable platform namely Field Programmable Gate Array (FPGA). 
The comprehensive analysis is done with various parameters to exhibit the robustness of the proposed algorithm and its ability to withstand brute-force, differential and statistical attacks. The synthesized result demonstrates that the reconfigurable hardware architecture takes approximately 0.098 ms for encrypting an image of size 256 × 256. Further the resource utilization and timing analyzer results are reported.", "title": "" }, { "docid": "e0d8936ecce870fbcee6b3bd4bc66d10", "text": "UNLABELLED\nMathematical modeling is a process by which a real world problem is described by a mathematical formulation. The cancer modeling is a highly challenging problem at the frontier of applied mathematics. A variety of modeling strategies have been developed, each focusing on one or more aspects of cancer.\n\n\nMATERIAL AND METHODS\nThe vast majority of mathematical models in cancer diseases biology are formulated in terms of differential equations. We propose an original mathematical model with small parameter for the interactions between these two cancer cell sub-populations and the mathematical model of a vascular tumor. We work on the assumption that, the quiescent cells' nutrient consumption is long. One the equations system includes small parameter epsilon. The smallness of epsilon is relative to the size of the solution domain.\n\n\nRESULTS\nMATLAB simulations obtained for transition rate from the quiescent cells' nutrient consumption is long, we show a similar asymptotic behavior for two solutions of the perturbed problem. In this system, the small parameter is an asymptotic variable, different from the independent variable. The graphical output for a mathematical model of a vascular tumor shows the differences in the evolution of the tumor populations of proliferating, quiescent and necrotic cells. The nutrient concentration decreases sharply through the viable rim and tends to a constant level in the core due to the nearly complete necrosis in this region.\n\n\nCONCLUSIONS\nMany mathematical models can be quantitatively characterized by ordinary differential equations or partial differential equations. The use of MATLAB in this article illustrates the important role of informatics in research in mathematical modeling. The study of avascular tumor growth cells is an exciting and important topic in cancer research and will profit considerably from theoretical input. Interpret these results to be a permanent collaboration between math's and medical oncologists.", "title": "" }, { "docid": "e40eb32613ed3077177d61ac14e82413", "text": "Preamble. Billions of people are using cell phone devices on the planet, essentially in poor posture. The purpose of this study is to assess the forces incrementally seen by the cervical spine as the head is tilted forward, into worsening posture. This data is also necessary for cervical spine surgeons to understand in the reconstruction of the neck.", "title": "" }, { "docid": "efe279fbc7307bc6a191ebb397b01823", "text": "Real-time traffic sign detection and recognition has been receiving increasingly more attention in recent years due to the popularity of driver-assistance systems and autonomous vehicles. This paper proposes an accurate and efficient traffic sign detection technique by exploring AdaBoost and support vector regression (SVR) for discriminative detector learning. 
Different from the reported traffic sign detection techniques, a novel saliency estimation approach is first proposed, where a new saliency model is built based on the traffic sign-specific color, shape, and spatial information. By incorporating the saliency information, enhanced feature pyramids are built to learn an AdaBoost model that detects a set of traffic sign candidates from images. A novel iterative codeword selection algorithm is then designed to generate a discriminative codebook for the representation of sign candidates, as detected by the AdaBoost, and an SVR model is learned to identify the real traffic signs from the detected sign candidates. Experiments on three public data sets show that the proposed traffic sign detection technique is robust and obtains superior accuracy and efficiency.", "title": "" }, { "docid": "fb1e23b956c5b60f581f9a32001a9783", "text": "Deep convolutional neural networks (CNNs) have recently shown very high accuracy in a wide range of cognitive tasks, and due to this, they have received significant interest from the researchers. Given the high computational demands of CNNs, custom hardware accelerators are vital for boosting their performance. The high energy efficiency, computing capabilities and reconfigurability of FPGA make it a promising platform for hardware acceleration of CNNs. In this paper, we present a survey of techniques for implementing and optimizing CNN algorithms on FPGA. We organize the works in several categories to bring out their similarities and differences. This paper is expected to be useful for researchers in the area of artificial intelligence, hardware architecture and system design.", "title": "" }, { "docid": "b2e958ceedce24bf6cd5e448d0b9ec84", "text": "In this paper, we propose a real-time online shopper behavior analysis system consisting of two modules which simultaneously predicts the visitor’s shopping intent and Web site abandonment likelihood. In the first module, we predict the purchasing intention of the visitor using aggregated pageview data kept track during the visit along with some session and user information. The extracted features are fed to random forest (RF), support vector machines (SVMs), and multilayer perceptron (MLP) classifiers as input. We use oversampling and feature selection preprocessing steps to improve the performance and scalability of the classifiers. The results show that MLP that is calculated using resilient backpropagation algorithm with weight backtracking produces significantly higher accuracy and F1 Score than RF and SVM. Another finding is that although clickstream data obtained from the navigation path followed during the online visit convey important information about the purchasing intention of the visitor, combining them with session information-based features that possess unique information about the purchasing interest improves the success rate of the system. In the second module, using only sequential clickstream data, we train a long short-term memory-based recurrent neural network that generates a sigmoid output showing the probability estimate of visitor’s intention to leave the site without finalizing the transaction in a prediction horizon. The modules are used together to determine the visitors which have purchasing intention but are likely to leave the site in the prediction horizon and take actions accordingly to improve the Web site abandonment and purchase conversion rates. 
Our findings support the feasibility of accurate and scalable purchasing intention prediction for virtual shopping environment using clickstream and session information data.", "title": "" } ]
scidocsrr
e29f09ce9e86990c2087f3eaa3252e0e
Returnn: The RWTH extensible training framework for universal recurrent neural networks
[ { "docid": "ec1dcdc072858748843db782b741676f", "text": "The transcription of handwritten text on images is one task in machine learning and one solution to solve it is using multi-dimensional recurrent neural networks (MDRNN) with connectionist temporal classification (CTC). The RNNs can contain special units, the long short-term memory (LSTM) cells. They are able to learn long term dependencies but they get unstable when the dimension is chosen greater than one. We defined some useful and necessary properties for the one-dimensional LSTM cell and extend them in the multi-dimensional case. Thereby we introduce several new cells with better stability. We present a method to design cells using the theory of linear shift invariant systems. The new cells are compared to the LSTM cell on the IFN/ENIT and Rimes database, where we can improve the recognition rate compared to the LSTM cell. So each application where the LSTM cells in MDRNNs are used could be improved by substituting them by the new developed cells.", "title": "" } ]
[ { "docid": "586d28a28a2e6943c294ed48216be552", "text": "Beta cell dysfunction in type-2 diabetes mellitus holds an important role not just on its pathogenesis, but also on the progression of the disease. Until now, diabetes treatment cannot restore the reduced function of pancreatic beta cell. McIntyre et al stated that there is a factor from the intestine which stimulates insulin secretion as a response on glucose.This factor is known as incretin. It is a hormone which is released by the intestine due to ingested food especially those which contain carbohydrate and fat. Currently, there are 2 types of incretin hormones which have been identified, i.e.Glucose dependent insulinotropic polypeptide (GIP) and Glucagon like peptide-1 (GLP-1). These two hormones act by triggering insulin release immediately after food ingestion, inhibiting glucagon secretion, delaying stomach emptying, and suppressing hunger sensation. Several in vitro studies have demonstrated that these two incretin hormones may increase the proliferation of pancreatic beta cell.There is a decrease of GIP function and GLP-1 amount in type-2 diabetes mellitus; thus the attempt to increase both incretin hormones - in this case by using GLP-1 agonist and DPP-IV inhibitor - is one of treatment modalities to control the glucose blood level, either as a monotherapy or a combination therapy. Currently, there are two approaches of incretin utilization as one of type-2 diabetes mellitus treatment, which is the utilization of incretin mimetic/agonist and DPP-IV inhibitor.", "title": "" }, { "docid": "c05bcb214a7ac6d18c839c70f56b05db", "text": "A 0.3-1.4 GHz all-digital phase locked loop (ADPLL) with an adaptive loop gain controller (ALGC), a 1/8-resolution fractional divider and a frequency search block is presented. The ALGC reduces the nonlinearity of the bang-bang phase-frequency detector (BBPFD), reducing output jitter. The fractional divider partially compensates for the large input phase error caused by fractional-N frequency synthesis. A fast frequency search unit using the false position method achieves frequency lock in 6 iterations that correspond to 192 reference clock cycles. A prototype ADPLL using a BBPFD with a dead-zone-free retimer, an ALGC, a fractional divider, and a digital logic implementation of a frequency search algorithm was fabricated in a 0.13-μm CMOS logic process. The core occupies 0.2 mm2 and consumes 16.5 mW with a 1.2-V supply at 1.35-GHz. Measured RMS and peak-to-peak jitter with activating the ALGC are 3.7 ps and 32 ps respectively.", "title": "" }, { "docid": "348115a5dddbc2bcdcf5552b711e82c0", "text": "Enterococci are Gram-positive, catalase-negative, non-spore-forming, facultative anaerobic bacteria, which usually inhabit the alimentary tract of humans in addition to being isolated from environmental and animal sources. They are able to survive a range of stresses and hostile environments, including those of extreme temperature (5-65 degrees C), pH (4.5-10.0) and high NaCl concentration, enabling them to colonize a wide range of niches. Virulence factors of enterococci include the extracellular protein Esp and aggregation substances (Agg), both of which aid in colonization of the host. The nosocomial pathogenicity of enterococci has emerged in recent years, as well as increasing resistance to glycopeptide antibiotics. 
Understanding the ecology, epidemiology and virulence of Enterococcus species is important for limiting urinary tract infections, hepatobiliary sepsis, endocarditis, surgical wound infection, bacteraemia and neonatal sepsis, and also stemming the further development of antibiotic resistance.", "title": "" }, { "docid": "ffde296b436c2d9f5e2aa85f731a5758", "text": "Financial institutions are interested in ensuring security and quality for their customers. Banks, for instance, need to identify and stop harmful transactions in a timely manner. In order to detect fraudulent operations, data mining techniques and customer profile analysis are commonly used. However, these approaches are not supported by Visual Analytics techniques yet. Visual Analytics techniques have potential to considerably enhance the knowledge discovery process and increase the detection and prediction accuracy of financial fraud detection systems. Thus, we propose EVA, a Visual Analytics approach for supporting fraud investigation, fine-tuning fraud detection algorithms, and thus, reducing false positive alarms.", "title": "" }, { "docid": "12b855b39278c49d448fbda9aa56cacf", "text": "Human visual system (HVS) can perceive constant color under varying illumination conditions while digital images record information of both reflectance (physical color) of objects and illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, to recover the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This is a similar formulation to Retinex theory while the MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.", "title": "" }, { "docid": "54ac39df2d9f5f9dbd21c2f6a66e2321", "text": "Automatic musical genre classification is very useful for music indexing and retrieval. In this paper, an efficient and effective automatic musical genre classification approach is presented. A set of features is extracted and used to characterize music content. A multi-layer classifier based on support vector machines is applied to musical genre classification. Support vector machines are used to obtain the optimal class boundaries between different genres of music by learning from training data. Experimental results of multi-layer support vector machines illustrate good performance in musical genre classification and are more advantageous than traditional Euclidean distance based method and other statistic learning methods.", "title": "" }, { "docid": "fbddd20271cf134e15b33e7d6201c374", "text": "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. 
Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to enquire as to whether their book has been received for review may contact the book review editor.", "title": "" }, { "docid": "eaed6338dd4d25307aab04cb1441844b", "text": "In the network communications, network intrusion is the most important concern nowadays. The increasing occurrence of network attacks is a devastating problem for network services. Various research works are already conducted to find an effective and efficient solution to prevent intrusion in the network in order to ensure network security and privacy. Machine learning is an effective analysis tool to detect any anomalous events occurred in the network traffic flow. In this paper, a combination of two machine learning algorithms is proposed to classify any anomalous behavior in the network traffic. The overall efficiency of the proposed method is dignified by evaluating the detection accuracy, false positive rate, false negative rate and time taken to detect the intrusion. The proposed method demonstrates the effectiveness of the algorithm in detecting the intrusion with higher detection accuracy of 98.76% and lower false positive rate of 0.09% and false negative rate of 1.15%, whereas the normal SVM based scheme achieved a detection accuracy of 88.03% and false positive rate of 4.2% and false negative rate of 7.77%. Keywords—Intrusion Detection; Machine Learning; Support Vector Machine, Supervised Learning", "title": "" }, { "docid": "67fe4b931c2495c6833da493707e58d1", "text": "Alan N. Steinberg Technical Director, Data Fusion ERIM International, Inc. 1101 Wilson Blvd Arlington, VA 22209 (703)528-5250 x4109 [email protected] Christopher L. Bowman Data Fusion and Neural Networks 1643 Hemlock Way Broomfield, CO 80020 (303)469-9828 [email protected] Franklin E. White Director, Program Development SPAWAR Systems Center San Diego, CA 92152 Chair, Data Fusion Group (619) 553-4036 [email protected]", "title": "" }, { "docid": "d95cc1187827e91601cb5711dbdb1550", "text": "As data sparsity remains a significant challenge for collaborative filtering (CF, we conjecture that predicted ratings based on imputed data may be more accurate than those based on the originally very sparse rating data. In this paper, we propose a framework of imputation-boosted collaborative filtering (IBCF), which first uses an imputation technique, or perhaps machine learned classifier, to fill-in the sparse user-item rating matrix, then runs a traditional Pearson correlation-based CF algorithm on this matrix to predict a novel rating. Empirical results show that IBCF using machine learning classifiers can improve predictive accuracy of CF tasks. In particular, IBCF using a classifier capable of dealing well with missing data, such as naïve Bayes, can outperform the content-boosted CF (a representative hybrid CF algorithm) and IBCF using PMM (predictive mean matching, a state-of-the-art imputation technique), without using external content information.", "title": "" }, { "docid": "b9921c1ec7fc2b6c88748ba7f9346524", "text": "As the interest in the representation of context dependent knowledge in the Semantic Web has been recognized, a number of logic based solutions have been proposed in this regard. In our recent works, in response to this need, we presented the description logic-based Contextualized Knowledge Repository (CKR) framework. 
CKR is not only a theoretical framework, but it has been effectively implemented over state-of-the-art tools for the management of Semantic Web data: inference inside and across contexts has been realized in the form of forward SPARQL-based rules over different RDF named graphs. In this paper we present the first evaluation results for such CKR implementation. In particular, in first experiment we study its scalability with respect to different reasoning regimes. In a second experiment we analyze the effects of knowledge propagation on the computation of inferences.", "title": "" }, { "docid": "a35387165cc7ca200b8eaa4b829086c8", "text": "This paper presents a new density-based clustering algorithm, ST-DBSCAN, which is based on DBSCAN. We propose three marginal extensions to DBSCAN related with the identification of (i) core objects, (ii) noise objects, and (iii) adjacent clusters. In contrast to the existing density-based clustering algorithms, our algorithm has the ability of discovering clusters according to non-spatial, spatial and temporal values of the objects. In this paper, we also present a spatial–temporal data warehouse system designed for storing and clustering a wide range of spatial–temporal data. We show an implementation of our algorithm by using this data warehouse and present the data mining results. 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c44420fbcf9e6da8e22c616a14707f45", "text": "This article discusses the impact of artificially intelligent computers to the process of design, play and educational activities. A computational process which has the necessary intelligence and creativity to take a proactive role in such activities can not only support human creativity but also foster it and prompt lateral thinking. The argument is made both from the perspective of human creativity, where the computational input is treated as an external stimulus which triggers re-framing of humans’ routines and mental associations, but also from the perspective of computational creativity where human input and initiative constrains the search space of the algorithm, enabling it to focus on specific possible solutions to a problem rather than globally search for the optimal. The article reviews four mixed-initiative tools (for design and educational play) based on how they contribute to human-machine co-creativity. These paradigms serve different purposes, afford different human interaction methods and incorporate different computationally creative processes. Assessing how co-creativity is facilitated on a per-paradigm basis strengthens the theoretical argument and provides an initial seed for future work in the burgeoning domain of mixed-initiative interaction.", "title": "" }, { "docid": "16bedecf8774c2734d591e10ca6cd6de", "text": "In this paper, we present the design and development of a simple, cost effective and low power surface EMG signal acquisition unit for a neuromuscular biofeedback system. The paper discusses the acquisition end signal chain, the synthesis of various blocks in the signal chain, wireless transmission of digitized data to a computer and realization of EMG monitoring system.", "title": "" }, { "docid": "be84c6f3e2a5141834213c6cec291b02", "text": "Streaming of video content over the Internet is experiencing an unprecedented growth. 
While video permeates every application, it also puts tremendous pressure in the network—to support users having heterogeneous accesses and expecting a high quality of experience, in a furthermore cost-effective manner. In this context, future internet paradigms, such as information centric networking (ICN), are particularly well suited to not only enhance video delivery at the client (as in the dynamic adaptive streaming over HTTP (DASH) approach), but to also naturally and seamlessly extend video support deeper in the network functions. In this paper, we contrast ICN and transmission control protocol/internet protocol (TCP/IP) with an experimental approach, where we employ several state-of-the-art DASH controllers (PANDA, AdapTech, and BOLA) on an ICN versus TCP/IP network stack. Our campaign, based on tools that we developed and made available as open-source software, includes multiple clients (homogeneous versus heterogeneous mixture and synchronous versus asynchronous arrivals), videos (up to 4k resolution), channels (e.g., DASH profiles, emulated WiFi and LTE, and real 3G/4G traces), and levels of integration with an ICN network (i.e., vanilla named data networking (NDN), wireless loss detection and recovery at the access point, and load balancing). Our results clearly illustrate, as well as quantitatively assess, the benefits of ICN-based streaming, warning about potential pitfalls that are however easy to avoid.", "title": "" }, { "docid": "b47f6272e110928a8d0db8d450e539e9", "text": "This paper presents an ocean energy power take-off system using paddle like wave energy converter (WEC), magnetic gear and efficient power converter architecture. As the WEC oscillates at a low speed of about 5-25 rpm, the direct drive generator is not an efficient design. To increase the generator speed a cost effective flux focusing magnetic gear is proposed. Power converter architecture is discussed and integration of energy storage in the system to smooth the power output is elaborated. Super-capacitor is chosen as energy storage for its better oscillatory power absorbing capability than battery. WEC is emulated in hardware using motor generator set-up and energy storage integration in the system is demonstrated.", "title": "" }, { "docid": "d0865357c86572de6bbbd6e75e3f030e", "text": "Reinforcement Learning (RL) algorithms have been promising methods for designing intelligent agents in games. Although their capability of learning in real time has been already proved, the high dimensionality of state spaces in most game domains can be seen as a significant barrier. This paper studies the popular arcade video game Ms. Pac-Man and outlines an approach to deal with its large dynamical environment. Our motivation is to demonstrate that an abstract but informative state space description plays a key role in the design of efficient RL agents. Thus, we can speed up the learning process without the necessity of Q-function approximation. Several experiments were made using the multiagent MASON platform where we measured the ability of the approach to reach optimum generic policies which enhances its generalization abilities.", "title": "" }, { "docid": "29dab83f08d38702e09acec2f65346b3", "text": "This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for content-aware image retargeting. Our network takes a source image and a target aspect ratio, and then directly outputs a retargeted image.
Retargeting is performed through a shift map, which is a pixel-wise mapping from the source to the target grid. Our method implicitly learns an attention map, which leads to a content-aware shift map for image retargeting. As a result, discriminative parts in an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure losses. We demonstrate the effectiveness of our proposed method for a retargeting application with insightful analyses.", "title": "" }, { "docid": "e1404d2926f51455690883caf01fb2f9", "text": "The integration of data produced and collected across autonomous, heterogeneous web services is an increasingly important and challenging problem. Due to the lack of global identifiers, the same entity (e.g., a product) might have different textual representations across databases. Textual data is also often noisy because of transcription errors, incomplete information, and lack of standard formats. A fundamental task during data integration is matching of strings that refer to the same entity. In this paper, we adopt the widely used and established cosine similarity metric from the information retrieval field in order to identify potential string matches across web sources. We then use this similarity metric to characterize this key aspect of data integration as a join between relations on textual attributes, where the similarity of matches exceeds a specified threshold. Computing an exact answer to the text join can be expensive. For query processing efficiency, we propose a sampling-based join approximation strategy for execution in a standard, unmodified relational database management system (RDBMS), since more and more web sites are powered by RDBMSs with a web-based front end. We implement the join inside an RDBMS, using SQL queries, for scalability and robustness reasons. Finally, we present a detailed performance evaluation of an implementation of our algorithm within a commercial RDBMS, using real-life data sets. Our experimental results demonstrate the efficiency and accuracy of our techniques.", "title": "" }, { "docid": "8ef2ab1c25af8290e7f6492fbcfb4321", "text": "This chapter discusses the topic of Goal Reasoning and its relation to Trusted Autonomy. Goal Reasoning studies how autonomous agents can extend their reasoning capabilities beyond their plans and actions, to consider their goals. Such capability allows a Goal Reasoning system to more intelligently react to unexpected events or changes in the environment. We present two different models of Goal Reasoning: Goal-Driven Autonomy (GDA) and goal refinement. We then discuss several research topics related to each, and how they relate to the topic of Trusted Autonomy. Finally, we discuss several directions of ongoing work that are particularly interesting in the context of the chapter: using a model of inverse trust as a basis for adaptive autonomy, and studying how Goal Reasoning agents may choose to rebel (i.e., act contrary to a given command). Benjamin Johnson NRC Research Associate at the US Naval Research Laboratory; Washington, DC; USA e-mail: [email protected] Michael W. Floyd Knexus Research Corporation; Springfield, VA; USA e-mail: [email protected] Alexandra Coman NRC Research Associate at the US Naval Research Laboratory; Washington, DC; USA e-mail: [email protected] Mark A.
Wilson Navy Center for Applied Research in AI, US Naval Research Laboratory; Washington, DC; USA e-mail: [email protected] David W. Aha Navy Center for Applied Research in AI, US Naval Research Laboratory; Washington, DC; USA e-mail: [email protected]", "title": "" } ]
scidocsrr
ca141021543ebfc2f0d19cde426ac735
Chaotic Cryptography Using Augmented Lorenz Equations Aided by Quantum Key Distribution
[ { "docid": "9478efffef9b34aa43a3e69765a48507", "text": "Digital chaotic ciphers have been investigated for more than a decade. However, their overall performance in terms of the tradeoff between security and speed, as well as the connection between chaos and cryptography, has not been sufficiently addressed. We propose a chaotic Feistel cipher and a chaotic uniform cipher. Our plan is to examine crypto components from both dynamical-system and cryptographical points of view, thus to explore connection between these two fields. In the due course, we also apply dynamical system theory to create cryptographically secure transformations and evaluate cryptographical security measures", "title": "" } ]
[ { "docid": "c8fd391e486efcf907424119696cdf01", "text": "AIM\nThis paper is the report of a study to explicate the components of observable behaviour that indicate a potential for violence in patients, their family and friends when presenting at an emergency department.\n\n\nBACKGROUND\nViolence towards nurses is a contemporary, multifaceted problem for the healthcare workforce globally. International literature identifies emergency departments as having high levels of violence.\n\n\nMETHOD\nA mixed method case study design was adopted, and data were collected by means of 290 hours of participant observation, 16 semi-structured interviews and 13 informal field interviews over a 5-month period in 2005. Thematic analysis of textual data was undertaken using NVivo2. Frequency counts were developed from the numerical data.\n\n\nFINDINGS\nFive distinctive elements of observable behaviour indicating potential for violence in patients, their families and friends were identified. These elements can be conceptualized as a potential nursing violence assessment framework and described through the acronym STAMP: Staring and eye contact, Tone and volume of voice, Anxiety, Mumbling and Pacing.\n\n\nCONCLUSION\nStaring and eye contact, Tone and volume of voice, Anxiety, Mumbling and Pacing provides a useful, practical nursing violence assessment framework to assist nurses to quickly identify patients, families and friends who have a potential for violence.", "title": "" }, { "docid": "7eb9e3aac9d25e3ae0628ffe0beea533", "text": "Many believe that an essential component for the discovery of the tremendous diversity in natural organisms was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspring. One hypothesized mechanism for evolvability is developmental canalization, wherein certain dimensions of variation become more likely to be traversed and others are prevented from being explored (e.g., offspring tend to have similar-size legs, and mutations affect the length of both legs, not each leg individually). While ubiquitous in nature, canalization is rarely reported in computational simulations of evolution, which deprives us of in silico examples of canalization to study and raises the question of which conditions give rise to this form of evolvability. Answering this question would shed light on why such evolvability emerged naturally, and it could accelerate engineering efforts to harness evolution to solve important engineering challenges. In this article, we reveal a unique system in which canalization did emerge in computational evolution. We document that genomes entrench certain dimensions of variation that were frequently explored during their evolutionary history. The genetic representation of these organisms also evolved to be more modular and hierarchical than expected by chance, and we show that these organizational properties correlate with increased fitness. Interestingly, the type of computational evolutionary experiment that produced this evolvability was very different from traditional digital evolution in that there was no objective, suggesting that open-ended, divergent evolutionary processes may be necessary for the evolution of evolvability.", "title": "" }, { "docid": "844bd5a95f2f7436e666d7408ca89462", "text": "Neural message passing on molecular graphs is one of the most promising methods for predicting formation energy and other properties of molecules and materials. 
In this work we extend the neural message passing model with an edge update network which allows the information exchanged between atoms to depend on the hidden state of the receiving atom. We benchmark the proposed model on three publicly available datasets (QM9, The Materials Project and OQMD) and show that the proposed model yields superior prediction of formation energies and other properties on all three datasets in comparison with the best published results. Furthermore we investigate different methods for constructing the graph used to represent crystalline structures and we find that using a graph based on K-nearest neighbors achieves better prediction accuracy than using maximum distance cutoff or the Voronoi tessellation graph.", "title": "" }, { "docid": "47c953c6accdf17c6b4c6fba8a7538eb", "text": "A mobile health application solution with biofeedback based on body sensors is very useful to perform a data collection for patients remote monitoring. This system allows comfort, mobility, and efficiency in all the process of data collection providing more confidence and operability. Falls represent a high risk for debilitated elderly people. Falls can be detected by the accelerometer presented in most of the available mobile devices. To reverse this tendency, more accurate data for patients monitoring can be obtained from the body sensors attached to a human body (such as, electro cardiogram, electromyography, blood pressure, electro dermal activity, and temperature). Then, this paper proposes a mobile solution for falls detection and biofeedback monitoring. The proposed system collects sensed data from body that is forwarded to a smartphone or tablet through Bluetooth. Mobile devices are used to display information graphically to users. All the process of data acquisition is performed in real time. The proposed system is evaluated, demonstrated, and validated through a prototype and it is ready for use.", "title": "" }, { "docid": "813c951bc3533ec52c2a9e42be88cd6d", "text": "The automatic recognition of facial expressions has been an active research topic since the early nineties. There have been several advances in the past few years in terms of face detection and tracking, feature extraction mechanisms and the techniques used for expression classification. This paper surveys some of the published work since 2001 till date. The paper presents a time-line view of the advances made in this field, the applications of automatic face expression recognizers, the characteristics of an ideal system, the databases that have been used and the advances made in terms of their standardization and a detailed summary of the state of the art. The paper also discusses facial parameterization using FACS Action Units (AUs) and MPEG-4 Facial Animation Parameters (FAPs) and the recent advances in face detection, tracking and feature extraction methods. Notes have also been presented on emotions, expressions and facial features, discussion on the six prototypic expressions and the recent studies on expression classifiers. The paper ends with a note on the challenges and the future work. This paper has been written in a tutorial style with the intention of helping students and researchers who are new to this field. Index Terms — Expression recognition, emotion classification, face detection, face tracking, facial action encoding, survey, tutorial, human-centered computing. 
", "title": "" }, { "docid": "efc11b77182119202190f97d705b3bb7", "text": "In many E-commerce recommender systems, a special class of recommendation involves recommending items to users in a life cycle. For example, customers who have babies will shop on Diapers.com within a relatively long period, and purchase different products for babies within different growth stages. Traditional recommendation algorithms produce recommendation lists similar to items that the target user has accessed before (content filtering), or compute recommendation by analyzing the items purchased by the users who are similar to the target user (collaborative filtering). Such recommendation paradigms cannot effectively resolve the situation with a life cycle, i.e., the need of customers within different stages might vary significantly. In this paper, we model users’ behavior with life cycles by employing handcrafted item taxonomies, of which the background knowledge can be tailored for the computation of personalized recommendation. In particular, our method first formalizes a user’s long-term behavior using the item taxonomy, and then identifies the exact stage of the user. By incorporating collaborative filtering into recommendation, we can easily provide a personalized item list to the user through other similar users within the same stage. An empirical evaluation conducted on a purchasing data collection obtained from Diapers.com demonstrates the efficacy of our proposed method. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "79c5085cb9f85dbcd52637a71234c199", "text": "Abstract: In this paper, a three-phase six-switch standard boost rectifier with unity-power-factor-correction is investigated. A general equation is derived that relates input phase voltage and duty ratios of switches in continuous conduction mode. Based on one of solutions and using One-Cycle Control, a Unified Constant-frequency Integration (UCI) controller for power-factor-correction (PFC) is proposed. For the standard bridge boost rectifier, unity-power-factor and low total-harmonic-distortion (THD) can be realized in all three phases with a simple circuit that is composed of one integrator with reset along with several flip-flops, comparators, and some logic and linear components. It does not require multipliers and three-phase voltage sensors, which are used in many other control approaches. In addition, it employs constant switching frequency modulation that is desirable for industrial applications. The proposed control approach is simple and reliable. Theoretical analysis is verified by simulation and experimental results.", "title": "" }, { "docid": "e9959661af6e90ab26604d35385f32d1", "text": "This paper presents an enhancement transient capless low dropout voltage regulator (LDO). To eliminate the external capacitor, the Miller effect is implemented through the use of a current amplifier. The proposed regulator LDO provides a load current of 50 mA with a dropout voltage of 200 mV, consuming 14μA quiescent current at light loads, and the regulated output voltage is 1.6 V with an input voltage range from 1.2 to 1.8 V. The proposed system is designed in 0.18 μm CMOS technology. A folded cascode amplifier with high transconductance and high power efficiency is proposed to improve the transient response of the LDO. In addition, multiloop feedback strategy employs a direct dynamic biasing technique to provide a high speed path during the load transient responses.
The simulation results presented in this paper will be compared with other results of SoC LDOs to demonstrate the advantage of the proposed topology.", "title": "" }, { "docid": "3e54aca085b4915316c97964bdb01527", "text": "This paper presents PIC microcontroller based PWM inverter controlled four switch three phase inverter (FSTPI) fed Induction Motor drive. The advantage of this inverter that uses of 4 switches instead of conventional 6 switches is lesser switching losses, lower electromagnetic interference (EMI), less complexity of control algorithms and reduced interface circuits. Simulation and experimental work are carried out and results presented to demonstrate the feasibility of the proposed approach. Simulation is carried out using MATLAB SIMULINK and in the experimental work a prototype model is built to verify the simulation results. PIC microcontroller (PIC 16F877A) is used to generate the PWM pulses for FSTPI to drive the 0.5 hp 3-phase Induction Motor.", "title": "" }, { "docid": "1a153e0afca80aaf35ffa1b457725fa3", "text": "Cloud computing can reduce mainframe management costs, so more and more users choose to build their own cloud hosting environment. In cloud computing, all the commands through the network connection, therefore, information security is particularly important. In this paper, we will explore the types of intrusion detection systems, and integration of these types, provided an effective and output reports, so system administrators can understand the attacks and damage quickly. With the popularity of cloud computing, intrusion detection system log files are also increasing rapidly, the effect is limited and inefficient by using the conventional analysis system. In this paper, we use Hadoop's MapReduce algorithm analysis of intrusion detection System log files, the experimental results also confirmed that the calculation speed can be increased by about 89%. For the system administrator, IDS Log Cloud Analysis System (called ICAS) can provide fast and high reliability of the system.", "title": "" }, { "docid": "b327f4e9a9e11ade7faff4b9781d3524", "text": "In the decade since Jeff Hawkins proposed Hierarchical Temporal Memory (HTM) as a model of neocortical computation, the theory and the algorithms have evolved dramatically. This paper presents a detailed description of HTM’s Cortical Learning Algorithm (CLA), including for the first time a rigorous mathematical formulation of all aspects of the computations. Prediction Assisted CLA (paCLA), a refinement of the CLA, is presented, which is both closer to the neuroscience and adds significantly to the computational power. Finally, we summarise the key functions of neocortex which are expressed in paCLA implementations. An Open Source project, Comportex, is the leading implementation of this evolving theory of the brain.", "title": "" }, { "docid": "5054443e7133111f2511631e4cf6e0db", "text": "Stitching multiple images together to create beautiful high-resolution panoramas is one of the most popular consumer applications of image registration and blending. In this chapter, I review the motion models (geometric transformations) that underlie panoramic image stitching, discuss direct intensity-based and feature-based registration algorithms, and present global and local alignment techniques needed to establish high-accuracy correspondences between overlapping images. I then discuss various compositing options, including multi-band and gradient-domain blending, as well as techniques for removing blur and ghosted images.
The resulting techniques can be used to create high-quality panoramas for static or interactive viewing.", "title": "" }, { "docid": "1d0dbfe15768703f7d5a1a56bbee3cac", "text": "This paper investigates the effect of non-audit services on audit quality. Following the announcement of the requirement to disclose non-audit fees, approximately one-third of UK quoted companies disclosed before the requirement became effective. Whilst distressed companies were more likely to disclose early, auditor size, directors’ shareholdings and non-audit fees were not signiŽ cantly correlated with early disclosure. These results cast doubt on the view that voluntary disclosure of non-audit fees was used to signal audit quality. The evidence also indicates a positive weakly signiŽ cant relationship between disclosed non-audit fees and audit qualiŽ cations. This suggests that when non-audit fees are disclosed, the provision of non-audit services does not reduce audit quality.", "title": "" }, { "docid": "5d9f9f01d5c443f3556984694753b7d5", "text": "The re-focusing of pharmaceutical industry research away from early discovery activities is stimulating the development of novel models of drug discovery, notably involving academia as a 'front end'. In this article the authors explore the drivers of change, the role of new entrants (universities with specialised core facilities) and novel partnership models. If they are to be sustainable and deliver, these new models must be flexible and properly funded by industry or public funding, rewarding all partners for contributions. The introduction of an industry-like process and experienced management teams signals a revolution in discovery that benefits society by improving the value gained from publicly funded research.", "title": "" }, { "docid": "9e93c2ecfd268f36d0da9e43ab63baa8", "text": "We present new and review existing algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [49]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suitable one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest the usage of extended Gauss (Patterson) quadrature formulas as the one‐dimensional basis of the construction and show their superiority in comparison to previously used sparse grid approaches based on the trapezoidal, Clenshaw–Curtis and Gauss rules in several numerical experiments and applications. For the computation of path integrals further improvements can be obtained by combining generalized Smolyak quadrature with the Brownian bridge construction.", "title": "" }, { "docid": "74770d8f7e0ac066badb9760a6a2b925", "text": "Memristor-based synaptic network has been widely investigated and applied to neuromorphic computing systems for the fast computation and low design cost. As memristors continue to mature and achieve higher density, bit failures within crossbar arrays can become a critical issue. These can degrade the computation accuracy significantly. In this work, we propose a defect rescuing design to restore the computation accuracy. In our proposed design, significant weights in a specified network are first identified and retraining and remapping algorithms are described. 
For a two layer neural network with 92.64% classification accuracy on MNIST digit recognition, our evaluation based on real device testing shows that our design can recover almost its full performance when 20% random defects are present.", "title": "" }, { "docid": "fc421a5ef2556b86c34d6f2bb4dc018e", "text": "It's been over a decade now. We've forgotten how slow the adoption of consumer Internet commerce has been compared to other Internet growth metrics. And we're surprised when security scares like spyware and phishing result in lurches in consumer use.This paper re-visits an old theme, and finds that consumer marketing is still characterised by aggression and dominance, not sensitivity to customer needs. This conclusion is based on an examination of terms and privacy policy statements, which shows that businesses are confronting the people who buy from them with fixed, unyielding interfaces. Instead of generating trust, marketers prefer to wield power.These hard-headed approaches can work in a number of circumstances. Compelling content is one, but not everyone sells sex, gambling services, short-shelf-life news, and even shorter-shelf-life fashion goods. And, after decades of mass-media-conditioned consumer psychology research and experimentation, it's far from clear that advertising can convert everyone into salivating consumers who 'just have to have' products and services brand-linked to every new trend, especially if what you sell is groceries or handyman supplies.The thesis of this paper is that the one-dimensional, aggressive concept of B2C has long passed its use-by date. Trading is two-way -- consumers' attention, money and loyalty, in return for marketers' products and services, and vice versa.So B2C is conceptually wrong, and needs to be replaced by some buzzphrase that better conveys 'B-with-C' rather than 'to-C' and 'at-C'. Implementations of 'customised' services through 'portals' have to mature beyond data-mining-based manipulation to support two-sided relationships, and customer-managed profiles.It's all been said before, but now it's time to listen.", "title": "" }, { "docid": "633d32667221f53def4558db23a8b8af", "text": "In this paper we present, ARCTREES, a novel way of visualizing hierarchical and non-hierarchical relations within one interactive visualization. Such a visualization is challenging because it must display hierarchical information in a way that the user can keep his or her mental map of the data set and include relational information without causing misinterpretation. We propose a hierarchical view derived from traditional Treemaps and augment this view with an arc diagram to depict relations. In addition, we present interaction methods that allow the exploration of the data set using Focus+Context techniques for navigation. The development was motivated by a need for understanding relations in structured documents but it is also useful in many other application domains such as project management and calendars.", "title": "" }, { "docid": "ae218abd859370a093faf83d6d81599d", "text": "In this letter, we present an autofocus routine for backprojection imagery from spotlight-mode synthetic aperture radar data. The approach is based on maximizing image sharpness and supports the flexible collection and imaging geometries of BP, including wide-angle apertures and the ability to image directly onto a digital elevation map. 
While image-quality-based autofocus approaches can be computationally intensive, in the backprojection setting, we demonstrate a natural geometric interpretation that allows for optimal single-pulse phase corrections to be derived in closed form as the solution of a quartic polynomial. The approach is applicable to focusing standard backprojection imagery, as well as providing incremental focusing in sequential imaging applications based on autoregressive backprojection. An example demonstrates the efficacy of the approach applied to real data for a wide-aperture backprojection image.", "title": "" }, { "docid": "13451c2f433b9d32563012458bb4856c", "text": "Purpose – The paper’s aim is to explore the factors that affect the online game addiction and the role that online game addiction plays in the relationship between online satisfaction and loyalty. Design/methodology/approach – A web survey of online game players was conducted, with 1,186 valid responses collected. Structure equation modeling – specifically partial least squares – was used to assess the relationships in the proposed research framework. Findings – The results indicate that perceived playfulness and descriptive norms influence online game addiction. Furthermore, descriptive norms indirectly affect online game addiction through perceived playfulness. Addiction also directly contributes to loyalty and attenuates the relationship between satisfaction and loyalty. This finding partially explains why people remain loyal to an online game despite being dissatisfied. Practical implications – Online gaming vendors should strive to create amusing game content and to maintain their online game communities in order to enhance players’ perceptions of playfulness and the effects of social influences. Also, because satisfaction is the most significant indicator of loyalty, vendors can enhance loyalty by providing better services, such as fraud prevention and the detection of cheating behaviors. Originality/value – The value of this study is that it reveals the moderating influences of addiction on the satisfaction-loyalty relationship and factors that contribute to the online game addiction. Moreover, while many past studies focused on addiction’s negative effects and on groups considered particularly vulnerable to Internet addiction, this paper extends previous work by investigating the relationship of addiction to other marketing variables and by using a more general population, mostly young adults, as research subjects.", "title": "" } ]
scidocsrr
cb447415d4dd327985a22c7164777599
An Unsupervised Segmentation Method for Retinal Vessel Using Combined Filters
[ { "docid": "85a076e58f4d117a37dfe6b3d68f5933", "text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.", "title": "" } ]
[ { "docid": "69d42340c09303b69eafb19de7170159", "text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.", "title": "" }, { "docid": "7c6d1a1b5002e54f8ee28312b2dc25ba", "text": "This study aims to investigate the direct effects of store image and service quality on brand image and purchase intention for a private label brand (PLB). This study also investigates the indirect effects mediated by perceived risk and price consciousness on these relationships. The sample in this study consisted of three hundred and sixty (360) customers of the Watsons and Cosmed chain of drugstores. The pre-test results identified ‘‘Watsons’’ and ‘‘My Beauty Diary’’ as the research brands of the PLB for the two stores, respectively. This study uses LISREL to examine the hypothesized relationships. This study reveals that (1) store image has a direct and positive effect on the purchase intention of the PLB; (2) service quality has a direct and positive effect on the PLB image; (3) the perceived risk of PLB products has a mediating effect on the relationship between the brand image and the consumers purchase intention of the PLB. 2010 Australian and New Zealand Marketing Academy. All rights reserved.", "title": "" }, { "docid": "2b985f234933a34b150ef3819305b282", "text": "The constraint of difference is known to the constraint programming community since Lauriere introduced Alice in 1978. Since then, several strategies have been designed to solve the alldifferent constraint. This paper surveys the most important developments over the years regarding the alldifferent constraint. First we summarize the underlying concepts and results from graph theory and integer programming. Then we give an overview and an abstract comparison of different solution strategies. In addition, the symmetric alldifferent constraint is treated. Finally, we show how to apply cost-based filtering to the alldifferent constraint. A preliminary version of this paper appeared as [14].", "title": "" }, { "docid": "f38b35c6d21e562ae6e5015c43ed3d53", "text": "We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways, identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored)-restricted Boltzmann machines. 
At each point in time, the observations consist of foveated images, with decaying resolution toward the periphery of the gaze. The control pathway models the location, orientation, scale, and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway, we encounter an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies that operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a gaussian process. This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain.", "title": "" }, { "docid": "41c317b0e275592ea9009f3035d11a64", "text": "We introduce a distribution based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually in form of dense real-valued lowdimensional vector and therefore the distribution of them can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with that in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in two languages respectively and minimize the dissimilarity between them through standard back propagation algorithm. Our learned bilingual word embeddings allow to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related languages and substantially different languages.", "title": "" }, { "docid": "22cd14ac697aa68750c15befa420ed99", "text": "Internet of Thing (IoT) is going to make such a world where physical things (smart home appliances, and smart watches etc.) revolutionized the information networks and services providing systems which provide innovative and smart services to human. With smart home technology, our living area is becoming more comfortable and convenient. Smart home technology provides automated, intelligent, smart, innovative and ubiquitous services to residential users through Information Communication Technology (ICT). Due to internet-connected, dynamic and heterogeneous nature of smart home environment creates new security, authentication and privacy challenges. In this paper, we investigate security attacks in smart home and evaluate their impact on the overall system security. We identified security requirements and solutions in the smart home environment. Based on several scenarios, we suggest to set security goals for the smart home environment. Based on historical data, we forecast security attacks (like malware, virus etc.) 
and estimate how many attacks are expected to be launched in coming years.", "title": "" }, { "docid": "7f68d112267f94d91cd4c45ecb7f874a", "text": "In this paper we study the problem of learning Rectified Linear Units (ReLUs) which are functions of the form x ↦ max(0, ⟨w,x⟩) with w ∈ R denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations is fewer than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captures known side-information about its structure. We focus on the realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We show that projected gradient descent, when initialized at 0, converges at a linear rate to the planted model with a number of samples that is optimal up to numerical constants. Our results on the dynamics of convergence of these very shallow neural nets may provide some insights towards understanding the dynamics of deeper architectures.", "title": "" }, { "docid": "0552c786fe0030df69b2095d78c20485", "text": "In recent years, real-time processing and analytics systems for big data--in the context of Business Intelligence (BI)--have received a growing attention. The traditional BI platforms that perform regular updates on daily, weekly or monthly basis are no longer adequate to satisfy the fast-changing business environments. However, due to the nature of big data, it has become a challenge to achieve the real-time capability using the traditional technologies. The recent distributed computing technology, MapReduce, provides off-the-shelf high scalability that can significantly shorten the processing time for big data; Its open-source implementation such as Hadoop has become the de-facto standard for processing big data, however, Hadoop has the limitation of supporting real-time updates. The improvements in Hadoop for the real-time capability, and the other alternative real-time frameworks have been emerging in recent years. This paper presents a survey of the open source technologies that support big data processing in a real-time/near real-time fashion, including their system architectures and platforms.", "title": "" }, { "docid": "e1001ebf3a30bcb2599fae6dae8f83e9", "text": "The notion of the \"stakeholders\" of the firm has drawn ever-increasing attention since Freeman published his seminal book on Strategic Management: A Stakeholder Approach in 1984. In the understanding of most scholars in the field, stakeholder theory is not a special theory on a firm's constituencies but sets out to replace today's prevailing neoclassical economic concept of the firm. As such, it is seen as the superior theory of the firm. Though stakeholder theory explicitly is a theory on the firm, that is, on a private sector entity, some scholars try to apply it to public sector organizations, and, in particular, to e-government settings.
This paper summarizes stakeholder theory, discusses its premises and justifications, compares its tracks, sheds light on recent attempts to join the two tracks, and discusses the benefits and limits of its practical applicability to the public sector using the case of a recent e-government initiative in New York State.", "title": "" }, { "docid": "f5ccb75eed1be1d5c0c8e98b5fcf565c", "text": "In this paper, a self-guiding multimodal LSTM (sg-LSTM) image captioning model is proposed to handle uncontrolled imbalanced real-world image-sentence dataset. We collect FlickrNYC dataset from Flickr as our testbed with 306, 165 images and the original text descriptions uploaded by the users are utilized as the ground truth for training. Descriptions in FlickrNYC dataset vary dramatically ranging from short term-descriptions to long paragraph-descriptions and can describe any visual aspects, or even refer to objects that are not depicted. To deal with the imbalanced and noisy situation and to fully explore the dataset itself, we propose a novel guiding textual feature extracted utilizing a multimodal LSTM (m-LSTM) model. Training of m-LSTM is based on the portion of data in which the image content and the corresponding descriptions are strongly bonded. Afterwards, during the training of sg-LSTM on the rest training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions. By integrating these input components into a multimodal block, we aim to form a training scheme with the textual information tightly coupled with the image content. The experimental results demonstrate that the proposed sg-LSTM model outperforms the traditional state-of-the-art multimodal RNN captioning framework in successfully describing the key components of the input images.", "title": "" }, { "docid": "a7b0f0455482765efd3801c3ae9f85b7", "text": "The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to create models with semantic errors. Such errors are especially serious, because errors in the early phases of systems development are among the most costly and hardest to correct. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. Accordingly, this paper proposes a mapping from BPMN to a formal language, namely Petri nets, for which efficient analysis techniques are available. The proposed mapping has been implemented as a tool that, in conjunction with existing Petri net-based tools, enables the static analysis of BPMN models. The formalisation also led to the identification of deficiencies in the BPMN standard specification.", "title": "" }, { "docid": "77cf780ce8b2c7b6de57c83f6b724dba", "text": "BACKGROUND\nAlthough there are several case reports of facial skin ischemia/necrosis caused by hyaluronic acid filler injections, no systematic study of the clinical outcomes of a series of cases with this complication has been reported.\n\n\nMETHODS\nThe authors report a study of 20 consecutive patients who developed impending nasal skin necrosis as a primary concern, after nose and/or nasolabial fold augmentation with hyaluronic acid fillers. 
The authors retrospectively reviewed the clinical outcomes and the risk factors for this complication using case-control analysis.\n\n\nRESULTS\nSeven patients (35 percent) developed full skin necrosis, and 13 patients (65 percent) recovered fully after combination treatment with hyaluronidase. Although the two groups had similar age, sex, filler injection sites, and treatment for the complication, 85 percent of the patients in the full skin necrosis group were late presenters who did not receive the combination treatment with hyaluronidase within 2 days after the vascular complication first appeared. In contrast, just 15 percent of the patients in the full recovery group were late presenters (p = 0.004).\n\n\nCONCLUSIONS\nNose and nasolabial fold augmentations with hyaluronic acid fillers can lead to impending nasal skin necrosis, possibly caused by intravascular embolism and/or extravascular compression. The key for preventing the skin ischemia from progressing to necrosis is to identify and treat the ischemia as early as possible. Early (<2 days) combination treatment with hyaluronidase is associated with the full resolution of the complication.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.", "title": "" }, { "docid": "8be9a353493fe641d5a8ad5fe5dea3b6", "text": "Fast and efficient network intrusion detection is a very challenging issue as the size of network traffic has become increasingly big and complex. A real time intrusion detection system should be able to process large size of network traffic data as quickly as possible in order to prevent intrusion in the communication system as early as possible. In this paper, we have employed five machine learning algorithms such as Logistic regression, Support vector machines, Random forest, Gradient Boosted Decision trees & Naive Bayes for detecting the attack traffic. For processing and detecting the attack traffic as fast as possible, we have used Apache Spark, a big data processing tool for detecting and analysis of intrusion in the communication network traffic. Performance comparison of intrusion detection schemes are evaluated in terms of training time, prediction time, accuracy, sensitivity and specificity on a real time KDD'99 data set.", "title": "" }, { "docid": "ec2eb33d3bf01df406409a31cc0a0e1f", "text": "Brain graphs provide a relatively simple and increasingly popular way of modeling the human brain connectome, using graph theory to abstractly define a nervous system as a set of nodes (denoting anatomical regions or recording electrodes) and interconnecting edges (denoting structural or functional connections). Topological and geometrical properties of these graphs can be measured and compared to random graphs and to graphs derived from other neuroscience data or other (nonneural) complex systems. Both structural and functional human brain graphs have consistently demonstrated key topological properties such as small-worldness, modularity, and heterogeneous degree distributions. Brain graphs are also physically embedded so as to nearly minimize wiring cost, a key geometric property. Here we offer a conceptual review and methodological guide to graphical analysis of human neuroimaging data, with an emphasis on some of the key assumptions, issues, and trade-offs facing the investigator.", "title": "" }, { "docid": "97202b135a3c4d641b6ffe8f36778619", "text": "This paper proposes a new method for human posture recognition from top-view depth maps on small training datasets. 
There are two strategies developed to leverage the capability of convolution neural network (CNN) in mining the fundamental and generic features for recognition. First, the early layers of CNN should serve the function to extract feature without specific representation. By applying the concept of transfer learning, the first few layers from the pre-learned VGG model can be used directly without further fine-tuning. To alleviate the computational loading and to increase the accuracy of our partially transferred model, a cross-layer inheriting feature fusion (CLIFF) is proposed by using the information from the early layer in fully connected layer without further processing. The experimental result shows that combination of partial transferred model and CLIFF can provide better performance than VGG16 [1] model with re-trained FC layer and other hand-crafted features like RBPs [2].", "title": "" }, { "docid": "0b1bb42b175ed925b357112d869d3ddd", "text": "While location is one of the most important context information in mobile and ubiquitous computing, large-scale deployment of indoor localization system remains elusive.\n In this work, we propose PiLoc, an indoor localization system that utilizes opportunistically sensed data contributed by users. Our system does not require manual calibration, prior knowledge and infrastructure support. The key novelty of PiLoc is that it merges walking segments annotated with displacement and signal strength information from users to derive a map of walking paths annotated with radio signal strengths.\n We evaluate PiLoc over 4 different indoor areas. Evaluation shows that our system can achieve an average localization error of 1.5m.", "title": "" }, { "docid": "7e354ca56591a9116d651b53c6ab744d", "text": "We have implemented a concurrent copying garbage collector that uses replicating garbage collection. In our design, the client can continuously access the heap during garbage collection. No low-level synchronization between the client and the garbage collector is required on individual object operations. The garbage collector replicates live heap objects and periodically synchronizes with the client to obtain the client's current root set and mutation log. An experimental implementation using the Standard ML of New Jersey system on a shared-memory multiprocessor demonstrates excellent pause time performance and moderate execution time speedups.", "title": "" }, { "docid": "70593bbda6c88f0ac10e26768d74b3cd", "text": "Type 2 diabetes mellitus (T2DM) is a chronic disease that often results in multiple complications. Risk prediction and profiling of T2DM complications is critical for healthcare professionals to design personalized treatment plans for patients in diabetes care for improved outcomes. In this paper, we study the risk of developing complications after the initial T2DM diagnosis from longitudinal patient records. We propose a novel multi-task learning approach to simultaneously model multiple complications where each task corresponds to the risk modeling of one complication. Specifically, the proposed method strategically captures the relationships (1) between the risks of multiple T2DM complications, (2) between the different risk factors, and (3) between the risk factor selection patterns. The method uses coefficient shrinkage to identify an informative subset of risk factors from high-dimensional data, and uses a hierarchical Bayesian framework to allow domain knowledge to be incorporated as priors.
The proposed method is favorable for healthcare applications because in addition to improved prediction performance, relationships among the different risks and risk factors are also identified. Extensive experimental results on a large electronic medical claims database show that the proposed method outperforms state-of-the-art models by a significant margin. Furthermore, we show that the risk associations learned and the risk factors identified lead to meaningful clinical insights. CCS CONCEPTS •Information systems→ Data mining; •Applied computing → Health informatics;", "title": "" }, { "docid": "9b98e43825bd36736c7c87bb2cee5a8c", "text": "Corresponding Author: Daniel Strmečki Faculty of Organization and Informatics, Pavlinska 2, 42000 Varaždin, Croatia Email: [email protected] Abstract: Gamification is the usage of game mechanics, dynamics, aesthetics and game thinking in non-game systems. Its main objective is to increase user’s motivation, experience and engagement. For the same reason, it has started to penetrate in e-learning systems. However, when using gamified design elements in e-learning, we must consider various types of learners. In the phases of analysis and design of such elements, the cooperation of education, technology, pedagogy, design and finance experts is required. This paper discusses the development phases of introducing gamification into e-learning systems, various gamification design elements and their suitability for usage in e-learning systems. Several gamified design elements are found suited for e-learning (including points, badges, trophies, customization, leader boards, levels, progress tracking, challenges, feedback, social engagement loops and the freedom to fail). Advices for the usage of each of those elements in e-learning systems are also provided in this study. Based on those advises and the identified phases of introducing gamification into e-learning systems, we conducted an experimental study to investigate the effectiveness of gamification of an informatics online course. Results showed that students enrolled in the gamified version of the online module achieved greater learning success. Positive results encourage us to investigate the gamification of online learning content for other topics and courses. We also encourage more research on the influence of specific gamified design elements on learner’s motivation and engagement.", "title": "" }, { "docid": "d0526f6c589dc04284312a83ac5d7fff", "text": "Paper delivered at the International Conference on \" Cluster management in structural policy – International experiences and consequences for Northrhine-Westfalia \" , Duisburg, december 5 th", "title": "" } ]
scidocsrr
3a5e093608cab145dd5a41d5dbe84699
Interaction between the native and second language phonetic subsystems
[ { "docid": "eb30c6946e802086ac6de5848897a648", "text": "To determine how age of acquisition influences perception of second-language speech, the Speech Perception in Noise (SPIN) test was administered to native Mexican-Spanish-speaking listeners who learned fluent English before age 6 (early bilinguals) or after age 14 (late bilinguals) and monolingual American-English speakers (monolinguals). Results show that the levels of noise at which the speech was intelligible were significantly higher and the benefit from context was significantly greater for monolinguals and early bilinguals than for late bilinguals. These findings indicate that learning a second language at an early age is important for the acquisition of efficient high-level processing of it, at least in the presence of noise.", "title": "" } ]
[ { "docid": "5e58638e766904eb84380b53cae60df2", "text": "BACKGROUND\nAneurysmal subarachnoid hemorrhage (SAH) accounts for 5% of strokes and carries a poor prognosis. It affects around 6 cases per 100,000 patient years occurring at a relatively young age.\n\n\nMETHODS\nCommon risk factors are the same as for stroke, and only in a minority of the cases, genetic factors can be found. The overall mortality ranges from 32% to 67%, with 10-20% of patients with long-term dependence due to brain damage. An explosive headache is the most common reported symptom, although a wide spectrum of clinical disturbances can be the presenting symptoms. Brain computed tomography (CT) allow the diagnosis of SAH. The subsequent CT angiography (CTA) or digital subtraction angiography (DSA) can detect vascular malformations such as aneurysms. Non-aneurysmal SAH is observed in 10% of the cases. In patients surviving the initial aneurysmal bleeding, re-hemorrhage and acute hydrocephalus can affect the prognosis.\n\n\nRESULTS\nAlthough occlusion of an aneurysm by surgical clipping or endovascular procedure effectively prevents rebleeding, cerebral vasospasm and the resulting cerebral ischemia occurring after SAH are still responsible for the considerable morbidity and mortality related to such a pathology. A significant amount of experimental and clinical research has been conducted to find ways in preventing these complications without sound results.\n\n\nCONCLUSIONS\nEven though no single pharmacological agent or treatment protocol has been identified, the main therapeutic interventions remain ineffective and limited to the manipulation of systemic blood pressure, alteration of blood volume or viscosity, and control of arterial dioxide tension.", "title": "" }, { "docid": "f9fd7fc57dfdfbfa6f21dc074c9e9daf", "text": "Recently, Lin and Tsai proposed an image secret sharing scheme with steganography and authentication to prevent participants from the incidental or intentional provision of a false stego-image (an image containing the hidden secret image). However, dishonest participants can easily manipulate the stego-image for successful authentication but cannot recover the secret image, i.e., compromise the steganography. In this paper, we present a scheme to improve authentication ability that prevents dishonest participants from cheating. The proposed scheme also defines the arrangement of embedded bits to improve the quality of stego-image. Furthermore, by means of the Galois Field GF(2), we improve the scheme to a lossless version without additional pixels. 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "acd6c7715fb1e15a123778033672f070", "text": "Classical statistical inference of experimental data assumes that the treatment affects the test group but not the control group. This assumption will typically be violated when experimenting in marketplaces because of general equilibrium effects: changing test demand affects the supply available to the control group. We illustrate this with an email marketing campaign performed by eBay. Ignoring test-control interference leads to estimates of the campaign's effectiveness which are too large by a factor of around two. 
We present the simple economics of this bias in a supply and demand framework, showing that the bias is larger in magnitude where there is more inelastic supply, and is positive if demand is elastic.", "title": "" }, { "docid": "f69d19efaa44b8e7dd2f190ec9316658", "text": "A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and changes between them need to reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.", "title": "" }, { "docid": "919483807937c5aed6f4529b0db29540", "text": "Tabular data is an abundant source of information on the Web, but remains mostly isolated from the latter’s interconnections since tables lack links and computer-accessible descriptions of their structure. In other words, the schemas of these tables — attribute names, values, data types, etc. — are not explicitly stored as table metadata. Consequently, the structure that these tables contain is not accessible to the crawlers that power search engines and thus not accessible to user search queries. We address this lack of structure with a new method for leveraging the principles of table construction in order to extract table schemas. Discovering the schema by which a table is constructed is achieved by harnessing the similarities and differences of nearby table rows through the use of a novel set of features and a feature processing scheme. The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature encoding method called logarithmic binning, which is specifically designed for the data table extraction task. Our method provides considerable improvement over the wellknown WebTables schema extraction method. In contrast with previous work that focuses on extracting individual relations, our method excels at correctly interpreting full tables, thereby being capable of handling general tables such as those found in spreadsheets, instead of being restricted to HTML tables as is the case with the WebTables method. We also extract additional schema characteristics, such as row groupings, which are important for supporting information retrieval tasks on tabular data.", "title": "" }, { "docid": "709d82edae451b9afa621cb4c64d79da", "text": "In many industries, firms are seeking to cut concept to customer development time, improve quality, reduce the cost of new products and facilitate the smooth launch of new products. 
Prior research has indicated that the integration of material suppliers into the new product development (NPD) cycle can provide substantial benefits towards achieving these goals. This involvement may range from simple consultation with suppliers on design ideas to making suppliers fully responsible for the design of components or systems they will supply. Moreover, suppliers may be involved at different stages of the new product development process. Early supplier involvement is a key coordinating process in supply chain design, product design and process design. Several important questions regarding supplier involvement in new product development remain unanswered. Specifically, we look at the issue of what managerial practices affect new product development team effectiveness when suppliers are to be involved. We also consider whether these factors differ depending on when the supplier is to be involved and what level of responsibility is to be given to the supplier. Finally, we examine whether supplier involvement in new product development can produce significant improvements in financial returns and/or product design performance. We test these proposed relationships using survey data collected from a group of global organizations and find support for the relationships based on the results of a multiple regression analysis. # 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e07aa24f085a06a255b12382fccf20ed", "text": "This paper presents an end-to-end question answering system for legal texts. This system includes two main phases. In the first phase, our system will retrieve articles from Japanese Civil Code that are relevant with the given question using the cosine distance after the given question and articles are converted into vectors using TF-IDF weighting scheme. Then, a ranking model can be applied to re-rank these retrieved articles using a learning to rank algorithm and annotated corpus. In the second phase, we adapted two deep learning models, which has been proposed for the Natural language inference task, to check the entailment relationship between a question and its related articles including a sentence encoding-based model and a decomposable attention model. Experimental results show that our approaches can be a promising approach for information extraction/entailment in legal texts.", "title": "" }, { "docid": "3ba87a9a84f317ef3fd97c79f86340c1", "text": "Programmers often need to reason about how a program evolved between two or more program versions. Reasoning about program changes is challenging as there is a significant gap between how programmers think about changes and how existing program differencing tools represent such changes. For example, even though modification of a locking protocol is conceptually simple and systematic at a code level, diff extracts scattered text additions and deletions per file. To enable programmers to reason about program differences at a high level, this paper proposes a rule-based program differencing approach that automatically discovers and represents systematic changes as logic rules. To demonstrate the viability of this approach, we instantiated this approach at two different abstraction levels in Java: first at the level of application programming interface (API) names and signatures, and second at the level of code elements (e.g., types, methods, and fields) and structural dependences (e.g., method-calls, field-accesses, and subtyping relationships). 
The benefit of this approach is demonstrated through its application to several open source projects as well as a focus group study with professional software engineers from a large e-commerce company.", "title": "" }, { "docid": "481f4a4b14d4594d8b023f9df074dfeb", "text": "We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analyses that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used to an important and novel application SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks if a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.", "title": "" }, { "docid": "6347b4594d9bf79cf1ec03711ad79176", "text": "The paper deals with a Wireless Sensor Network (WSN) as a reliable solution for capturing the kinematics of a fire front spreading over a fuel bed. To provide reliable information in fire studies and support fire fighting strategies, a Wireless Sensor Network must be able to perform three sequential actions: 1) sensing thermal data in the open as the gas temperature; 2) detecting a fire i.e., the spatial position of a flame; 3) tracking the fire spread during its spatial and temporal evolution. One of the great challenges in performing fire front tracking with a WSN is to avoid the destruction of motes by the fire. This paper therefore shows the performance of Wireless Sensor Network when the motes are protected with a thermal insulation dedicated to track a fire spreading across vegetative fuels on a field scale. The resulting experimental WSN is then used in series of wildfire experiments performed in the open in vegetation areas ranging in size from 50 to 1,000 m(2).", "title": "" }, { "docid": "b9b7e15f0442ed205248d8dc64c74f2d", "text": "Platforms such as Twitter have provided researchers with ample opportunities to analytically study social phenomena. There are however, significant computational challenges due to the enormous rate of production of new information: researchers are therefore, often forced to analyze a judiciously selected “sample” of the data. Like other social media phenomena, information diffusion is a social process–it is affected by user context, and topic, in addition to the graph topology. This paper studies the impact of different attribute and topology based sampling strategies on the discovery of an important social media phenomena–information diffusion. We examine several widely-adopted sampling methods that select nodes based on attribute (random, location, and activity) and topology (forest fire) as well as study the impact of attribute based seed selection on topology based sampling. Then we develop a series of metrics for evaluating the quality of the sample, based on user activity (e.g. volume, number of seeds), topological (e.g. reach, spread) and temporal characteristics (e.g. rate). We additionally correlate the diffusion volume metric with two external variables–search and news trends. 
Our experiments reveal that for small sample sizes (30%), a sample that incorporates both topology and usercontext (e.g. location, activity) can improve on naı̈ve methods by a significant margin of ∼15-20%.", "title": "" }, { "docid": "040b56db2f85ad43ed9f3f9adbbd5a71", "text": "This study examined the relations between source credibility of eWOM (electronic word of mouth), perceived risk and food products customer's information adoption mediated by argument quality and information usefulness. eWOM has been commonly used to refer the customers during decision-making process for food commodities. Based on this study, we used Elaboration Likelihood Model of information adoption presented by Sussman and Siegal (2003) to check the willingness to buy. Non-probability purposive samples of 300 active participants were taken through questionnaire from several regions of the Republic of China and analyzed the data through structural equation modeling (SEM) accordingly. We discussed that whether eWOM source credibility and perceived risk would impact the degree of information adoption through argument quality and information usefulness. It reveals that eWOM has positively influenced on perceived risk by source credibility to the extent of information adoption and, for this, customers use eWOM for the reduction of the potential hazards when decision making. Companies can make their marketing strategies according to their target towards loyal clients' needs through online foodproduct forums review sites. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4af06d0e333f681a2d9afdb3298b549b", "text": "In this paper we present CRF-net, a CNN-based solution for estimating the camera response function from a single photograph. We follow the recent trend of using synthetic training data, and generate a large set of training pairs based on a small set of radio-metrically linear images and the DoRF database of camera response functions. The resulting CRF-net estimates the parameters of the EMoR camera response model directly from a single photograph. Experimentally, we show that CRF-net is able to accurately recover the camera response function from a single photograph under a wide range of conditions.", "title": "" }, { "docid": "36fbc5f485d44fd7c8726ac0df5648c0", "text": "We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting : Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment as long the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants. To achieve these guarantees we formalize and realize in the universal composition setting a suitable form of forward secure digital signatures and a new type of verifiable random function that maintains unpredictability under malicious key generation. Our security proof develops a general combinatorial framework for the analysis of semi-synchronous blockchains that may be of independent interest. We prove our protocol secure under standard cryptographic assumptions in the random oracle model.", "title": "" }, { "docid": "0b12d6a973130f7317956326320ded03", "text": "We present simple and computationally efficient nonparametric estimators of Rényi entropy and mutual information based on an i.i.d. 
sample drawn from an unknown, absolutely continuous distribution over R. The estimators are calculated as the sum of p-th powers of the Euclidean lengths of the edges of the ‘generalized nearest-neighbor’ graph of the sample and the empirical copula of the sample respectively. For the first time, we prove the almost sure consistency of these estimators and upper bounds on their rates of convergence, the latter of which under the assumption that the density underlying the sample is Lipschitz continuous. Experiments demonstrate their usefulness in independent subspace analysis.", "title": "" }, { "docid": "eb0a619bab5c43193bbc8de2d65de41f", "text": "Back pain is the most expensive industrial injury. Representatives at DeRoyal, a manufacturer of orthopaedic soft goods, believed a total back hygiene program that included aggressive training in body mechanics would reduce the cost associated with back injury. Prior to implementing a back educational program, the knowledge base of employees was assessed. A survey designed to measure knowledge and application of proper body mechanics was developed and distributed to 100 randomly selected workers. Most workers know the best way to lift. But, they had less knowledge of the best way to push. They also did not always use proper technique in lifting without twisting, planning lifts, and proper standing. A subset of workers that has previously attended back training sessions (n = 6) all gave the correct answers on 4 or the 5 survey items. Orthopaedic nurses could play a critical role in the industrial setting through assessment of body mechanic knowledge and implementation of educational programs.", "title": "" }, { "docid": "502cae1daa2459ed0f826ed3e20c44e4", "text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.", "title": "" }, { "docid": "04e627bbb63da238d7d87e86a8eb9641", "text": "Parsing sentences to linguisticallyexpressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). 
The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the 86.69% Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation.", "title": "" }, { "docid": "37cca578319bd55d0784c24fc9773913", "text": "Natural DNA can encode complexity on an enormous scale. Researchers are attempting to achieve the same representational efficiency in computers by implementing developmental encodings, i.e. encodings that map the genotype to the phenotype through a process of growth from a small starting point to a mature form. A major challenge in in this effort is to find the right level of abstraction of biological development to capture its essential properties without introducing unnecessary inefficiencies. In this paper, a novel abstraction of natural development, called Compositional Pattern Producing Networks (CPPNs), is proposed. Unlike currently accepted abstractions such as iterative rewrite systems and cellular growth simulations, CPPNs map to the phenotype without local interaction, that is, each individual component of the phenotype is determined independently of every other component. Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.", "title": "" }, { "docid": "efcd07019ab4b779a0c3c3a880504c6a", "text": "The purpose of semantic query optimization is to use semantic knowledge (e.g., integrity constraints) for transforming a query into a form that may be answered more efficiently than the original version. In several previous papers we described and proved the correctness of a method for semantic query optimization in deductive databases couched in first-order logic. This paper consolidates the major results of these papers emphasizing the techniques and their applicability for optimizing relational queries. Additionally, we show how this method subsumes and generalizes earlier work on semantic query optimization. We also indicate how semantic query optimization techniques can be extended to databases that support recursion and integrity constraints that contain disjunction, negation, and recursion.", "title": "" } ]
scidocsrr
4720de8dbb59fb7924798dfafdc04296
A Method of Aircraft Detection Using Fully Convolutional Network
[ { "docid": "aa8c85df6cf5291f98b707b995ec1768", "text": "http://www.sciencemag.org/cgi/content/full/313/5786/504 version of this article at: including high-resolution figures, can be found in the online Updated information and services, http://www.sciencemag.org/cgi/content/full/313/5786/504/DC1 can be found at: Supporting Online Material found at: can be related to this article A list of selected additional articles on the Science Web sites http://www.sciencemag.org/cgi/content/full/313/5786/504#related-content http://www.sciencemag.org/cgi/content/full/313/5786/504#otherarticles , 6 of which can be accessed for free: cites 8 articles This article 15 article(s) on the ISI Web of Science. cited by This article has been http://www.sciencemag.org/cgi/content/full/313/5786/504#otherarticles 4 articles hosted by HighWire Press; see: cited by This article has been http://www.sciencemag.org/about/permissions.dtl in whole or in part can be found at: this article permission to reproduce of this article or about obtaining reprints Information about obtaining", "title": "" }, { "docid": "07ff0274408e9ba5d6cd2b1a2cb7cbf8", "text": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).", "title": "" } ]
[ { "docid": "046e17fc432ebfae586c508adb686ace", "text": "FREEBASE contains entities and relation information but is highly incomplete. Relevant information is ubiquitous in web text, but extraction deems challenging. We present JEDI, an automated system to jointly extract typed named entities and FREEBASE relations using dependency pattern from text. An innovative method for constraint solving on entity types of multiple relations is used to disambiguate pattern. The high precision in the evaluation supports our claim that we can detect entities and relations together, alleviating the need to train a custom classifier for an entity type1.", "title": "" }, { "docid": "3898b7f3d55e96781c4c1dd3d72f1045", "text": "In addition to trait EI, Cherniss identifies three other EI models whose main limitations must be succinctly mentioned, not least because they provided the impetus for the development of the trait EI model. Bar-On’s (1997) model is predicated on the problematic assumption that emotional intelligence (or ‘‘ability’’ or ‘‘competence’’ or ‘‘skill’’ or ‘‘potential’’—terms that appear to be used interchangeably in his writings) can be validly assessed through self-report questions of the type ‘‘It is easy for me to understand my emotions.’’ Psychometrically, as pointed out in Petrides and Furnham (2001), this is not a viable position because such self-report questions can only be tapping into self-perceptions rather than into abilities or competencies. This poses a fundamental threat to the validity of this model, far more serious than the pervasive faking problem noted by several authors (e.g., Grubb & McDaniel, 2008). Goleman’s (1995) model is difficult to evaluate scientifically because of its reliance on", "title": "" }, { "docid": "179be5148a006cd12d0182686c36852b", "text": "A simple, fast, and approximate voxel-based approach to 6-DOF haptic rendering is presented. It can reliably sustain a 1000 Hz haptic refresh rate without resorting to asynchronous physics and haptic rendering loops. It enables the manipulation of a modestly complex rigid object within an arbitrarily complex environment of static rigid objects. It renders a short-range force field surrounding the static objects, which repels the manipulated object and strives to maintain a voxel-scale minimum separation distance that is known to preclude exact surface interpenetration. Force discontinuities arising from the use of a simple penalty force model are mitigated by a dynamic simulation based on virtual coupling. A generalization of octree improves voxel memory efficiency. In a preliminary implementation, a commercially available 6-DOF haptic prototype device is driven at a constant 1000 Hz haptic refresh rate from one dedicated haptic processor, with a separate processor for graphics. This system yields stable and convincing force feedback for a wide range of user controlled motion inside a large, complex virtual environment, with very few surface interpenetration events. This level of performance appears suited to applications such as certain maintenance and assembly task simulations that can tolerate voxel-scale minimum separation distances.", "title": "" }, { "docid": "c04f67fd5cc7f2f95452046bb18c6cfa", "text": "Bob is a free signal processing and machine learning toolbox originally developed by the Biometrics group at Idiap Research Institute, Switzerland. The toolbox is designed to meet the needs of researchers by reducing development time and efficiently processing data. 
Firstly, Bob provides a researcher-friendly Python environment for rapid development. Secondly, efficient processing of large amounts of multimedia data is provided by fast C++ implementations of identified bottlenecks. The Python environment is integrated seamlessly with the C++ library, which ensures the library is easy to use and extensible. Thirdly, Bob supports reproducible research through its integrated experimental protocols for several databases. Finally, a strong emphasis is placed on code clarity, documentation, and thorough unit testing. Bob is thus an attractive resource for researchers due to this unique combination of ease of use, efficiency, extensibility and transparency. Bob is an open-source library and an ongoing community effort.", "title": "" }, { "docid": "2a33f7e91a81435c41fbbaf18ca4b588", "text": "To enable light fields of large environments to be captured, they would have to be sparse, i.e. with a relatively large distance between views. Such sparseness, however, causes subsequent processing to be much more difficult than would be the case with dense light fields. This includes segmentation. In this paper, we address the problem of meaningful segmentation of a sparse planar light field, leading to segments that are coherent between views. In addition, uniquely our method does not make the assumption that all surfaces in the environment are perfect Lambertian reflectors, which further broadens its applicability. Our fully automatic segmentation pipeline leverages scene structure, and does not require the user to navigate through the views to fix inconsistencies. The key idea is to combine coarse estimations given by an over-segmentation of the scene into super-rays, with detailed ray-based processing. We show the merit of our algorithm by means of a novel way to perform intrinsic light field decomposition, outperforming state-of-the-art methods.", "title": "" }, { "docid": "44ffac24ef4d30a8104a2603bb1cdcb1", "text": "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them “Networks on Convolutional feature maps” (NoCs). We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015.", "title": "" }, { "docid": "d53dde5a78d7e2635bafb3303dc479c6", "text": "Medicinal plants have historically proven their value as a source of molecules with therapeutic potential, and nowadays still represent an important pool for the identification of novel drug leads. 
In the past decades, pharmaceutical industry focused mainly on libraries of synthetic compounds as drug discovery source. They are comparably easy to produce and resupply, and demonstrate good compatibility with established high throughput screening (HTS) platforms. However, at the same time there has been a declining trend in the number of new drugs reaching the market, raising renewed scientific interest in drug discovery from natural sources, despite of its known challenges. In this survey, a brief outline of historical development is provided together with a comprehensive overview of used approaches and recent developments relevant to plant-derived natural product drug discovery. Associated challenges and major strengths of natural product-based drug discovery are critically discussed. A snapshot of the advanced plant-derived natural products that are currently in actively recruiting clinical trials is also presented. Importantly, the transition of a natural compound from a \"screening hit\" through a \"drug lead\" to a \"marketed drug\" is associated with increasingly challenging demands for compound amount, which often cannot be met by re-isolation from the respective plant sources. In this regard, existing alternatives for resupply are also discussed, including different biotechnology approaches and total organic synthesis. While the intrinsic complexity of natural product-based drug discovery necessitates highly integrated interdisciplinary approaches, the reviewed scientific developments, recent technological advances, and research trends clearly indicate that natural products will be among the most important sources of new drugs also in the future.", "title": "" }, { "docid": "15cb7023c175e2c92cd7b392205fb87f", "text": "Feedback has a strong influence on effective learning from computer-based instruction. Prior research on feedback in computer-based instruction has mainly focused on static feedback schedules that employ the same feedback schedule throughout an instructional session. This study examined transitional feedback schedules in computer-based multimedia instruction on procedural problem-solving in electrical circuit analysis. Specifically, we compared two transitional feedback schedules: the TFS-P schedule switched from initial feedback after each problem step to feedback after a complete problem at later learning states; the TFP-S schedule transitioned from feedback after a complete problem to feedback after each problem step. As control conditions, we also considered two static feedback schedules, namely providing feedback after each practice problem-solving step (SFS) or providing feedback after attempting a complete multi-step practice problem (SFP). Results indicate that the static stepwise (SFS) and transitional stepwise to problem (TFS-P) feedback produce higher problem solving near-transfer post-test performance than static problem (SFP) and transitional problem to step (TFP-S) feedback. Also, TFS-P resulted in higher ratings of program liking and feedback helpfulness than TFP-S. Overall, the study results indicate benefits of maintaining high feedback frequency (SFS) and reducing feedback frequency (TFS-P) compared to low feedback frequency (SFP) or increasing feedback frequency (TFP-S) as novice learners acquire engineering problem solving skills. © 2014 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "6401448defd825a7ffe540f55f8b242e", "text": "Body sensor networks (BSNs) carry heterogeneous traffic types having diverse QoS requirements, such as delay, reliability and throughput. In this paper, we design a priority-based traffic load adaptivemedium access control (MAC) protocol for BSNs, namely, PLA-MAC, which addresses the aforementioned requirements and maintains efficiency in power consumption. In PLA-MAC, we classify sensed data packets according to their QoS requirements and accordingly calculate their priorities. The transmission schedules of the packets are determined based on their priorities. Also, the superframe structure of the proposed protocol varies depending on the amount of traffic load and thereby ensures minimal power consumption. Our performance evaluation shows that the PLA-MAC achieves significant improvements over the state-of-the-art protocols.", "title": "" }, { "docid": "3aa35438449590f17163bda1c683c590", "text": "Traditional barcode recognition algorithm usually do not fit the cylindrical code but the one on flat surface. This paper proposes a low-cost approach to implement recognition of the curved QR codes printed on bottles or cans. Finder patterns are extracted from detecting module width proportion and corners of contours and an efficient direct least-square ellipse fitting method is employed to extract the elliptic edge and the boundary of code region. Then the code is reconstructed by direct mapping from the stereoscopic coordinates to the image plane using the 3D back-projection, thus the data of code could be restored. Compared with previous approaches, the proposed algorithm outperforms in not only the computation amount but also higher accuracy of the barcode recognition, whether in the flat or the cylindrical surface.", "title": "" }, { "docid": "80bfff01fbb1f6453b37d39b3b8b63f8", "text": "We consider regularized empirical risk minimization problems. In particular, we minimize the sum of a smooth empirical risk function and a nonsmooth regularization function. When the regularization function is block separable, we can solve the minimization problems in a randomized block coordinate descent (RBCD) manner. Existing RBCD methods usually decrease the objective value by exploiting the partial gradient of a randomly selected block of coordinates in each iteration. Thus they need all data to be accessible so that the partial gradient of the block gradient can be exactly obtained. However, such a \"batch\" setting may be computationally expensive in practice. In this paper, we propose a mini-batch randomized block coordinate descent (MRBCD) method, which estimates the partial gradient of the selected block based on a mini-batch of randomly sampled data in each iteration. We further accelerate the MRBCD method by exploiting the semi-stochastic optimization scheme, which effectively reduces the variance of the partial gradient estimators. Theoretically, we show that for strongly convex functions, the MRBCD method attains lower overall iteration complexity than existing RBCD methods. As an application, we further trim the MRBCD method to solve the regularized sparse learning problems. 
Our numerical experiments shows that the MRBCD method naturally exploits the sparsity structure and achieves better computational performance than existing methods.", "title": "" }, { "docid": "3f268b6048d534720cac533f04c2aa7e", "text": "This paper seeks a simple, cost effective and compact gate drive circuit for bi-directional switch of matrix converter. Principals of IGBT commutation and bi-directional switch commutation in matrix converters are reviewed. Three simple IGBT gate drive circuits are presented and simulated in PSpice and simulation results are approved by experiments in the end of this paper. Paper concludes with comparative numbers of gate drive costs.", "title": "" }, { "docid": "83aa2a89f8ecae6a84134a2736a5bb22", "text": "The activity of dozens of simultaneously recorded neurons can be used to control the movement of a robotic arm or a cursor on a computer screen. This motor neural prosthetic technology has spurred an increased interest in the algorithms by which motor intention can be inferred. The simplest of these algorithms is the population vector algorithm (PVA), where the activity of each cell is used to weight a vector pointing in that neuron's preferred direction. Off-line, it is possible to show that more complicated algorithms, such as the optimal linear estimator (OLE), can yield substantial improvements in the accuracy of reconstructed hand movements over the PVA. We call this open-loop performance. In contrast, this performance difference may not be present in closed-loop, on-line control. The obvious difference between open and closed-loop control is the ability to adapt to the specifics of the decoder in use at the time. In order to predict performance gains that an algorithm may yield in closed-loop control, it is necessary to build a model that captures aspects of this adaptation process. Here we present a framework for modeling the closed-loop performance of the PVA and the OLE. Using both simulations and experiments, we show that (1) the performance gain with certain decoders can be far less extreme than predicted by off-line results, (2) that subjects are able to compensate for certain types of bias in decoders, and (3) that care must be taken to ensure that estimation error does not degrade the performance of theoretically optimal decoders.", "title": "" }, { "docid": "d37316c9a63d506b7da4797de0e645e8", "text": "Isomap is one of widely-used low-dimensional embedding methods, where geodesic distances on a weighted graph are incorporated with the classical scaling (metric multidimensional scaling). In this paper we pay our attention to two critical issues that were not considered in Isomap, such as: (1) generalization property (projection property); (2) topological stability. Then we present a robust kernel Isomap method, armed with such two properties. We present a method which relates the Isomap to Mercer kernel machines, so that the generalization property naturally emerges, through kernel principal component analysis. For topological stability, we investigate the network flow in a graph, providing a method for eliminating critical outliers. The useful behavior of the robust kernel Isomap is confirmed through numerical experiments with several data sets.", "title": "" }, { "docid": "2caf8a90640a98f3690785b6dd641e08", "text": "This paper presents a simple, novel, yet very powerful approach for robust rotation-invariant texture classification based on random projection. 
The proposed sorted random projection maintains the strengths of random projection, in being computationally efficient and low-dimensional, with the addition of a straightforward sorting step to introduce rotation invariance. At the feature extraction stage, a small set of random measurements is extracted from sorted pixels or sorted pixel differences in local image patches. The rotation invariant random features are embedded into a bag-of-words model to perform texture classification, allowing us to achieve global rotation invariance. The proposed unconventional and novel random features are very robust, yet by leveraging the sparse nature of texture images, our approach outperforms traditional feature extraction methods which involve careful design and complex steps. We report extensive experiments comparing the proposed method to six state-of-the-art methods, RP, Patch, LBP, WMFS and the methods of Lazebnik et al. and Zhang et al., in texture classification on five databases: CUReT, Brodatz, UIUC, UMD and KTH-TIPS. Our approach leads to significant improvements in classification accuracy, producing consistently good results on each database, including what we believe to be the best reported results for Brodatz, UMD and KTH-TIPS. & 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9a7ef5c9f6ceca7a88d2351504404954", "text": "In this paper, we propose a 3D HMM (Three-dimensional Hidden Markov Models) approach to recognizing human facial expressions and associated emotions. Human emotion is usually classified by psychologists into six categories: Happiness, Sadness, Anger, Fear, Disgust and Surprise. Further, psychologists categorize facial movements based on the muscles that produce those movements using a Facial Action Coding System (FACS). We look beyond pure muscle movements and investigate facial features – brow, mouth, nose, eye height and facial shape – as a means of determining associated emotions. Histogram of Optical Flow is used as the descriptor for extracting and describing the key features, while training and testing are performed on 3D Hidden Markov Models. Experiments on datasets show our approach is promising and robust.", "title": "" }, { "docid": "734840224154ef88cdb196671fd3f3f8", "text": "Tiny face detection aims to find faces with high degrees of variability in scale, resolution and occlusion in cluttered scenes. Due to the very little information available on tiny faces, it is not sufficient to detect them merely based on the information presented inside the tiny bounding boxes or their context. In this paper, we propose to exploit the semantic similarity among all predicted targets in each image to boost current face detectors. To this end, we present a novel framework to model semantic similarity as pairwise constraints within the metric learning scheme, and then refine our predictions with the semantic similarity by utilizing the graph cut techniques. Experiments conducted on three widely-used benchmark datasets have demonstrated the improvement over the-state-of-the-arts gained by applying this idea.", "title": "" }, { "docid": "eb0e545f159ca8857db2582abcda6c8d", "text": "This is a survey of the field of Genetics-based Machine Learning (GBML): the application of evolutionary algorithms to machine learning. We assume readers are familiar with evolutionary algorithms and their application to optimisation problems, but not necessarily with machine learning. 
We briefly outline the scope of machine learning, introduce the more specific area of supervised learning, contrast it with optimisation and present arguments for and against GBML. Next we introduce a framework for GBML which includes ways of classifying GBML algorithms and a discussion of the interaction between learning and evolution. We then review the following areas with emphasis on their evolutionary aspects: GBML for sub-problems of learning, genetic programming, evolving ensembles, evolving neural networks, learning classifier systems, and genetic fuzzy systems.", "title": "" }, { "docid": "81273c11eb51349d0027e2ff2e54c080", "text": "The ground-volume separation of radar scattering plays an important role in the analysis of forested scenes. For this purpose, the data covariance matrix of multi-polarimetric (MP) multi-baseline (MB) SAR surveys can be represented thru a sum of two Kronecker products composed of the data covariance matrices and polarimetric signatures that correspond to the ground and canopy scattering mechanisms (SMs), respectively. The sum of Kronecker products (SKP) decomposition allows the use of different tomographic SAR focusing methods on the ground and canopy structural components separately, nevertheless, the main drawback of this technique relates to the rank-deficiencies of the resultant data covariance matrices, which restrict the usage of the adaptive beamforming techniques, requiring more advanced beamforming methods, such as compressed sensing (CS). This paper proposes a modification of the nonparametric iterative adaptive approach for amplitude and phase estimation (IAA-APES), which applied to MP-MB SAR data, serves as an alternative to the SKP-based techniques for ground-volume reconstruction, which main advantage relates precisely to the non-need of the SKP decomposition technique as a pre-processing step.", "title": "" }, { "docid": "93f8ba979ea679d6b9be6f949f8ee6ed", "text": "This paper presents a method for Simultaneous Localization and Mapping (SLAM), relying on a monocular camera as the only sensor, which is able to build outdoor, closed-loop maps much larger than previously achieved with such input. Our system, based on the Hierarchical Map approach [1], builds independent local maps in real-time using the EKF-SLAM technique and the inverse depth representation proposed in [2]. The main novelty in the local mapping process is the use of a data association technique that greatly improves its robustness in dynamic and complex environments. A new visual map matching algorithm stitches these maps together and is able to detect large loops automatically, taking into account the unobservability of scale intrinsic to pure monocular SLAM. The loop closing constraint is applied at the upper level of the Hierarchical Map in near real-time. We present experimental results demonstrating monocular SLAM as a human carries a camera over long walked trajectories in outdoor areas with people and other clutter, even in the more difficult case of forward-looking camera, and show the closing of loops of several hundred meters.", "title": "" } ]
scidocsrr
12a692f44aa690fbd30b3139779be3f2
FeatureSmith: Automatically Engineering Features for Malware Detection by Mining the Security Literature
[ { "docid": "71b5c8679979cccfe9cad229d4b7a952", "text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "title": "" }, { "docid": "55a6353fa46146d89c7acd65bee237b5", "text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93\\% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2\\%) and an acceptable false positive rate (5.15\\%) for a vetting purpose.", "title": "" }, { "docid": "bb0731a3bc69ddfe293fb1feb096f5f2", "text": "To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, which becomes increasingly hard to manage by humans. 
Efforts to automatically gather such information from unstructured text, however, are impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from the IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovative solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., \"download\") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., \"malware\", \"download\") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates remarkable performance: it generated 900K OpenIOC items with a precision of 95% and a coverage over 90%, which is well beyond what the state-of-the-art NLP technique and industry IOC tool can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.", "title": "" } ]
[ { "docid": "9dced2607496d2a3042b9afff96bd7dc", "text": "This paper presents a regularized patch-based representation for single sample per person face recognition. We represent each image by a collection of patches and seek their sparse representations under the gallery images patches and intra-class variance dictionaries at the same time. For the reconstruction coefficients of all the patches from the same image, by imposing a group sparsity constraint on the reconstruction coefficients corresponding to the patches from the gallery images, and by imposing a sparsity constraint on the reconstruction coefficients corresponding to the intra-class variance dictionaries, our formulation harvests the advantages of both patch-based image representation and global image representation, i.e. our method overcomes the side effect of those patches which are severely corrupted by the variances in face recognition, while enforcing those less discriminative patches to be constructed by the gallery patches from the right person. Moreover, instead of using the manually designed intra-class variance dictionaries, we propose to learn the intra-class variance dictionaries which not only greatly accelerate the prediction of the probe images but also improve the face recognition accuracy in the single sample per person scenario. Experimental results on the AR, Extended Yale B, CMU-PIE, and LFW datasets show that our method outperforms sparse coding related face recognition methods as well as some other specially designed single sample per person face representation methods, and achieves the best performance. These encouraging results demonstrate the effectiveness of regularized patch-based face representation for single sample per person face recognition.", "title": "" }, { "docid": "d16f126a07ad5fa41acfa9da7b180898", "text": "Matrix metalloproteinases (MMPs) are a family of endopeptidases that function to remodel tissue during both normal physiology and pathology. Research performed by the Medical University of South Carolina found an increased release of several MMP species during cardiopulmonary bypass (CPB), including the subtype MMP-9, but whether and to what degree the extracorporeal circulation circuit (ECC) induces the release of MMPs has yet to be determined. Human bank whole blood scheduled for discard was obtained and exposed to an ECC. The first set of studies (N = 8) was performed with a loop circuit using a standard arterial line filter. A leukoreduction filter was incorporated during the first 30 min of the pump run for the second set of trials; the leukoreduction filter was then bypassed and a standard arterial filter used for the remaining 60 minutes on pump (N = 8). Blood samples were drawn at four time points for analysis (baseline, 30, 60, and 90 min). Data were analyzed using repeated measures analysis of variance with between-subjects factors, and a p value of less than .1 was considered statistically significant. The MMP-9 level increased by 130.44% at 90 min on pump in the standard arterial filter group and decreased by 34.62% at 90 min on pump in the leukoreduction group. There was a significant difference between the baseline MMP-9 level and the MMP-9 concentrations at 30, 60, and 90 min for both groups (p = .0348); there was a significant difference in MMP-9 levels between the two filter groups (p = .0611). The present study found a significant increase in MMP-9 levels when blood was exposed to an ECC with a standard arterial filter. 
The use of a leukoreduction filter significantly reduced MMP-9 concentrations as compared to baseline levels in this study. Leukocyte depletion filtration may serve to benefit CPB patients by attenuating the inflammatory response and disrupting pathways that govern such mediators as the MMPs.", "title": "" }, { "docid": "5add362bec606515136b0842f885f5bf", "text": "We argue that the core problem facing peer-to-peer systems is locating documents in a decentralized network and propose Chord, a distributed lookup primitive. Chord provides an efficient method of locating documents while placing few constraints on the applications that use it. As proof that Chord’s functionality is useful in the development of peer-to-peer applications, we outline the implementation of a peer-to-peer file sharing system based on Chord.", "title": "" }, { "docid": "b25379a7a48ef2b6bcc2df8d84d7680b", "text": "Microblogging (Twitter or Facebook) has become a very popular communication tool among Internet users in recent years. Information is generated and managed through either computer or mobile devices by one person and is consumed by many other persons, with most of this user-generated content being textual information. As there are a lot of raw data of people posting real time messages about their opinions on a variety of topics in daily life, it is a worthwhile research endeavor to collect and analyze these data, which may be useful for users or managers to make informed decisions, for example. However this problem is challenging because a micro-blog post is usually very short and colloquial, and traditional opinion mining algorithms do not work well in such type of text. Therefore, in this paper, we propose a new system architecture that can automatically analyze the sentiments of these messages. We combine this system with manually annotated data from Twitter, one of the most popular microblogging platforms, for the task of sentiment analysis. In this system, machines can learn how to automatically extract the set of messages which contain opinions, filter out nonopinion messages and determine their sentiment directions (i.e. positive, negative). Experimental results verify the effectiveness of our system on sentiment analysis in real microblogging applications.", "title": "" }, { "docid": "558533fe6149adc6b506153e657b0ba2", "text": "Graphical modelling of various aspects of software and systems is a common part of software development. UML is the de-facto standard for various types of software models. To be able to research UML, academia needs to have a corpus of UML models. For building such a database, an automated system that has the ability to classify UML class diagram images would be very beneficial, since a large portion of UML class diagrams (UML CDs) is available as images on the Internet. In this study, we propose 23 image-features and investigate the use of these features for the purpose of classifying UML CD images. We analyse the performance of the features and assess their contribution based on their Information Gain Attribute Evaluation scores. We study specificity and sensitivity scores of six classification algorithms on a set of 1300 images. We found that 19 out of 23 introduced features can be considered as influential predictors for classifying UML CD images. 
Across the six algorithms, the prediction rate reaches nearly 96% correctness for UML CD images and 91% correctness for non-UML CD images.", "title": "" }, { "docid": "536e7bfef009c07075bb6b7c8a626c89", "text": "This paper proposes a method which combines the Sobel edge detection operator and soft-threshold wavelet de-noising to do edge detection on images which include White Gaussian noise. In recent years, many edge detection methods have been proposed. The commonly used methods which combine mean de-noising and the Sobel operator, or median filtering and the Sobel operator, cannot remove salt-and-pepper noise very well. In this paper, we first use soft-threshold wavelet de-noising to remove noise, then use the Sobel edge detection operator to do edge detection on the image. This method is mainly used on images which include White Gaussian noise. From the pictures obtained in the experiment, we can see very clearly that, compared to traditional edge detection methods, the method proposed in this paper has a more obvious effect on edge detection.", "title": "" }, { "docid": "92386d23413e6f951f76e7cdc0ee0aa3", "text": "This study covers a complete overview of the theoretical rationale of the application of robots, other instructional interfaces like CALL, MALL, m-learning, r-learning, different types of robots, their instructional roles, their educational activities, the related research, findings, and challenges of robot-assisted language learning. Since the robotic revolution, many investigators in different countries have attempted to utilize robots to enhance education. As many people in the world have personal computers (PCs), in the following years, Personal Robots (PR) may become the next tool for everyone's life. Robots not only have the attributes of CALL/MALL, but are also capable of independent movements, voice/visual recognition and environmental interactions, non-verbal communication, collaboration with native speakers, diagnosing pronunciation, video conferencing with native speakers, native speaker tutoring, adaptability, sensing, repeatability, intelligence, mobility and human appearance. Robot-aided learning (r-learning) services can be described as interactive and instructional activities which can be performed between robots and learners in both virtual and real worlds.", "title": "" }, { "docid": "768e9846a82567a5f29f653f1a86f0d1", "text": "In SDN, forwarding rules are frequently updated to adapt to network dynamics. During the procedure, path consistency needs to be preserved; otherwise, in-flight packets might meet with forwarding errors such as loops and black holes. Although a large number of suggestions have been proposed, they either take a long duration or have high rule-space overheads, and thus fail to be practical for large-scale, highly dynamic networks. In this paper, we propose FLUS, a Segment Routing (SR) based mechanism, to achieve fast and lightweight path updates. Basically, when a route needs a change, FLUS instantly employs SR to construct its desired new path by concatenating some fragments of the already existing paths. After the actual paths are established, FLUS then shifts incoming packets to them and disables the transitional ones. Such a design helps packets enjoy their new paths immediately without introducing rule-space overheads. This paper presents FLUS's segment allocation, path construction, and the corresponding optimal algorithms in detail. 
Our evaluation based on real and synthesized networks shows that FLUS can handle up to 92-100% of updates using SR in real time and saves 72-88% of the rule overhead compared to prior methods.", "title": "" }, { "docid": "906e7a5c855597356858e326bd6023db", "text": "This paper proposes an online transfer framework to capture the interaction among agents and shows that current transfer learning in reinforcement learning is a special case of online transfer. Furthermore, this paper re-characterizes existing agents-teaching-agents methods as online transfer and analyzes one such teaching method in three ways. First, the convergence of Q-learning and Sarsa with tabular representation with a finite budget is proven. Second, the convergence of Q-learning and Sarsa with linear function approximation is established. Third, we show that the asymptotic performance cannot be hurt through teaching. Additionally, all theoretical results are empirically validated.", "title": "" }, { "docid": "dbbd9f6440ee0c137ee0fb6a4aadba38", "text": "In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority of true heavy hitters in practical settings.", "title": "" }, { "docid": "3d52248b140f516b82abc452336fa40c", "text": "Requirements engineering is a creative process in which stakeholders and designers work together to create ideas for new systems that are eventually expressed as requirements. This paper describes RESCUE, a scenario-driven requirements engineering process that includes workshops that integrate creativity techniques with different types of use case and system context modelling. It reports a case study in which RESCUE creativity workshops were used to discover stakeholder and system requirements for DMAN, a future air traffic management system for managing departures from major European airports. The workshop was successful in that it provided new and important outputs for subsequent requirements processes. 
The paper describes the workshop structure and wider RESCUE process, important results and key lessons learned.", "title": "" }, { "docid": "e66f2052a2e9a7e870f8c1b4f2bfb56d", "text": "We present two new algorithms for solving this problem: the fast discovery of association rules, based on our ideas in [33, 35]. Experiments compare the new algorithms with previous approaches, showing gains of over an order of magnitude.", "title": "" }, { "docid": "5218f1ddf65b9bc1db335bb98d7e71b4", "text": "The popular biometric used to authenticate a person is the fingerprint, which is unique and permanent throughout a person's life. Minutia matching is widely used for fingerprint recognition, and minutiae can be classified as ridge endings and ridge bifurcations. In this paper we present Fingerprint Recognition using the Minutia Score Matching method (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve the quality of the image, and the minutiae are extracted from the thinned image. The false matching ratio is better compared to the existing algorithm. Key-words: Fingerprint Recognition, Binarization, Block Filter Method, Matching Score and Minutia.", "title": "" }, { "docid": "ef9650746ac9ab803b2a3bbdd5493fee", "text": "This paper addresses the problem of establishing correspondences between two sets of visual features using higher order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.", "title": "" }, { "docid": "8d4bdc3e5e84a63a76e6a226a9f0e558", "text": "HTTP cookies are the de facto mechanism for session authentication in Web applications. However, their inherent security weaknesses allow attacks against the integrity of Web sessions. HTTPS is often recommended to protect cookies, but deploying full HTTPS support can be challenging due to performance and financial concerns, especially for highly distributed applications. Moreover, cookies can be exposed in a variety of ways even when HTTPS is enabled. In this article, we propose one-time cookies (OTC), a more robust alternative for session authentication. OTC prevents attacks such as session hijacking by signing each user request with a session secret securely stored in the browser. Unlike other proposed solutions, OTC does not require expensive state synchronization in the Web application, making it easily deployable in highly distributed systems. We implemented OTC as a plug-in for the popular WordPress platform and as an extension for Firefox and Firefox for mobile browsers. Our extensive experimental analysis shows that OTC introduces a latency of less than 6 ms when compared to cookies—a negligible overhead for most Web applications. 
Moreover, we show that OTC can be combined with HTTPS to effectively add another layer of security to Web applications. In so doing, we demonstrate that one-time cookies can significantly improve the security of Web applications with minimal impact on performance and scalability.", "title": "" }, { "docid": "70b12ea4c5a5b3140496d1e652605592", "text": "The Internet is speeding up and modifying the manner in which daily tasks such as online shopping, paying utility bills, watching new movies, communicating, etc., are accomplished. As an example, in older shopping methods, products were mass produced for a single market and audience, but that approach is no longer viable. Markets based on long product and development cycles can no longer survive. To stay competitive, markets need to provide different products and services to different customers with different needs. The shift to online shopping has made it incumbent on producers and retailers to customize for customers' needs while providing more options than were possible before. This, however, poses a problem for customers who must now analyze every offering in order to determine what they actually need and will benefit from. To aid customers in this scenario, we discuss common recommender system techniques that have been employed and their associated trade-offs.", "title": "" }, { "docid": "d6a585443f5829b556a1064b9b92113a", "text": "The water quality monitoring system is designed for the needs of the environmental protection department regarding the water quality requirements of a particular area. The system is based on a Wireless Sensor Network (WSN). It consists of a Wireless Water Quality Monitoring Network and a Remote Data Center. The hardware platform uses the wireless microprocessor CC2430 as the core of the node. The sensor network is built in accordance with the Zigbee wireless transmission protocol. The WSN samples the water quality and sends the data to the Internet with the help of the GPRS DTU, which has a built-in TCP/IP protocol. Through the Internet, the Remote Data Center gets the real-time water quality data, and then analyzes, processes and records the data. The environmental protection department can provide real-time guidance to those industries which depend on regional water quality conditions, like industrial, plant and aquaculture. Most importantly, the work can be more efficient and less costly.", "title": "" }, { "docid": "f530ebff8396da2345537363449b99c9", "text": "In this research, a fast, accurate, and stable system of lung cancer detection based on novel deep learning techniques is proposed. A convolutional neural network (CNN) structure akin to that of GoogLeNet was built using a transfer learning approach. In contrast to previous studies, Median Intensity Projection (MIP) was employed to include multi-view features of three-dimensional computed tomography (CT) scans. The system was evaluated on the LIDC-IDRI public dataset of lung nodule images and 100-fold data augmentation was performed to ensure training efficiency. The trained system produced 81% accuracy, 84% sensitivity, and 78% specificity after 300 epochs, better than other available programs. In addition, a t-based confidence interval for the population mean of the validation accuracies verified that the proposed system would produce consistent results for multiple trials. Subsequently, a controlled variable experiment was performed to elucidate the net effects of two core factors of the system - fine-tuned GoogLeNet and MIPs - on its detection accuracy. 
Four treatment groups were set by training and testing fine-tuned GoogLeNet and Alexnet on MIPs and common 2D CT scans, respectively. It was noteworthy that MIPs improved the network's accuracy by 12.3%, and GoogLeNet outperformed Alexnet by 2%. Lastly, remote access to the GPU-based system was enabled through a web server, which allows long-distance management of the system and its future transition into a practical tool.", "title": "" }, { "docid": "79a9208d16541c7ed4fbc9996a82ef6a", "text": "Query processing in data integration occurs over network-bound, autonomous data sources. This requires extensions to traditional optimization and execution techniques for three reasons: there is an absence of quality statistics about the data, data transfer rates are unpredictable and bursty, and slow or unavailable data sources can often be replaced by overlapping or mirrored sources. This paper presents the Tukwila data integration system, designed to support adaptivity at its core using a two-pronged approach. Interleaved planning and execution with partial optimization allows Tukwila to quickly recover from decisions based on inaccurate estimates. During execution, Tukwila uses adaptive query operators such as the double pipelined hash join, which produces answers quickly, and the dynamic collector, which robustly and efficiently computes unions across overlapping data sources. We demonstrate that the Tukwila architecture extends previous innovations in adaptive execution (such as query scrambling, mid-execution re-optimization, and choose nodes), and we present experimental evidence that our techniques result in behavior desirable for a data integration system.", "title": "" }, { "docid": "2364fc795ff8e449a557eda4b498b42d", "text": "With the increasing utilization and popularity of the cloud infrastructure, more and more data are moved to the cloud storage systems. This makes the availability of cloud storage services critically important, particularly given the fact that outages of cloud storage services have indeed happened from time to time. Thus, solely depending on a single cloud storage provider for storage services can risk violating the service-level agreement (SLA) due to the weakening of service availability. This has led to the notion of Cloud-of-Clouds, where data redundancy is introduced to distribute data among multiple independent cloud storage providers, to address the problem. The key in the effectiveness of the Cloud-of-Clouds approaches lies in how the data redundancy is incorporated and distributed among the clouds. However, the existing Cloud-of-Clouds approaches utilize either replication or erasure codes to redundantly distribute data across multiple clouds, thus incurring either high space or high performance overheads. In this paper, we propose a hybrid redundant data distribution approach, called HyRD, to improve the cloud storage availability in Cloud-of-Clouds by exploiting the workload characteristics and the diversity of cloud providers. In HyRD, large files are distributed in multiple cost-efficient cloud storage providers with erasure-coded data redundancy while small files and file system metadata are replicated on multiple high-performance cloud storage providers. The experiments conducted on our lightweight prototype implementation of HyRD show that HyRD improves the cost efficiency by 33.4 and 20.4 percent, and reduces the access latency by 58.7 and 34.8 percent than the DuraCloud and RACS schemes, respectively.", "title": "" } ]
scidocsrr
f57a2144408a98510c4c87fec431762f
A Fast Parallel Maximum Clique Algorithm for Large Sparse Graphs and Temporal Strong Components
[ { "docid": "947ffeb4fff1ca4ee826d71d4add399e", "text": "Description bttroductian. A maximal complete subgraph (clique) is a complete subgraph that is not contained in any other complete subgraph. A recent paper [1] describes a number of techniques to find maximal complete subgraphs of a given undirected graph. In this paper, we present two backtracking algorithms, using a branchand-bound technique [4] to cut off branches that cannot lead to a clique. The first version is a straightforward implementation of the basic algorithm. It is mainly presented to illustrate the method used. This version generates cliques in alphabetic (lexicographic) order. The second version is derived from the first and generates cliques in a rather unpredictable order in an attempt to minimize the number of branches to be traversed. This version tends to produce the larger cliques first and to generate sequentially cliques having a large common intersection. The detailed algorithm for version 2 is presented here. Description o f the algorithm--Version 1. Three sets play an important role in the algorithm. (1) The set compsub is the set to be extended by a new point or shrunk by one point on traveling along a branch of the backtracking tree. The points that are eligible to extend compsub, i.e. that are connected to all points in compsub, are collected recursively in the remaining two sets. (2) The set candidates is the set of all points that will in due time serve as an extension to the present configuration of compsub. (3) The set not is the set of all points that have at an earlier stage already served as an extension of the present configuration of compsub and are now explicitly excluded. The reason for maintaining this set trot will soon be made clear. The core of the algorithm consists of a recursively defined extension operator that will be applied to the three sets Just described. It has the duty to generate all extensions of the given configuration of compsub that it can make with the given set of candidates and that do not contain any of the points in not. To put it differently: all extensions of compsub containing any point in not have already been generated. The basic mechanism now consists of the following five steps:", "title": "" }, { "docid": "34c343413fc748c1fc5e07fb40e3e97d", "text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.", "title": "" } ]
[ { "docid": "fa7d10c25602bd71ce1f46f1bb0b3f7a", "text": "Plastic marine pollution is a major environmental concern, yet a quantitative description of the scope of this problem in the open ocean is lacking. Here, we present a time series of plastic content at the surface of the western North Atlantic Ocean and Caribbean Sea from 1986 to 2008. More than 60% of 6136 surface plankton net tows collected buoyant plastic pieces, typically millimeters in size. The highest concentration of plastic debris was observed in subtropical latitudes and associated with the observed large-scale convergence in surface currents predicted by Ekman dynamics. Despite a rapid increase in plastic production and disposal during this time period, no trend in plastic concentration was observed in the region of highest accumulation.", "title": "" }, { "docid": "ab10546d62e19df347f9a09e8855d622", "text": "Medical image segmentation plays an important role in treatment planning, identifying tumors, tumor volume, patient follow up and computer guided surgery. There are various techniques for medical image segmentation. This paper presents a image segmentation technique for locating brain tumor(AstrocytomaA type of brain tumor).Proposed work has been divided in two phases-In the first phase MRI image database(Astrocytoma grade I to IV) is collected and then preprocessing is done to improve quality of image. Second-phase includes three steps-Feature extraction, Feature selection and Image segmentation. For feature extraction proposed work uses GLCM (Grey Level co-occurrence matrix).To improve accuracy only a subset of feature is selected using hybrid Genetic algorithm(Genetic Algorithm+fuzzy rough set) and based on these features fuzzy rules and membership functions are defined for segmenting brain tumor from MRI images of .ANFIS is a adaptive network which combines benefits of both fuzzy and neural network .Finally, a comparative analysis is performed between ANFIS, neural network, Fuzzy ,FCM,K-NN, DWT+SOM,DWT+PCA+KN, Texture combined +ANN, Texture Combined+ SVM in terms of sensitivity ,specificity ,accuracy.", "title": "" }, { "docid": "ab1642d0e42f1a2e2d0c56c6740903b9", "text": "The Human Gene Mutation Database (HGMD®) is a comprehensive collection of germline mutations in nuclear genes that underlie, or are associated with, human inherited disease. By June 2013, the database contained over 141,000 different lesions detected in over 5,700 different genes, with new mutation entries currently accumulating at a rate exceeding 10,000 per annum. HGMD was originally established in 1996 for the scientific study of mutational mechanisms in human genes. However, it has since acquired a much broader utility as a central unified disease-oriented mutation repository utilized by human molecular geneticists, genome scientists, molecular biologists, clinicians and genetic counsellors as well as by those specializing in biopharmaceuticals, bioinformatics and personalized genomics. 
The public version of HGMD ( http://www.hgmd.org ) is freely available to registered users from academic institutions/non-profit organizations whilst the subscription version (HGMD Professional) is available to academic, clinical and commercial users under license via BIOBASE GmbH.", "title": "" }, { "docid": "9e6681268531cb66761dacd9730e7aa0", "text": "Previous work using an atomic force microscope in nanoindenter mode indicated that the outer, 10- to 15-μm thick, keratinised layer of tree frog toe pads has a modulus of elasticity equivalent to silicone rubber (5–15 MPa) (Scholz et al. 2009), but gave no information on the physical properties of deeper structures. In this study, micro-indentation is used to measure the stiffness of whole toe pads of the tree frog, Litoria caerulea. We show here that tree frog toe pads are amongst the softest of biological structures (effective elastic modulus 4–25 kPa), and that they exhibit a gradient of stiffness, being stiffest on the outside. This stiffness gradient results from the presence of a dense network of capillaries lying beneath the pad epidermis, which probably has a shock absorbing function. Additionally, we compare the physical properties (elastic modulus, work of adhesion, pull-off force) of the toe pads of immature and adult frogs.", "title": "" }, { "docid": "769a263c08934e330a87c1af15b6af21", "text": "Realization of brain-like computer has always been human's ultimate dream. Today, the possibility of having this dream come true has been significantly boosted due to the advent of several emerging non-volatile memory devices. Within these innovative technologies, phase-change memory device has been commonly regarded as the most promising candidate to imitate the biological brain, owing to its excellent scalability, fast switching speed, and low energy consumption. In this context, a detailed review concerning the physical principles of the neuromorphic circuit using phase-change materials as well as a comprehensive introduction of the currently available phase-change neuromorphic prototypes becomes imperative for scientists to continuously progress the technology of artificial neural networks. In this paper, we first present the biological mechanism of human brain, followed by a brief discussion about physical properties of phase-change materials that recently receive a widespread application on non-volatile memory field. We then survey recent research on different types of neuromorphic circuits using phase-change materials in terms of their respective geometrical architecture and physical schemes to reproduce the biological events of human brain, in particular for spike-time-dependent plasticity. The relevant virtues and limitations of these devices are also evaluated. Finally, the future prospect of the neuromorphic circuit based on phase-change technologies is envisioned.", "title": "" }, { "docid": "07c8719c4b8be9e02d14cd24c6e4e05c", "text": "Sentiment and emotional analysis on online collaborative software development forums can be very useful to gain important insights into the behaviors and personalities of the developers. Such information can later on be used to increase productivity of developers by making recommendations on how to behave best in order to get a task accomplished. However, due to the highly technical nature of the data present in online collaborative software development forums, mining sentiments and emotions becomes a very challenging task. 
In this work we present a new approach for mining sentiments and emotions from software development datasets using Interaction Process Analysis (IPA) labels and machine learning. We also apply distance metric learning as a preprocessing step before training a feed-forward neural network and report the precision, recall, F1 and accuracy.", "title": "" }, { "docid": "7d78e87112f3a29f228bcf5a5f64b5d9", "text": "The register transfer level (RTL) synthesis model, which simplified the design of clocked circuits, has allowed design automation and VLSI to progress for more than a decade. Shrinking technology and the progressive increase in clock frequency are bringing the clock to its physical limits. Asynchronous circuits, which are believed to replace globally clocked designs in the future, remain out of the competition due to the design complexity of some automated approaches and the poor results of other techniques. Successful asynchronous designs are known, but they are primarily custom. This work sketches an automated approach for automatically re-implementing conventional RTL designs as fine-grain pipelined asynchronous quasi-delay-insensitive (QDI) circuits and presents a framework for automated synthesis of such implementations from high-level behavior specifications. Experimental results are presented using our new dynamic asynchronous library.", "title": "" }, { "docid": "aa5daa83656a2265dc27ec6ee5e3c1cb", "text": "Firms traditionally rely on interviews and focus groups to identify customer needs for marketing strategy and product development. User-generated content (UGC) is a promising alternative source for identifying customer needs. However, established methods are neither efficient nor effective for large UGC corpora because much content is non-informative or repetitive. We propose a machine-learning approach to facilitate qualitative analysis by selecting content for efficient review. We use a convolutional neural network to filter out non-informative content and cluster dense sentence embeddings to avoid sampling repetitive content. We further address two key questions: Are UGC-based customer needs comparable to interview-based customer needs? Do the machine-learning methods improve customer-need identification? These comparisons are enabled by a custom dataset of customer needs for oral care products identified by professional analysts using industry-standard experiential interviews. The analysts also coded 12,000 UGC sentences to identify which previously identified customer needs and/or new customer needs were articulated in each sentence. We show that (1) UGC is at least as valuable as a source of customer needs for product development, and likely more valuable, than conventional methods, and (2) machine-learning methods improve the efficiency of identifying customer needs from UGC (unique customer needs per unit of professional services cost).", "title": "" }, { "docid": "749fd082229c1095f774f1a03e2083cd", "text": "Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. 
The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO.1", "title": "" }, { "docid": "77ec1741e7a0876a0fe9fb85dd57f552", "text": "Despite growing recognition that attention fluctuates from moment-to-moment during sustained performance, prevailing analysis strategies involve averaging data across multiple trials or time points, treating these fluctuations as noise. Here, using alternative approaches, we clarify the relationship between ongoing brain activity and performance fluctuations during sustained attention. We introduce a novel task (the gradual onset continuous performance task), along with innovative analysis procedures that probe the relationships between reaction time (RT) variability, attention lapses, and intrinsic brain activity. Our results highlight 2 attentional states-a stable, less error-prone state (\"in the zone\"), characterized by higher default mode network (DMN) activity but during which subjects are at risk of erring if DMN activity rises beyond intermediate levels, and a more effortful mode of processing (\"out of the zone\"), that is less optimal for sustained performance and relies on activity in dorsal attention network (DAN) regions. These findings motivate a new view of DMN and DAN functioning capable of integrating seemingly disparate reports of their role in goal-directed behavior. Further, they hold potential to reconcile conflicting theories of sustained attention, and represent an important step forward in linking intrinsic brain activity to behavioral phenomena.", "title": "" }, { "docid": "e8796509d6e4db6f635ce2ccbc7b79ea", "text": "Humanoid robotics is a promising field because the strong human preference to interact with anthropomorphic interfaces. Despite this, humanoid robots are far from reaching main stream adoption and the features available in such robots seem to lag that of the latest smartphones. A fragmented robot ecosystem and low incentives to developers do not help to foster the creation of Robot-Apps either. In contrast, smartphones enjoy high adoption rates and a vibrant app ecosystem (4M apps published). Given this, it seems logical to apply the mobile SW and HW development model to humanoid robots. One way is to use a smartphone to power the robot. Smartphones have been embedded in toys and drones before. However, they have never been used as the main compute unit in a humanoid embodiment. Here, we introduce a novel robot architecture based on smartphones that demonstrates x3 cost reduction and that is compatible with iOS/Android.", "title": "" }, { "docid": "e3a4b77f05ed29b0643a1d699d747415", "text": "This letter develops an optical pixel sensor that is based on hydrogenated amorphous silicon thin-film transistors. Exploiting the photo sensitivity of the photo TFTs and combining different color filters, the proposed sensor can sense an optical input signal of a specified color under high ambient illumination conditions. 
Measurements indicate that the proposed pixel sensor effectively reacts to the optical input signal under light intensities from 873 to 12,910 lux, proving that the sensor is highly reliable under strong ambient illumination.", "title": "" }, { "docid": "8108f8c3d53f44ca3824f4601aacdce1", "text": "This paper presents a robust multi-class multi-object tracking (MCMOT) method formulated within a Bayesian filtering framework. Multi-object tracking for unlimited object classes is conducted by combining detection responses and a changing point detection (CPD) algorithm. The CPD model is used to observe abrupt or abnormal changes due to drift and occlusion, based on the spatiotemporal characteristics of track states. An ensemble of a convolutional neural network (CNN) based object detector and a Lucas-Kanade Tracker (KLT) based motion detector is employed to compute the likelihoods of foreground regions as the detection responses of different object classes. Extensive experiments are performed using recently introduced challenging benchmark videos: the ImageNet VID and MOT benchmark datasets. The comparison to state-of-the-art video tracking techniques shows very encouraging results.", "title": "" }, { "docid": "086a70e10e5c00ff771698728b0d01a4", "text": "We report an autopsy case of a 42-year-old woman who, when discovered, had been dead in her apartment for approximately 1 week under circumstances involving treachery, assault and possible drug overdose. This case is unique as it involved two autopsies of the deceased by two different medical examiners who reached opposing conclusions. The first autopsy was performed about 10 days after death. The second autopsy was performed after an exhumation approximately 2 years after burial. Evidence collected at the crime scene included blood samples from which DNA was extracted and analysed, fingerprints and clothing containing dried body fluids. The conclusion of the first autopsy was accidental death due to cocaine toxicity; the conclusion of the second autopsy was death due to homicide given the totality of evidence. Suspects 1 and 2 were linked to the death of the victim by physical evidence and suspect 3 was linked by testimony. Suspect 1 received life in prison, and suspects 2 and 3 received 45 and 20 years in prison, respectively. This case indicates that cocaine toxicity is difficult to determine in putrefied tissue and that exhumations can be important in collecting forensic information. It further reveals that the combined findings of medical examiners, even though contradictory, are useful in determining the circumstances leading to death in criminal justice. Thus, this report demonstrates that such criminal circumstances require comparative forensic review and, in such cases, scientific conclusions can be difficult.", "title": "" }, { "docid": "b1d00c44127956ab703204490de0acd7", "text": "The key issue of few-shot learning is learning to generalize. This paper proposes a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the classification loss function with a large margin distance loss function for training. 
Extensive experiments on two state-of-the-art few-shot learning methods, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.", "title": "" }, { "docid": "ca4752a75f440dda1255a71764258a51", "text": "Neurofeedback is a method for using neural activity displayed on a computer to regulate one's own brain function and has been shown to be a promising technique for training individuals to interact with brain-machine interface applications such as neuroprosthetic limbs. The goal of this study was to develop a user-friendly functional near-infrared spectroscopy (fNIRS)-based neurofeedback system to upregulate neural activity associated with motor imagery, which is frequently used in neuroprosthetic applications. We hypothesized that fNIRS neurofeedback would enhance activity in motor cortex during a motor imagery task. Twenty-two participants performed active and imaginary right-handed squeezing movements using an elastic ball while wearing a 98-channel fNIRS device. Neurofeedback traces representing localized cortical hemodynamic responses were graphically presented to participants in real time. Participants were instructed to observe this graphical representation and use the information to increase signal amplitude. Neural activity was compared during active and imaginary squeezing with and without neurofeedback. Active squeezing resulted in activity localized to the left premotor and supplementary motor cortex, and activity in the motor cortex was found to be modulated by neurofeedback. Activity in the motor cortex was also shown in the imaginary squeezing condition only in the presence of neurofeedback. These findings demonstrate that real-time fNIRS neurofeedback is a viable platform for brain-machine interface applications.", "title": "" }, { "docid": "3712b06059d60211843a92a507075f86", "text": "Automatically filtering relevant information about a real-world incident from Social Web streams and making the information accessible and findable in the given context of the incident are non-trivial scientific challenges. In this paper, we engineer and evaluate solutions that analyze the semantics of Social Web data streams to solve these challenges. We introduce Twitcident, a framework and Web-based system for filtering, searching and analyzing information about real-world incidents or crises. Given an incident, our framework automatically starts tracking and filtering information that is relevant for the incident from Social Web streams and Twitter particularly. It enriches the semantics of streamed messages to profile incidents and to continuously improve and adapt the information filtering to the current temporal context. Faceted search and analytical tools allow people and emergency services to retrieve particular information fragments and overview and analyze the current situation as reported on the Social Web.\n We put our Twitcident system into practice by connecting it to emergency broadcasting services in the Netherlands to allow for the retrieval of relevant information from Twitter streams for any incident that is reported by those services. We conduct large-scale experiments in which we evaluate (i) strategies for filtering relevant information for a given incident and (ii) search strategies for finding particular information pieces. 
Our results prove that the semantic enrichment offered by our framework leads to major and significant improvements of both the filtering and the search performance. A demonstration is available via: http://wis.ewi.tudelft.nl/twitcident/", "title": "" }, { "docid": "8b9503b4251db557ef577e4407577b1f", "text": "In this paper we present a successful application of logic programming for e-tourism: the iTravel system. The system exploits two technologies that are based on the state-of-the-art computational logic system DLV: (i) a system for ontology representation and reasoning, called OntoDLV; and, (ii) HıLεX a semantic information-extraction tool. The core of iTravel is an ontology which models the domain of tourism offers. The ontology is automatically populated by extracting the information contained in the tourism leaflets produced by tour operators. A set of specifically devised logic programs is used to reason on the information contained in the ontology for selecting the holiday packages that best fit the customer needs. An intuitive web-based user interface eases the task of interacting with the system for both the customers and the operators of a travel agency.", "title": "" }, { "docid": "6dd9ede81468fca04991a9516baba85a", "text": "In this paper, we propose an efficient pseudonymous authentication scheme with strong privacy preservation (PASS), for vehicular communications. Unlike traditional pseudonymous authentication schemes, the size of the certificate revocation list (CRL) in PASS is linear with the number of revoked vehicles and unrelated to how many pseudonymous certificates are held by the revoked vehicles. PASS supports the roadside unit (RSU)-aided distributed certificate service that allows the vehicles to update certificates on road, but the service overhead is almost unrelated to the number of updated certificates. Furthermore, PASS provides strong privacy preservation to the vehicles so that the adversaries cannot trace any vehicle, even though all RSUs have been compromised. Extensive simulations demonstrate that PASS outperforms previously reported schemes in terms of the revocation cost and the certificate updating overhead.", "title": "" }, { "docid": "61c4146ac8b55167746d3f2b9c8b64e8", "text": "In a variety of Network-based Intrusion Detection System (NIDS) applications, one desires to detect groups of unknown attack (e.g., botnet) packet-flows, with a group potentially manifesting its atypicality (relative to a known reference “normal”/null model) on a low-dimensional subset of the full measured set of features used by the IDS. What makes this anomaly detection problem quite challenging is that it is a priori unknown which (possibly sparse) subset of features jointly characterizes a particular application, especially one that has not been seen before, which thus represents an unknown behavioral class (zero-day threat). Moreover, nowadays botnets have become evasive, evolving their behavior to avoid signature-based IDSes. In this work, we apply a novel active learning (AL) framework for botnet detection, facilitating detection of unknown botnets (assuming no ground truth examples of same). We propose a new anomaly-based feature set that captures the informative features and exploits the sequence of packet directions in a given flow. Experiments on real world network traffic data, including several common Zeus botnet instances, demonstrate the advantage of our proposed features and AL system.", "title": "" } ]
scidocsrr
8563e6999d21f9729a6755d27c76f3d1
Predicting Thorax Diseases with NIH Chest X-Rays
[ { "docid": "23d1534a9daee5eeefaa1fdc8a5db0aa", "text": "Obtaining a protein’s 3D structure is crucial to the understanding of its functions and interactions with other proteins. It is critical to accelerate the protein crystallization process with improved accuracy for understanding cancer and designing drugs. Systematic high-throughput approaches in protein crystallization have been widely applied, generating a large number of protein crystallization-trial images. Therefore, an efficient and effective automatic analysis for these images is a top priority. In this paper, we present a novel system, CrystalNet, for automatically labeling outcomes of protein crystallization-trial images. CrystalNet is a deep convolutional neural network that automatically extracts features from X-ray protein crystallization images for classification. We show that (1) CrystalNet can provide real-time labels for crystallization images effectively, requiring approximately 2 seconds to provide labels for all 1536 images of crystallization microassay on each plate; (2) compared with the stateof-the-art classification systems in crystallization image analysis, our technique demonstrates an improvement of 8% in accuracy, and achieve 90.8% accuracy in classification. As a part of the high-throughput pipeline which generates millions of images a year, CrystalNet can lead to a substantial reduction of labor-intensive screening.", "title": "" } ]
[ { "docid": "bdd86f5b88b47b62356a14234467dd9a", "text": "Multi-sampled imaging is a general framework for using pixels on an image detector to simultaneously sample multiple dimensions of imaging (space, time, spectrum, brightness, polarization, etc.). The mosaic of red, green and blue spectral filters found in most solid-state color cameras is one example of multi-sampled imaging. We briefly describe how multi-sampling can be used to explore other dimensions of imaging. Once such an image is captured, smooth reconstructions along the individual dimensions can be obtained using standard interpolation algorithms. Typically, this results in a substantial reduction of resolution (and hence image quality). One can extract significantly greater resolution in each dimension by noting that the light fields associated with real scenes have enormous redundancies within them, causing different dimensions to be highly correlated. Hence, multi-sampled images can be better interpolated using local structural models that are learned off- line from a diverse set of training images. The specific type of structural models we use are based on polynomial functions of measured image intensities. They are very effective as well as computationally efficient. We demonstrate the benefits of structural interpolation using three specific applications. These are (a) traditional color imaging with a mosaic of color filters, (b) high dynamic range monochrome imaging using a mosaic of exposure filters, and (c) high dynamic range color imaging using a mosaic of overlapping color and exposure filters.", "title": "" }, { "docid": "ab148ea69cf884b2653823b350ed5cfc", "text": "The application of information retrieval techniques to search tasks in software engineering is made difficult by the lexical gap between search queries, usually expressed in natural language (e.g. English), and retrieved documents, usually expressed in code (e.g. programming languages). This is often the case in bug and feature location, community question answering, or more generally the communication between technical personnel and non-technical stake holders in a software project. In this paper, we propose bridging the lexical gap by projecting natural language statements and code snippets as meaning vectors in a shared representation space. In the proposed architecture, word embeddings are first trained on API documents, tutorials, and reference documents, and then aggregated in order to estimate semantic similarities between documents. Empirical evaluations show that the learned vector space embeddings lead to improvements in a previously explored bug localization task and a newly defined task of linking API documents to computer programming questions.", "title": "" }, { "docid": "4d18ea8816e9e4abf428b3f413c82f9e", "text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. 
The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.", "title": "" }, { "docid": "67379c945de57c3662a2cd96cd67c15b", "text": "Introduction This paper’s purpose is to illustrate the relationship of profitability to intermediate, customer-related outcomes that managers can influence directly. It is predominantly a general management discussion, consistent with the Nordic School’s view that services are highly interdisciplinary, requiring a “service management” approach (see Grönroos, 1984, 1991). Its findings support the theory that customer satisfaction is related to customer loyalty, which in turn is related to profitability (Heskett et al., 1994, and discussed in Storbacka et al., 1994). While this theory has been advocated for service firms as a class, this paper presents an empirical analysis of one retail bank, limiting the findings’ generalizability. The service profit chain (Heskett et al., 1994) hypothesizes that:", "title": "" }, { "docid": "f103277dbbcab26d8e5c176520666db9", "text": "Air pollution in urban environments has risen steadily in the last several decades. Such cities as Beijing and Delhi have experienced rises to dangerous levels for citizens. As a growing and urgent public health concern, cities and environmental agencies have been exploring methods to forecast future air pollution, hoping to enact policies and provide incentives and services to benefit their citizenry. Much research is being conducted in environmental science to generate deterministic models of air pollutant behavior; however, this is both complex, as the underlying molecular interactions in the atmosphere need to be simulated, and often inaccurate. As a result, with greater computing power in the twenty-first century, using machine learning methods for forecasting air pollution has become more popular. This paper investigates the use of the LSTM recurrent neural network (RNN) as a framework for forecasting in the future, based on time series data of pollution and meteorological information in Beijing. Due to the sequence dependencies associated with large-scale and longer time series datasets, RNNs, and in particular LSTM models, are well-suited. Our results show that the LSTM framework produces equivalent accuracy when predicting future timesteps compared to the baseline support vector regression for a single timestep. Using our LSTM framework, we can now extend the prediction from a single timestep out to 5 to 10 hours into the future. This is promising in the quest for forecasting urban air quality and leveraging that insight to enact beneficial policy.", "title": "" }, { "docid": "ad86262394b1633243ae44d1f43c1e68", "text": "OBJECTIVE\nTo study dimensional alterations of the alveolar ridge that occurred following tooth extraction as well as processes of bone modelling and remodelling associated with such change.\n\n\nMATERIAL AND METHODS\nTwelve mongrel dogs were included in the study. In both quadrants of the mandible incisions were made in the crevice region of the 3rd and 4th premolars. Minute buccal and lingual full thickness flaps were elevated. The four premolars were hemi-sected. The distal roots were removed. The extraction sites were covered with the mobilized gingival tissue. The extractions of the roots and the sacrifice of the dogs were staggered in such a manner that all dogs contributed with sockets representing 1, 2, 4 and 8 weeks of healing. 
The animals were sacrificed and tissue blocks containing the extraction socket were dissected, decalcified in EDTA, embedded in paraffin and cut in the buccal-lingual plane. The sections were stained in haematoxyline-eosine and examined in the microscope.\n\n\nRESULTS\nIt was demonstrated that marked dimensional alterations occurred during the first 8 weeks following the extraction of mandibular premolars. Thus, in this interval there was a marked osteoclastic activity resulting in resorption of the crestal region of both the buccal and the lingual bone wall. The reduction of the height of the walls was more pronounced at the buccal than at the lingual aspect of the extraction socket. The height reduction was accompanied by a \"horizontal\" bone loss that was caused by osteoclasts present in lacunae on the surface of both the buccal and the lingual bone wall.\n\n\nCONCLUSIONS\nThe resorption of the buccal/lingual walls of the extraction site occurred in two overlapping phases. During phase 1, the bundle bone was resorbed and replaced with woven bone. Since the crest of the buccal bone wall was comprised solely of bundle this modelling resulted in substantial vertical reduction of the buccal crest. Phase 2 included resorption that occurred from the outer surfaces of both bone walls. The reason for this additional bone loss is presently not understood.", "title": "" }, { "docid": "08004e3adc08e395732cc121a96b7300", "text": "Impressive image captioning results (i.e., an objective description for an image) are achieved with plenty of training pairs. In this paper, we take one step further to investigate the creation of narrative paragraph for a photo stream. This task is even more challenging due to the difficulty in modeling an ordered photo sequence and in generating a relevant paragraph with expressive language style for storytelling. The difficulty can even be exacerbated by the limited training data, so that existing approaches almost focus on search-based solutions. To deal with these challenges, we propose a sequenceto-sequence modeling approach with reinforcement learning and adversarial training. First, to model the ordered photo stream, we propose a hierarchical recurrent neural network as story generator, which is optimized by reinforcement learning with rewards. Second, to generate relevant and story-style paragraphs, we design the rewards with two critic networks, including a multi-modal and a language-style discriminator. Third, we further consider the story generator and reward critics as adversaries. The generator aims to create indistinguishable paragraphs to human-level stories, whereas the critics aim at distinguishing them and further improving the generator by policy gradient. Experiments on three widely-used datasets show the effectiveness, against state-of-the-art methods with relative increase of 20.2% by METEOR. We also show the subjective preference for the proposed approach over the baselines through a user study with 30 human subjects.", "title": "" }, { "docid": "997b9f66d7695c8694936f2f0965d197", "text": "The DETER project aims to advance cybersecurity research and education. Over the past seven years, the project has focused on improving and redefining the methods, technology, and infrastructure for developing cyberdefense technology. The project's research results are put into practice by DeterLab, a public, free-for-use experimental facility available to researchers and educators worldwide. 
Educators can use DeterLab's exercises to teach cybersecurity technology and practices. This use of DeterLab provides valuable feedback on DETER innovations and helps grow the pool of cybersecurity innovators and cyberdefenders.", "title": "" }, { "docid": "4088b1148b5631f91f012ddc700cc136", "text": "BACKGROUND\nAny standard skin flap of the body including a detectable or identified perforator at its axis can be safely designed and harvested in a free-style fashion.\n\n\nMETHODS\nFifty-six local free-style perforator flaps in the head and neck region, 33 primary and 23 recycle flaps, were performed in 53 patients. The authors introduced the term \"recycle\" to describe a perforator flap harvested within the borders of a previously transferred flap. A Doppler device was routinely used preoperatively for locating perforators in the area adjacent to a given defect. The final flap design and degree of mobilization were decided intraoperatively, depending on the location of the most suitable perforator and the ability to achieve primary closure of the donor site. Based on clinical experience, the authors suggest a useful classification of local free-style perforator flaps.\n\n\nRESULTS\nAll primary and 20 of 23 recycle free-style perforator flaps survived completely, providing tension-free coverage and a pleasing final contour for patients. In the remaining three recycle cases, the skeletonization of the pedicle resulted in pedicle damage, because of surrounding postradiotherapy scarring and flap failure. All donor sites except one were closed primarily, and all of them healed without any complications.\n\n\nCONCLUSIONS\nThe free-style concept has significantly increased the potential and versatility of the standard local and recycled head and neck flap alternatives for moderate to large defects, providing a more robust, custom-made, tissue-sparing, and cosmetically superior outcome in a one-stage procedure, with minimal donor-site morbidity.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.", "title": "" }, { "docid": "717b685f6d0ac94555dcf1b3d209b2be", "text": "Human faces in surveillance videos often suffer from severe image blur, dramatic pose variations, and occlusion. In this paper, we propose a comprehensive framework based on Convolutional Neural Networks (CNN) to overcome challenges in video-based face recognition (VFR). First, to learn blur-robust face representations, we artificially blur training data composed of clear still images to account for a shortfall in real-world video training data. Using training data composed of both still images and artificially blurred data, CNN is encouraged to learn blur-insensitive features automatically. Second, to enhance robustness of CNN features to pose variations and occlusion, we propose a Trunk-Branch Ensemble CNN model (TBE-CNN), which extracts complementary information from holistic face images and patches cropped around facial components. TBE-CNN is an end-to-end model that extracts features efficiently by sharing the low- and middle-level convolutional layers between the trunk and branch networks. Third, to further promote the discriminative power of the representations learnt by TBE-CNN, we propose an improved triplet loss function. Systematic experiments justify the effectiveness of the proposed techniques. Most impressively, TBE-CNN achieves state-of-the-art performance on three popular video face databases: PaSC, COX Face, and YouTube Faces. 
With the proposed techniques, we also obtain the first place in the BTAS 2016 Video Person Recognition Evaluation.", "title": "" }, { "docid": "4681e8f07225e305adfc66cd1b48deb8", "text": "Collaborative work among students, while an important topic of inquiry, needs further treatment as we still lack the knowledge regarding obstacles that students face, the strategies they apply, and the relations among personal and group aspects. This article presents a diary study of 54 master’s students conducting group projects across four semesters. A total of 332 diary entries were analysed using the C5 model of collaboration that incorporates elements of communication, contribution, coordination, cooperation and collaboration. Quantitative and qualitative analyses show how these elements relate to one another for students working on collaborative projects. It was found that face-to-face communication related positively with satisfaction and group dynamics, whereas online chat correlated positively with feedback and closing the gap. Managing scope was perceived to be the most common challenge. The findings suggest the varying affordances and drawbacks of different methods of communication, collaborative work styles and the strategies of group members.", "title": "" }, { "docid": "7f22d49801bad71b3649d515c41290e5", "text": "Measurements of the impact and history of research literature provide a useful complement to scientific digital library collections. Bibliometric indicators have been extensively studied, mostly in the context of journals. However, journal-based metrics poorly capture topical distinctions in fast-moving fields, and are increasingly problematic with the rise of open-access publishing. Recent developments in latent topic models have produced promising results for automatic sub-field discovery. The fine-grained, faceted topics produced by such models provide a clearer view of the topical divisions of a body of research literature and the interactions between those divisions. We demonstrate the usefulness of topic models in measuring impact by applying a new phrase-based topic discovery model to a collection of 300,000 Computer Science publications, collected by the Rexa automatic citation indexing system.", "title": "" }, { "docid": "51c14998480e2b1063b727bf3e4f4ad0", "text": "With the rapid growth of multimedia information, the font library has become a part of people’s work life. Compared to the Western alphabet language, it is difficult to create new font due to huge quantity and complex shape. At present, most of the researches on automatic generation of fonts use traditional methods requiring a large number of rules and parameters set by experts, which are not widely adopted. This paper divides Chinese characters into strokes and generates new font strokes by fusing the styles of two existing font strokes and assembling them into new fonts. This approach can effectively improve the efficiency of font generation, reduce the costs of designers, and is able to inherit the style of existing fonts. In the process of learning to generate new fonts, the popular of deep learning areas, Generative Adversarial Nets has been used. 
Compared with the traditional method, it can generate higher quality fonts without well-designed and complex loss function.", "title": "" }, { "docid": "de119196672efda310f457b15f0b6e63", "text": "Agile processes focus on facilitating early and fast production of working code, and are based on software development process models that support iterative, incremental development of software. Although agile methods have existed for a number of years now, answers to questions concerning the suitability of agile processes to particular software development environments are still often based on anecdotal accounts of experiences. An appreciation of the (often unstated) assumptions underlying agile processes can lead to a better understanding of the applicability of agile processes to particular situations. Agile processes are less likely to be applicable in situations in which core assumptions do not hold. This paper examines the principles and advocated practices of agile processes to identify underlying assumptions. The paper also identifies limitations that may arise from these assumptions and outlines how the limitations can be addresses by incorporating other software development techniques and practices into agile development environments.", "title": "" }, { "docid": "a25839666b7e208810979dc93d20f950", "text": "Energy consumption management has become an essential concept in cloud computing. In this paper, we propose a new power aware load balancing, named Bee-MMT (artificial bee colony algorithm-Minimal migration time), to decline power consumption in cloud computing; as a result of this decline, CO2 production and operational cost will be decreased. According to this purpose, an algorithm based on artificial bee colony algorithm (ABC) has been proposed to detect over utilized hosts and then migrate one or more VMs from them to reduce their utilization; following that we detect underutilized hosts and, if it is possible, migrate all VMs which have been allocated to these hosts and then switch them to the sleep mode. However, there is a trade-off between energy consumption and providing high quality of service to the customers. Consequently, we consider SLA Violation as a metric to qualify the QOS that require to satisfy the customers. The results show that the proposed method can achieve greater power consumption saving than other methods like LR-MMT (local regression-Minimal migration time), DVFS (Dynamic Voltage Frequency Scaling), IQR-MMT (Interquartile Range-MMT), MAD-MMT (Median Absolute Deviation) and non-power aware.", "title": "" }, { "docid": "a5052a27ebbfb07b02fa18b3d6bff6fc", "text": "Popular techniques for domain adaptation such as the feature augmentation method of Daumé III (2009) have mostly been considered for sparse binary-valued features, but not for dense realvalued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. 
In experiments on slot tagging over 17 domains, our methods give clear performance improvement over Daumé III (2009) applied on feature-rich CRFs.", "title": "" }, { "docid": "14dd650afb3dae58ffb1a798e065825a", "text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance. Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.", "title": "" }, { "docid": "aba638a83116131a62dcce30a7470252", "text": "A general method is proposed to automatically generate a DfT solution aiming at the detection of catastrophic faults in analog and mixed-signal integrated circuits. The approach consists in modifying the topology of the circuit by pulling up (down) nodes and then probing differentiating node voltages. The method generates a set of optimal hardware implementations addressing the multi-objective problem such that the fault coverage is maximized and the silicon overhead is minimized. The new method was applied to a real-case industrial circuit, demonstrating a nearly 100 percent coverage at the expense of an area increase of about 5 percent.", "title": "" }, { "docid": "76eef8117ac0bc5dbb0529477d10108d", "text": "Most existing switched-capacitor (SC) DC-DC converters only offer a few voltage conversion ratios (VCRs), leading to significant efficiency fluctuations under wide input/output dynamics (e.g. up to 30% in [1]). Consequently, systematic SC DC-DC converters with fine-grained VCRs (FVCRs) become attractive to achieve high efficiency over a wide operating range. Both the Recursive SC (RSC) [2,3] and Negator-based SC (NSC) [4] topologies offer systematic FVCR generations with high conductance, but their binary-switching nature fundamentally results in considerable parasitic loss. In bulk CMOS, the restriction of using low-parasitic MIM capacitors for high efficiency ultimately limits their achievable power density to <1mW/mm2. This work reports a fully integrated fine-grained buck-boost SC DC-DC converter with 24 VCRs. It features an algorithmic voltage-feed-in (AVFI) topology to systematically generate any arbitrary buck-boost rational ratio with optimal conduction loss while achieving the lowest parasitic loss compared with [2,4]. With 10 main SC cells (MCs) and 10 auxiliary SC cells (ACs) controlled by the proposed reference-selective bootstrapping driver (RSBD) for wide-range efficient buck-boost operations, the AVFI converter in 65nm bulk CMOS achieves a peak efficiency of 84.1% at a power density of 13.2mW/mm2 over a wide range of input (0.22 to 2.4V) and output (0.85 to 1.2V).", "title": "" }, { "docid": "b692c9d802437b2935dad23e334529e1", "text": "Spike timing dependent plasticity (STDP) is a learning rule that modifies synaptic strength as a function of the relative timing of pre- and postsynaptic spikes. When a neuron is repeatedly presented with similar inputs, STDP is known to have the effect of concentrating high synaptic weights on afferents that systematically fire early, while postsynaptic spike latencies decrease. 
Here we use this learning rule in an asynchronous feedforward spiking neural network that mimics the ventral visual pathway and shows that when the network is presented with natural images, selectivity to intermediate-complexity visual features emerges. Those features, which correspond to prototypical patterns that are both salient and consistently present in the images, are highly informative and enable robust object recognition, as demonstrated on various classification tasks. Taken together, these results show that temporal codes may be a key to understanding the phenomenal processing speed achieved by the visual system and that STDP can lead to fast and selective responses.", "title": "" } ]
scidocsrr
1ac3321f1620a126231100a530a3d1d2
A survey: Several technologies of non-orthogonal transmission for 5G
[ { "docid": "9b220cb4c3883cb959d1665abefa5406", "text": "Time domain synchronous OFDM (TDS-OFDM) has a higher spectrum and energy efficiency than standard cyclic prefix OFDM (CP-OFDM) by replacing the unknown CP with a known pseudorandom noise (PN) sequence. However, due to mutual interference between the PN sequence and the OFDM data block, TDS-OFDM cannot support high-order modulation schemes such as 256QAM in realistic static channels with large delay spread or high-definition television (HDTV) delivery in fast fading channels. To solve these problems, we propose the idea of using multiple inter-block-interference (IBI)-free regions of small size to realize simultaneous multi-channel reconstruction under the framework of structured compressive sensing (SCS). This is enabled by jointly exploiting the sparsity of wireless channels as well as the characteristic that path delays vary much slower than path gains. In this way, the mutually conditional time-domain channel estimation and frequency-domain data demodulation in TDS-OFDM can be decoupled without the use of iterative interference removal. The Cramér-Rao lower bound (CRLB) of the proposed estimation scheme is also derived. Moreover, the guard interval amplitude in TDS-OFDM can be reduced to improve the energy efficiency, which is infeasible for CP-OFDM. Simulation results demonstrate that the proposed SCS-aided TDS-OFDM scheme has a higher spectrum and energy efficiency than CP-OFDM by more than 10% and 20% respectively in typical applications.", "title": "" }, { "docid": "d23fc72c7fb3cbbc9120d2ab9fc14e75", "text": "Generalized frequency division multiplexing (GFDM) is a new concept that can be seen as a generalization of traditional OFDM. The scheme is based on the filtered multi-carrier approach and can offer an increased flexibility, which will play a significant role in future cellular applications. In this paper we present the benefits of the pulse shaped carriers in GFDM. We show that based on the FFT/IFFT algorithm, the scheme can be implemented with reasonable computational effort. Further, to be able to relate the results to the recent LTE standard, we present a suitable set of parameters for GFDM.", "title": "" } ]
[ { "docid": "b917ec2f16939a819625b6750597c40c", "text": "In an increasing number of scientific disciplines, large data collections are emerging as important community resources. In domains as diverse as global climate change, high energy physics, and computational genomics, the volume of interesting data is already measured in terabytes and will soon total petabytes. The communities of researchers that need to access and analyze this data (often using sophisticated and computationally expensive techniques) are often large and are almost always geographically distributed, as are the computing and storage resources that these communities rely upon to store and analyze their data [17]. This combination of large dataset size, geographic distribution of users and resources, and computationally intensive analysis results in complex and stringent performance demands that are not satisfied by any existing data management infrastructure. A large scientific collaboration may generate many queries, each involving access to—or supercomputer-class computations on—gigabytes or terabytes of data. Efficient and reliable execution of these queries may require careful management of terabyte caches, gigabit/s data transfer over wide area networks, coscheduling of data transfers and supercomputer computation, accurate performance estimations to guide the selection of dataset replicas, and other advanced techniques that collectively maximize use of scarce storage, networking, and computing resources. The literature offers numerous point solutions that address these issues (e.g., see [17, 14, 19, 3]). But no integrating architecture exists that allows us to identify requirements and components common to different systems and hence apply different technologies in a coordinated fashion to a range of dataintensive petabyte-scale application domains. Motivated by these considerations, we have launched a collaborative effort to design and produce such an integrating architecture. We call this architecture the data grid, to emphasize its role as a specialization and extension of the “Grid” that has emerged recently as an integrating infrastructure for distributed computation [10, 20, 15]. Our goal in this effort is to define the requirements that a data grid must satisfy and the components and APIs that will be required in its implementation. We hope that the definition of such an architecture will accelerate progress on petascale data-intensive computing by enabling the integration of currently disjoint approaches, encouraging the deployment of basic enabling technologies, and revealing technology gaps that require further research and development. In addition, we plan to construct a reference implementation for this architecture so as to enable large-scale experimentation.", "title": "" }, { "docid": "7f23e4b069d6c76a3858c3255269edfd", "text": "This study examines the case of a sophomore high school history class where Making History, a video game designed with educational purposes in mind, is used in the classroom to teach about World War II. Data was gathered using observation, focus group and individual interviews, and document analysis. The high school was a rural school located in a small town in the Midwestern United States. The teacher had been teaching with the game for several years and spent one school week teaching World War II, with students playing the game in class for three days of that week. 
The purpose of this study was to understand teacher and student experiences with and perspectives on the in-class use of an educational video game. Results showed that the use of the video game resulted in a shift from a traditional teachercentered learning environment to a student-centered environment where the students were much more active and engaged. Also, the teacher had evolved implementation strategies based on his past experiences using the game to maximize the focus on learning. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2e92ddaabf0b0e3fdc4a2f8fdc074361", "text": "Firms spend a significant part of their marketing budgets on sales promotions. Retail (2012) indicates that during 1997–2011, promotion accounted for roughly 75% of marketing expenditures for US packaged goods manufacturers; the other 25% was for advertising. In 2011, 58% of the budget was spent on promotion to the trade (i.e., from manufacturers to retailers), and 15% on manufacturer promotions to consumers. Since the impact of promotions on sales is usually immediate and strong (Blattberg et al. 1995), promotions are attractive to results-oriented managers seeking to increase sales in the short term (Neslin 2002). In a meta-analysis, Bijmolt et al. (2005) report that the average short-term sales promotion elasticity is –3.63, which implies that a 20% temporary price cut leads to a 73% rise in sales. There are few, if any, other marketing instruments that are equally effective. Because of this, coupled with the availability of scanner data, marketing researchers have been very active in developing models for analyzing sales promotions. Most applications analyze promotions for consumer packaged goods, and this chapter reflects this practice. Nevertheless, many of the models could be applied to other settings as well. This chapter discusses models for measuring sales promotion effects. Part I (Sects. 2.1–2.10) focuses on descriptive models, i.e., models that describe and explain sales promotion phenomena. We start by discussing promotions to consumers. Sections 2.1 through 2.5 focus on analyzing the direct impact of promotions on sales and decomposing that impact into a variety of sources. Section 2.6", "title": "" }, { "docid": "da1d1e9ddb5215041b9565044b9feecb", "text": "As multiprocessors with large numbers of processors become more prevalent, we face the task of developing scheduling algorithms for the multiprogrammed use of such machines. The scheduling decisions must take into account the number of processors available, the overall system load, and the ability of each application awaiting activation to make use of a given number of processors.\nThe parallelism within an application can be characterized at a number of different levels of detail. At the highest level, it might be characterized by a single parameter (such as the proportion of the application that is sequential, or the average number of processors the application would use if an unlimited number of processors were available). At the lowest level, representing all the parallelism in the application requires the full data dependency graph (which is more information than is practically manageable).\nIn this paper, we examine the quality of processor allocation decisions under multiprogramming that can be made with several different high-level characterizations of application parallelism. 
We demonstrate that decisions based on parallelism characterizations with two to four parameters are superior to those based on single-parameter characterizations (such as fraction sequential or average parallelism). The results are based predominantly on simulation, with some guidance from a simple analytic model.", "title": "" }, { "docid": "d338c807948016bf978aa7a03841f292", "text": "Emotions accompany everyone in the daily life, playing a key role in non-verbal communication, and they are essential to the understanding of human behavior. Emotion recognition could be done from the text, speech, facial expression or gesture. In this paper, we concentrate on recognition of “inner” emotions from electroencephalogram (EEG) signals as humans could control their facial expressions or vocal intonation. The need and importance of the automatic emotion recognition from EEG signals has grown with increasing role of brain computer interface applications and development of new forms of human-centric and human-driven interaction with digital media. We propose fractal dimension based algorithm of quantification of basic emotions and describe its implementation as a feedback in 3D virtual environments. The user emotions are recognized and visualized in real time on his/her avatar adding one more so-called “emotion dimension” to human computer interfaces.", "title": "" }, { "docid": "13211210ca0a3fda62fd44383eca6b52", "text": "Cancer is the most important cause of death for both men and women. The early detection of cancer can be helpful in curing the disease completely. So the requirement of techniques to detect the occurrence of cancer nodule in early stage is increasing. A disease that is commonly misdiagnosed is lung cancer. Earlier diagnosis of Lung Cancer saves enormous lives, failing which may lead to other severe problems causing sudden fatal end. Its cure rate and prediction depends mainly on the early detection and diagnosis of the disease. One of the most common forms of medical malpractices globally is an error in diagnosis. Knowledge discovery and data mining have found numerous applications in business and scientific domain. Valuable knowledge can be discovered from application of data mining techniques in healthcare system. In this study, we briefly examine the potential use of classification based data mining techniques such as Rule based, Decision tree, Naïve Bayes and Artificial Neural Network to massive volume of healthcare data. The healthcare industry collects huge amounts of healthcare data which, unfortunately, are not “mined” to discover hidden information. For data preprocessing and effective decision making One Dependency Augmented Naïve Bayes classifier (ODANB) and naive creedal classifier 2 (NCC2) are used. This is an extension of naïve Bayes to imprecise probabilities that aims at delivering robust classifications also when dealing with small or incomplete data sets. Discovery of hidden patterns and relationships often goes unexploited. Diagnosis of Lung Cancer Disease can answer complex “what if” queries which traditional decision support systems cannot. Using generic lung cancer symptoms such as age, sex, Wheezing, Shortness of breath, Pain in shoulder, chest, arm, it can predict the likelihood of patients getting a lung cancer disease. Aim of the paper is to propose a model for early detection and correct diagnosis of the disease which will help the doctor in saving the life of the patient. 
Keywords—Lung cancer, Naive Bayes, ODANB, NCC2, Data Mining, Classification.", "title": "" }, { "docid": "7bb0ea76acaf4e23312ae62d0b6321db", "text": "The European honey bee exploits floral resources efficiently and may therefore compete with solitary wild bees. Hence, conservationists and bee keepers are debating about the consequences of beekeeping for the conservation of wild bees in nature reserves. We observed flower-visiting bees on flowers of Calluna vulgaris in sites differing in the distance to the next honey-bee hive and in sites with hives present and absent in the Lüneburger Heath, Germany. Additionally, we counted wild bee ground nests in sites that differ in their distance to the next hive and wild bee stem nests and stem-nesting bee species in sites with hives present and absent. We did not observe fewer honey bees or higher wild bee flower visits in sites with different distances to the next hive (up to 1,229 m). However, wild bees visited fewer flowers and honey bee visits increased in sites containing honey-bee hives and in sites containing honey-bee hives we found fewer stem-nesting bee species. The reproductive success, measured as number of nests, was not affected by distance to honey-bee hives or their presence but by availability and characteristics of nesting resources. Our results suggest that beekeeping in the Lüneburg Heath can affect the conservation of stem-nesting bee species richness but not the overall reproduction either of stem-nesting or of ground-nesting bees. Future experiments need control sites with larger distances than 500 m to hives. Until more information is available, conservation efforts should forgo to enhance honey bee stocking rates but enhance the availability of nesting resources.", "title": "" }, { "docid": "e9474d646b9da5e611475f4cdfdfc30e", "text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. 
Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.", "title": "" }, { "docid": "6b125ab0691988a5836855346f277970", "text": "Cardol (C₁₅:₃), isolated from cashew (Anacardium occidentale L.) nut shell liquid, has been shown to exhibit bactericidal activity against various strains of Staphylococcus aureus, including methicillin-resistant strains. The maximum level of reactive oxygen species generation was detected at around the minimum bactericidal concentration of cardol, while reactive oxygen species production drastically decreased at doses above the minimum bactericidal concentration. The primary response for bactericidal activity around the bactericidal concentration was noted to primarily originate from oxidative stress such as intracellular reactive oxygen species generation. High doses of cardol (C₁₅:₃) were shown to induce leakage of K⁺ from S. aureus cells, which may be related to the decrease in reactive oxygen species. Antioxidants such as α-tocopherol and ascorbic acid restricted reactive oxygen species generation and restored cellular damage induced by the lipid. Cardol (C₁₅:₃) overdose probably disrupts the native membrane-associated function as it acts as a surfactant. The maximum antibacterial activity of cardols against S. aureus depends on their log P values (partition coefficient in octanol/water) and is related to their similarity to those of anacardic acids isolated from the same source.", "title": "" }, { "docid": "bd721a6a06348bdf5624edc4ba176e3b", "text": "We show that the task of question answering (QA) can significantly benefit from the transfer learning of models trained on a different large, fine-grained QA dataset. We achieve the state of the art in two well-studied QA datasets, WikiQA and SemEval-2016 (Task 3A), through a basic transfer learning technique from SQuAD. For WikiQA, our model outperforms the previous best model by more than 8%. We demonstrate that finer supervision provides better guidance for learning lexical and syntactic information than coarser supervision, through quantitative results and visual analysis. We also show that a similar transfer learning procedure achieves the state of the art on an entailment task.", "title": "" }, { "docid": "1edd6cb3c6ed4657021b6916efbc23d9", "text": "Siamese-like networks, Streetscore-CNN (SS-CNN) and Ranking SS-CNN, to predict pairwise comparisons Figure 1: User Interface for Crowdsourced Online Game Performance Analysis • SS-CNN: We calculate the % of pairwise comparisons in test set predicted correctly by (1) Softmax of output neurons in final layer (2) comparing TrueSkill scores [2] obtained from synthetic pairwise comparisons from the CNN (3) extracting features from penultimate layer of CNN and feeding pairwise feature representations to a RankSVM [3] • RSS-CNN: We compare the ranking function outputs for both images in a test pair to decide which image wins, and calculate the binary prediction accuracy.", "title": "" }, { "docid": "3da4bcf1e3bcb3c5feb27fd05e43da80", "text": "This paper introduces a texture representation suitable for recognizing images of textured surfaces under a wide range of transformations, including viewpoint changes and nonrigid deformations. At the feature extraction stage, a sparse set of affine Harris and Laplacian regions is found in the image. 
Each of these regions can be thought of as a texture element having a characteristic elliptic shape and a distinctive appearance pattern. This pattern is captured in an affine-invariant fashion via a process of shape normalization followed by the computation of two novel descriptors, the spin image and the RIFT descriptor. When affine invariance is not required, the original elliptical shape serves as an additional discriminative feature for texture recognition. The proposed approach is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.", "title": "" }, { "docid": "86f273bc450b9a3b6acee0e8d183b3cd", "text": "This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that make reference to MSR Action3D, the most used dataset that includes depth information acquired from a RGB-D device, has been performed. We found that the validation method used by each work differs from the others. So, a direct comparison among works cannot be made. However, almost all the works present their results comparing them without taking into account this issue. Therefore, we present different rankings according to the methodology used for the validation in orden to clarify the existing confusion.", "title": "" }, { "docid": "8b94a3040ee23fa3d4403b14b0f550e2", "text": "Reactive programming has recently gained popularity as a paradigm that is well-suited for developing event-driven and interactive applications. It facilitates the development of such applications by providing abstractions to express time-varying values and automatically managing dependencies between such values. A number of approaches have been recently proposed embedded in various languages such as Haskell, Scheme, JavaScript, Java, .NET, etc. This survey describes and provides a taxonomy of existing reactive programming approaches along six axes: representation of time-varying values, evaluation model, lifting operations, multidirectionality, glitch avoidance, and support for distribution. From this taxonomy, we observe that there are still open challenges in the field of reactive programming. For instance, multidirectionality is supported only by a small number of languages, which do not automatically track dependencies between time-varying values. Similarly, glitch avoidance, which is subtle in reactive programs, cannot be ensured in distributed reactive programs using the current techniques.", "title": "" }, { "docid": "37651559403dca847dc0b4baed59d7d7", "text": "Reading strategies have been shown to improve comprehension levels, especially for readers lacking adequate prior knowledge. Just as the process of knowledge accumulation is time-consuming for human readers, it is resource-demanding to impart rich general domain knowledge into a language model via pre-training (Radford et al., 2018; Devlin et al., 2018). 
Inspired by reading strategies identified in cognitive science, and given limited computational resources — just a pre-trained model and a fixed number of training instances — we therefore propose three simple domain-independent strategies aimed to improve non-extractive machine reading comprehension (MRC): (i) BACK AND FORTH READING that considers both the original and reverse order of an input sequence, (ii) HIGHLIGHTING, which adds a trainable embedding to the text embedding of tokens that are relevant to the question and candidate answers, and (iii) SELF-ASSESSMENT that generates practice questions and candidate answers directly from the text in an unsupervised manner. By fine-tuning a pre-trained language model (Radford et al., 2018) with our proposed strategies on the largest existing general domain multiple-choice MRC dataset RACE, we obtain a 5.8% absolute increase in accuracy over the previous best result achieved by the same pre-trained model fine-tuned on RACE without the use of strategies. We further fine-tune the resulting model on a target task, leading to new stateof-the-art results on six representative nonextractive MRC datasets from different domains (i.e., ARC, OpenBookQA, MCTest, MultiRC, SemEval-2018, and ROCStories). These results indicate the effectiveness of the proposed strategies and the versatility ∗ This work was done when the author was an intern at Tencent AI Lab and general applicability of our fine-tuned models that incorporate the strategies.", "title": "" }, { "docid": "5c5e9a93b4838cbebd1d031a6d1038c4", "text": "Live migration of virtual machines (VMs) is key feature of virtualization that is extensively leveraged in IaaS cloud environments: it is the basic building block of several important features, such as load balancing, pro-active fault tolerance, power management, online maintenance, etc. While most live migration efforts concentrate on how to transfer the memory from source to destination during the migration process, comparatively little attention has been devoted to the transfer of storage. This problem is gaining increasing importance: due to performance reasons, virtual machines that run large-scale, data-intensive applications tend to rely on local storage, which poses a difficult challenge on live migration: it needs to handle storage transfer in addition to memory transfer. This paper proposes a memory migration independent approach that addresses this challenge. It relies on a hybrid active push / prioritized prefetch strategy, which makes it highly resilient to rapid changes of disk state exhibited by I/O intensive workloads. At the same time, it is minimally intrusive in order to ensure a maximum of portability with a wide range of hypervisors. Large scale experiments that involve multiple simultaneous migrations of both synthetic benchmarks and a real scientific application show improvements of up to 10x faster migration time, 10x less bandwidth consumption and 8x less performance degradation over state-of-art.", "title": "" }, { "docid": "e4b3e5fa0820dbbe07f1ac005dc796dd", "text": "Alzheimer's disease is an irreversible, progressive neurodegenerative disorder. Various therapeutic approaches are being used to improve the cholinergic neurotransmission, but their role in AD pathogenesis is still unknown. Although, an increase in tau protein concentration in CSF has been described in AD, but several issues remains unclear. 
Extensive and accurate analysis of CSF could be helpful to define presence of tau proteins in physiological conditions, or released during the progression of neurodegenerative disease. The amyloid cascade hypothesis postulates that the neurodegeneration in AD caused by abnormal accumulation of amyloid beta (Aβ) plaques in various areas of the brain. The amyloid hypothesis has continued to gain support over the last two decades, particularly from genetic studies. Therefore, current research progress in several areas of therapies shall provide an effective treatment to cure this devastating disease. This review critically evaluates general biochemical and physiological functions of Aβ directed therapeutics and their relevance.", "title": "" }, { "docid": "21df2b20c9ecd6831788e00970b3ca79", "text": "Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as, the ability to ensure security, performance guarantees or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called CloudNaaS. Customers can leverage CloudNaaS to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making CloudNaaS highly efficient. We evaluate an OpenFlow-based prototype of CloudNaaS and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.", "title": "" }, { "docid": "0bd86b41fb7a183b5ea0f2e7836040ab", "text": "Profiling driving behavior has become a relevant aspect in fleet management, automotive insurance and eco-driving. Detecting inefficient or aggressive drivers can help reducing fleet degradation, insurance policy cost and fuel consumption. In this paper, we present a Fuzzy-Logic based driver scoring mechanism that uses smartphone sensing data, including accelerometers and GPS. In order to evaluate the proposed mechanism, we have collected traces from a testbed consisting in 20 vehicles equipped with an Android sensing application we have developed to this end. The results show that the proposed sensing variables using smartphones can be merged to provide each driver with a single score.", "title": "" }, { "docid": "9081cb169f74b90672f84afa526f40b3", "text": "The paper presents an analysis of the main mechanisms of decryption of SSL/TLS traffic. Methods and technologies for detecting malicious activity in encrypted traffic that are used by leading companies are also considered. Also, the approach for intercepting and decrypting traffic transmitted over SSL/TLS is developed, tested and proposed. The developed approach has been automated and can be used for remote listening of the network, which will allow to decrypt transmitted data in a mode close to real time.", "title": "" } ]
scidocsrr
0f1a5d7cfb1f359258cf0df141d9e214
An Energy Efficient Full-Frame Feature Extraction Accelerator With Shift-Latch FIFO in 28 nm CMOS
[ { "docid": "bb542460bf9196ef1905cecdce252bf3", "text": "Wireless sensor nodes have many compelling applications such as smart buildings, medical implants, and surveillance systems. However, existing devices are bulky, measuring >;1cm3, and they are hampered by short lifetimes and fail to realize the “smart dust” vision of [1]. Smart dust requires a mm3-scale, wireless sensor node with perpetual energy harvesting. Recently two application-specific implantable microsystems [2][3] demonstrated the potential of a mm3-scale system in medical applications. However, [3] is not programmable and [2] lacks a method for re-programming or re-synchronizing once encapsulated. Other practical issues remain unaddressed, such as a means to protect the battery during the time period between system assembly and deployment and the need for flexible design to enable use in multiple application domains.", "title": "" } ]
[ { "docid": "b4367baa4228b6d498da8bac657da17f", "text": "Information system design and optimum sizing is a very complex task. Theoretical research and practitioners often tackle the optimization problem by applying specific techniques for the optimization of individual design phases, usually leading to local optima. Conversely, this paper proposes the definition of a design methodology based on an evolutionary approach to the optimization of the client/server-farm distributed structure, which is typical of a distributed information technology (IT) architecture. The optimization problem consists of finding the minimum-cost physical systems that satisfy all architectural requirements given by the designer. The proposed methodology allows for the identification of the architectural solution that minimizes costs, against different information system requirements and multiple design alternatives, thorough a genetic-based exploration of the solution space. Experimental results show that costs can be significantly reduced with respect to conventional approaches adopted by IT designers and available in the professional literature.", "title": "" }, { "docid": "0bc847391ea276e19d91bdb0ab14a5e5", "text": "Modern machine learning models are beginning to rival human performance on some realistic object recognition tasks, but we still lack a full understanding of how the human brain solves this same problem. This thesis combines knowledge from machine learning and computational neuroscience to create models of human object recognition that are increasingly realistic both in their treatment of low-level neural mechanisms and in their reproduction of high-level human behaviour. First, I present extensions to the Neural Engineering Framework to make its preferred type of model—the “fixed-encoding” network—more accurate for object recognition tasks. These extensions include better distributions—such as Gabor filters—for the encoding weights, and better loss functions—namely weighted squared loss, softmax loss, and hinge loss—to solve for decoding weights. Second, I introduce increased biological realism into deep convolutional neural networks trained with backpropagation, by training them to run using spiking leaky integrate-andfire (LIF) neurons. Convolutional neural networks have been successful in machine learning, and I am able to convert them to spiking networks while retaining similar levels of performance. I present a novel method to smooth the LIF rate response function in order to avoid the common problems associated with differentiating spiking neurons in general and LIF neurons in particular. I also derive a number of novel characterizations of spiking variability, and use these to train spiking networks to be more robust to this variability. Finally, to address the problems with implementing backpropagation in a biological system, I train spiking deep neural networks using the more biological Feedback Alignment algorithm. I examine this algorithm in depth, including many variations on the core algorithm, methods to train using non-differentiable spiking neurons, and some of the limitations of the algorithm. Using these findings, I construct a spiking model that learns online in a biologically realistic manner. The models developed in this thesis help to explain both how spiking neurons in the brain work together to allow us to recognize complex objects, and how the brain may learn this behaviour. 
Their spiking nature allows them to be implemented on highly efficient neuromorphic hardware, opening the door to object recognition on energy-limited devices such as cell phones and mobile robots.", "title": "" }, { "docid": "de333f099bad8a29046453e099f91b84", "text": "Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In the high-frequency trading, forecasting for trading purposes is even a more challenging task, since an automated inference system is required to be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection as well as an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting network is highly interpretable, given its ability to highlight the importance and contribution of each temporal instance, thus allowing further analysis on the time instances of interest. Our experiments in a large-scale limit order book data set show that a two-hidden-layer network utilizing our proposed layer outperforms by a large margin all existing state-of-the-art results coming from much deeper architectures while requiring far fewer computations.", "title": "" }, { "docid": "82357d089e72c1ed259aba565ca97b1c", "text": "In this paper, we present an algorithm to find a sequence of top-down edit operations with minimum cost that transforms an XML document such that it conforms to a schema. It is shown that the algorithm runs in O(p x log p x n), where p is the size of the schema(grammar) and n is the size of the XML document (tree). We have also shown that edit distance with restricted top-down edit operations can be computed the same way.We will also show how to use the edit distances in document classification. Experimental studies have shown that our methods are effective in structure-oriented classification for both real and synthesized data sets.", "title": "" }, { "docid": "7ee422f9238c7e571744753883aea787", "text": "This paper tackles high-dynamic-range (HDR) image reconstruction given only a single low-dynamic-range (LDR) image as input. While the existing methods focus on minimizing the mean-squared-error (MSE) between the target and reconstructed images, we minimize a hybrid loss that consists of perceptual and adversarial losses in addition to HDR-reconstruction loss. The reconstruction loss instead of MSE is more suitable for HDR since it puts more weight on both overand underexposed areas. It makes the reconstruction faithful to the input. Perceptual loss enables the networks to utilize knowledge about objects and image structure for recovering the intensity gradients of saturated and grossly quantized areas. Adversarial loss helps to select the most plausible appearance from multiple solutions. The hybrid loss that combines all the three losses is calculated in logarithmic space of image intensity so that the outputs retain a large dynamic range and meanwhile the learning becomes tractable. Comparative experiments conducted with other state-of-the-art methods demonstrated that our method produces a leap in image quality.", "title": "" }, { "docid": "176636edbd9458b7b87c1bb511e4ed51", "text": "Numerous indigenous healing traditions around the world employ plants with psychoactive effects to facilitate divination and other spiritual healing rituals. 
Southern Africa has often been considered to have relatively few psychoactive plant species of cultural importance, and little has been published on the subject. This paper reports on 85 species of plants that are used for divination by southern Bantu-speaking people. Of these, 39 species (45 %) have other reported psychoactive uses, and a number have established hallucinogenic activity. These findings indicate that psychoactive plants have an important role in traditional healing practices in southern Africa.", "title": "" }, { "docid": "799be9729a01234c236431f5c754de8f", "text": "This meta-analytic review of 42 studies covering 8,009 participants (ages 4-20) examines the relation of moral emotion attributions to prosocial and antisocial behavior. A significant association is found between moral emotion attributions and prosocial and antisocial behaviors (d = .26, 95% CI [.15, .38]; d = .39, 95% CI [.29, .49]). Effect sizes differ considerably across studies and this heterogeneity is attributed to moderator variables. Specifically, effect sizes for predicted antisocial behavior are larger for self-attributed moral emotions than for emotions attributed to hypothetical story characters. Effect sizes for prosocial and antisocial behaviors are associated with several other study characteristics. Results are discussed with respect to the potential significance of moral emotion attributions for the social behavior of children and adolescents.", "title": "" }, { "docid": "853220dc960afe1b4b2137b934b1e235", "text": "Multi-level marketing is a marketing approach that motivates its participants to promote a certain product among their friends. The popularity of this approach increases due to the accessibility of modern social networks, however, it existed in one form or the other long before the Internet age began (the infamous Pyramid scheme that dates back at least a century is in fact a special case of multi-level marketing). This paper lays foundations for the study of reward mechanisms in multi-level marketing within social networks. We provide a set of desired properties for such mechanisms and show that they are uniquely satisfied by geometric reward mechanisms. The resilience of mechanisms to false-name manipulations is also considered; while geometric reward mechanisms fail against such manipulations, we exhibit other mechanisms which are false-name-proof.", "title": "" }, { "docid": "578b3611224988091ba29af702d91d6b", "text": "This paper presents a comparative study of various controllers for the speed control of DC motor. The most commonly used controller for the speed control of dc motor is ProportionalIntegral (P-I) controller. However, the P-I controller has some disadvantages such as: the high starting overshoot, sensitivity to controller gains and sluggish response due to sudden disturbance. So, the relatively new Integral-Proportional (I-P) controller is proposed to overcome the disadvantages of the P-I controller. Further, two Fuzzy logic based controllers namely; Fuzzy control and Neuro-fuzzy control are proposed and the performance these controllers are compared with both P-I and I-P controllers. Simulation results are presented and analyzed for all the controllers. It is observed that fuzzy logic based controllers give better responses than the traditional P-I as well as I-P controller for the speed control of dc motor drives. 
Keywords—Proportional-Integral (P-I) controller, IntegralProportional (I-P) controller, Fuzzy logic control, Neuro-fuzzy control, Speed control, DC Motor drive.", "title": "" }, { "docid": "ac24254a08f447f1090dc39f79298302", "text": "The 3 most often-used performance measures in the cognitive and decision sciences are choice, response or decision time, and confidence. We develop a random walk/diffusion theory-2-stage dynamic signal detection (2DSD) theory-that accounts for all 3 measures using a common underlying process. The model uses a drift diffusion process to account for choice and decision time. To estimate confidence, we assume that evidence continues to accumulate after the choice. Judges then interrupt the process to categorize the accumulated evidence into a confidence rating. The model explains all known interrelationships between the 3 indices of performance. Furthermore, the model also accounts for the distributions of each variable in both a perceptual and general knowledge task. The dynamic nature of the model also reveals the moderating effects of time pressure on the accuracy of choice and confidence. Finally, the model specifies the optimal solution for giving the fastest choice and confidence rating for a given level of choice and confidence accuracy. Judges are found to act in a manner consistent with the optimal solution when making confidence judgments.", "title": "" }, { "docid": "d97a992e8a7275a663883c7ee7e6cb56", "text": "Mindfulness originated in the Buddhist tradition as a way of cultivating clarity of thought. Despite the fact that this behavior is best captured using critical thinking (CT) assessments, no studies have examined the effects of mindfulness on CT or the mechanisms underlying any such possible relationship. Even so, mindfulness has been suggested as being beneficial for CT in higher education. CT is recognized as an important higher-order cognitive process which involves the ability to analyze and evaluate evidence and arguments. Such non-automatic, reflective responses generally require the engagement of executive functioning (EF) which includes updating, inhibition, and shifting of representations in working memory. Based on research showing that mindfulness enhances aspects of EF and certain higher-order cognitive processes, we hypothesized that individuals higher in facets of dispositional mindfulness would demonstrate greater CT performance, and that this relationship would be mediated by EF. Cross-sectional assessment of these constructs in a sample of 178 university students was achieved using the observing and non-reactivity sub-scales of the Five Factor Mindfulness Questionnaire, a battery of EF tasks and the Halpern Critical Thinking Assessment. Our hypotheses were tested by constructing a multiple meditation model which was analyzed using Structural Equation Modeling. Evidence was found for inhibition mediating the relationships between both observing and non-reactivity and CT in different ways. Indirect-only (or full) mediation was demonstrated for the relationship between observing, inhibition, and CT. Competitive mediation was demonstrated for the relationship between non-reactivity, inhibition, and CT. This suggests additional mediators of the relationship between non-reactivity and CT which are not accounted for in this model and have a negative effect on CT in addition to the positive effect mediated by inhibition. 
These findings are discussed in the context of the Default Interventionist Dual Process Theory of Higher-order Cognition and previous studies on mindfulness, self-regulation, EF, and higher-order cognition. In summary, dispositional mindfulness appears to facilitate CT performance and this effect is mediated by the inhibition component of EF. However, this relationship is not straightforward which suggests many possibilities for future research.", "title": "" }, { "docid": "8e878e5083d922d97f8d573c54cbb707", "text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LMResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networkSchool of Mathematical Sciences, Peking University, Beijing, China MGH/BWH Center for Clinical Data Science, Masschusetts General Hospital, Harvard Medical School Center for Data Science in Health and Medicine, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research Beijing International Center for Mathematical Research, Peking University Center for Data Science, Peking University. Correspondence to: Bin Dong <[email protected]>, Quanzheng Li <[email protected]>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). s while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.", "title": "" }, { "docid": "9da15e2851124d6ca1524ba28572f922", "text": "With the growth of mobile data application and the ultimate expectations of 5G technology, the need to expand the capacity of the wireless networks is inevitable. Massive MIMO technique is currently taking a major part of the ongoing research, and expected to be the key player in the new cellular technologies. 
This paper presents an overview of the major aspects related to massive MIMO design, including antenna array general design, configuration, and challenges, in addition to advanced beamforming techniques and channel modeling and estimation issues affecting the implementation of such systems.", "title": "" }, { "docid": "91365154a173be8be29ef14a3a76b08e", "text": "Fraud is a criminal practice for illegitimate gain of wealth or tampering with information. Fraudulent activities are of critical concern because of their severe impact on organizations, communities as well as individuals. Over the last few years, various techniques from different areas such as data mining, machine learning, and statistics have been proposed to deal with fraudulent activities. Unfortunately, the conventional approaches display several limitations, which were addressed largely by advanced solutions proposed with the advent of Big Data. In this paper, we present fraud analysis approaches in the context of Big Data. Then, we study the approaches rigorously and identify their limits by exploiting Big Data analytics.", "title": "" }, { "docid": "61f0e4bc4144ae401bac78d7432ca4cc", "text": "The tasks that an agent will need to solve often are not known during training. However, if the agent knows which properties of the environment are important then, after learning how its actions affect those properties, it may be able to use this knowledge to solve complex tasks without training specifically for them. Towards this end, we consider a setup in which an environment is augmented with a set of user defined attributes that parameterize the features of interest. We propose a method that learns a policy for transitioning between “nearby” sets of attributes, and maintains a graph of possible transitions. Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to get a high level plan, and then uses its low level policy to execute the plan. We show in 3D block stacking, gridworld games, and StarCraft that our model is able to generalize to longer, more complex tasks at test time by composing simpler learned policies.", "title": "" }, { "docid": "2d22bb2c565fa716845f7b3065361200", "text": "Despite the popularity of Twitter for research, there are very few publicly available corpora, and those which are available are either too small or unsuitable for tasks such as event detection. This is partially due to a number of issues associated with the creation of Twitter corpora, including restrictions on the distribution of the tweets and the difficulty of creating relevance judgements at such a large scale. The difficulty of creating relevance judgements for the task of event detection is further hampered by ambiguity in the definition of event. In this paper, we propose a methodology for the creation of an event detection corpus. Specifically, we first create a new corpus that covers a period of 4 weeks and contains over 120 million tweets, which we make available for research. We then propose a definition of event which fits the characteristics of Twitter, and using this definition, we generate a set of relevance judgements aimed specifically at the task of event detection. To do so, we make use of existing state-of-the-art event detection approaches and Wikipedia to generate a set of candidate events with associated tweets. 
We then use crowdsourcing to gather relevance judgements, and discuss the quality of results, including how we ensured integrity and prevented spam. As a result of this process, along with our Twitter corpus, we release relevance judgements containing over 150,000 tweets, covering more than 500 events, which can be used for the evaluation of event detection approaches.", "title": "" }, { "docid": "373830558905e8559592c6173366c367", "text": "In this work, we present a depth-based solution to multi-level menus for selection and manipulation of virtual objects using freehand gestures. Navigation between and through menus is performed using three gesture states that utilize X, Y translations of the finger with boundary crossing. Although presented in a single context, this menu structure can be applied to a myriad of domains requiring several levels of menu data, and serves to supplement existing and emerging menu design for augmented, virtual, and mixed-reality applications.", "title": "" }, { "docid": "dc640115a55082961ad853e4cb7a3972", "text": "This paper presents a two-pole dual-band tunable bandpass filter (BPF) with independently controllable dual passbands based on a novel tunable dual-mode resonator. This resonator principally comprises a λ/2 resonator and two varactor diodes. One varactor is placed at the center of the resonator to determine the dominant even-mode resonant frequency; the other is installed between two ends of the resonator to control the dominant odd-mode resonant frequency. These two distinct odd- and even-mode resonances can be independently generated, and they are used to realize the two separated passbands as desired. Detailed discussion is carried on to provide a set of closed-form design equations for determination of all of the elements involved in this tunable filter, inclusive of capacitively loaded quarter-wavelength or λ/2 resonators, external quality factor, and coupling coefficient. Finally, a prototype tunable dual-band filter is fabricated and measured. Measured and simulated results are found in good agreement with each other. The results show that the first passband can be tuned in a frequency range from 0.77 to 1.00 GHz with the 3-dB fractional-bandwidth of 20.3%-24.7%, whereas the second passband varies from 1.57 to 2.00 GHz with the 3-dB absolute-bandwidth of 120 ± 8 MHz.", "title": "" }, { "docid": "ff8909a9a2b7317a4e0a7af457c0cb45", "text": "Much of the world's knowledge is recorded in natural language text, but making effective use of it in this form poses a major challenge. Information extraction converts this knowledge to a structured form suitable for computer manipulation, opening up many possibilities for using it. In this review, the author describes the processing pipeline of information extraction, how the pipeline components are trained, and how this training can be made more efficient. He also describes some of the challenges that must be addressed for information extraction to become a more widely used technology.", "title": "" } ]
scidocsrr
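
Each record above follows the same layout: a query_id, a free-text query, a list of positive_passages, a list of negative_passages (each passage carrying a docid, text, and title), and a subset tag. The sketch below is one minimal way such records could be flattened into (query, passage, label) pairs for training or evaluating a retrieval model; the JSON-lines file name records.jsonl and the one-record-per-line storage format are illustrative assumptions, not something stated by the dump itself.

```python
import json
from typing import Dict, Iterator, Tuple

def iter_pairs(path: str) -> Iterator[Tuple[str, str, int]]:
    """Yield (query, passage_text, label) pairs from a JSON-lines dump.

    Assumes one record per line with the fields seen above:
    query, positive_passages, negative_passages, where each passage
    has docid, text, and title. Label 1 = relevant, 0 = non-relevant.
    """
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            record: Dict = json.loads(line)
            query: str = record["query"]
            for passage in record.get("positive_passages", []):
                yield query, passage["text"], 1
            for passage in record.get("negative_passages", []):
                yield query, passage["text"], 0

if __name__ == "__main__":
    # Illustrative usage: count pairs per label in a hypothetical dump file.
    counts = {0: 0, 1: 0}
    for _query, _text, label in iter_pairs("records.jsonl"):
        counts[label] += 1
    print(counts)
```
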
2ab7f9daaa3b20e58585f7732a3662b8
Network text sentiment analysis method combining LDA text representation and GRU-CNN
[ { "docid": "e3bbff933acaf7d42f91a6a88b43ac13", "text": "The problem of extracting sentiments from text is a very complex task, in particular due to the significant amount of Natural Language Processing (NLP) required. This task becomes even more difficult when dealing with morphologically rich languages such as Modern Standard Arabic (MSA) and when processing brief, noisy texts such as “tweets” or “Facebook statuses”. This paper highlights key issues researchers are facing and innovative approaches that have been developed when performing subjectivity and sentiment analysis (SSA) on Arabic text in general and Arabic social media text in particular. A preprocessing phase to sentiment analysis is proposed and shown to noticeably improve the results of sentiment extraction from Arabic social media data.", "title": "" }, { "docid": "5ceb415b17cc36e9171ddc72a860ccc8", "text": "Word embeddings and convolutional neural networks (CNN) have attracted extensive attention in various classification tasks for Twitter, e.g. sentiment classification. However, the effect of the configuration used to generate the word embeddings on the classification performance has not been studied in the existing literature. In this paper, using a Twitter election classification task that aims to detect election-related tweets, we investigate the impact of the background dataset used to train the embedding models, as well as the parameters of the word embedding training process, namely the context window size, the dimensionality and the number of negative samples, on the attained classification performance. By comparing the classification results of word embedding models that have been trained using different background corpora (e.g. Wikipedia articles and Twitter microposts), we show that the background data should align with the Twitter classification dataset both in data type and time period to achieve significantly better performance compared to baselines such as SVM with TF-IDF. Moreover, by evaluating the results of word embedding models trained using various context window sizes and dimensionalities, we find that large context window and dimension sizes are preferable to improve the performance. However, the number of negative samples parameter does not significantly affect the performance of the CNN classifiers. Our experimental results also show that choosing the correct word embedding model for use with CNN leads to statistically significant improvements over various baselines such as random, SVM with TF-IDF and SVM with word embeddings. Finally, for out-of-vocabulary (OOV) words that are not available in the learned word embedding models, we show that a simple OOV strategy to randomly initialise the OOV words without any prior knowledge is sufficient to attain a good classification performance among the current OOV strategies (e.g. a random initialisation using statistics of the pre-trained word embedding models).", "title": "" }, { "docid": "5b2efe77d0d60fa1c33aa8f51ea38e1e", "text": "Abstractive Text Summarization (ATS), which is the task of constructing summary sentences by merging facts from different source sentences and condensing them into a shorter representation while preserving information content and overall meaning. It is very difficult and time consuming for human beings to manually summarize large documents of text. In this paper, we propose an LSTM-CNN based ATS framework (ATSDL) that can construct new sentences by exploring more fine-grained fragments than sentences, namely, semantic phrases. 
Different from existing abstraction based approaches, ATSDL is composed of two main stages, the first of which extracts phrases from source sentences and the second generates text summaries using deep learning. Experimental results on the datasets CNN and DailyMail show that our ATSDL framework outperforms the state-of-the-art models in terms of both semantics and syntactic structure, and achieves competitive results on manual linguistic quality evaluation.", "title": "" } ]
[ { "docid": "faf3967b2287b8bdfdf1ebc55bcd5910", "text": "As an essential step in many computer vision tasks, camera calibration has been studied extensively. In this paper, we propose a novel calibration technique that, based on geometric analysis, camera parameters can be estimated effectively and accurately from just one view of only five corresponding points. Our core contribution is the geometric analysis for deriving the basic equations to realize camera calibration from four coplanar corresponding points and a fifth noncoplanar one. The position, orientation, and focal length of a zooming camera can be directly estimated with unique solution. The estimated parameters are further optimized by the bundle adjustment technique. The proposed calibration method is examined and evaluated on both computer simulated data and real images. The experimental results confirm the validity of the proposed method that camera parameters can be estimated with sufficient accuracy using just five-point correspondences from a single image, even in the presence of image noise.", "title": "" }, { "docid": "cdd3c529e1f934839444f054ecc93319", "text": "Flow visualization has been a very attractive component of scientific visualization research for a long time. Usually very large multivariate datasets require processing. These datasets often consist of a large number of sample locations and several time steps. The steadily increasing performance of computers has recently become a driving factor for a reemergence in flow visualization research, especially in texture-based techniques. In this paper, dense, texture-based flow visualization techniques are discussed. This class of techniques attempts to provide a complete, dense representation of the flow field with high spatio-temporal coherency. An attempt of categorizing closely related solutions is incorporated and presented. Fundamentals are shortly addressed as well as advantages and disadvantages of the methods.", "title": "" }, { "docid": "3b903b284e6a7bfb54113242b1143ddc", "text": "Hypertension — the chronic elevation of blood pressure — is a major human health problem. In most cases, the root cause of the disease remains unknown, but there is mounting evidence that many forms of hypertension are initiated and maintained by an elevated sympathetic tone. This review examines how the sympathetic tone to cardiovascular organs is generated, and discusses how elevated sympathetic tone can contribute to hypertension.", "title": "" }, { "docid": "a220595aea41424065d4c60d60768ffa", "text": "Characteristics of high-voltage dual-metal-trench (DMT) SiC Schottky pinch-rectifiers are reported for the first time. At a reverse bias of 300 V, the reverse leakage current of the SiC DMT device is 75 times less than that of a planar device while the forward bias characteristics remain comparable to those of a planar device. In this work, 4H-SiC pinch-rectifiers have been fabricated using a small/large barrier height (Ti/Ni) DMT device structure. The DMT structure is specially designed to permit simple fabrication in SiC. The Ti Schottky contact metal serves as a self-aligned trench etch mask and only four basic fabrication steps are required.", "title": "" }, { "docid": "808340c3bba7bcac0f96fdd375bd3bba", "text": "Comic books of all cultures are an active research area as digitizing content for mobile and web is becoming more common. 
Past research on comics has largely concentrated on text extraction, panel segmentation and document analysis, while the utilisation of the extracted data has had less attention. In this paper we present a method to automatically determine the reading order of Japanese manga text bubbles using only text bubble position and image data. Our method classifies and orders page and text position information on three layers, which are hierarchically sorted to obtain the final ordering. The method is evaluated on a data set of 1769 manga pages with 14726 manually annotated text positions and correct ordering. Evaluation shows the method has over 95% transition accuracies and vastly outperforms a naive implementation.", "title": "" }, { "docid": "c36bfde4e2f1cd3a5d6d8c0bcb8806d8", "text": "A 20/20 vision in ophthalmology implies a perfect view of things that are in front of you. The term is also used to mean a perfect sight of the things to come. Here we focus on a speculative vision of the VLDB in the year 2020. This panel is the follow-up of the one I organised (with S. Navathe) at the Kyoto VLDB in 1986, with the title: \"Anyone for a VLDB in the Year 2000?\". In that panel, the members discussed the major advances made in the database area and conjectured on its future, following a concern of many researchers that the database area was running out of interesting research topics and therefore it might disappear into other research topics, such as software engineering, operating systems and distributed systems. That did not happen.", "title": "" }, { "docid": "78a6af6e87f82ac483b213f04b1ce405", "text": "Data deduplication is one of the important data compression techniques for eliminating duplicate copies of repeating data, and has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data, it has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in duplicate check besides the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.", "title": "" }, { "docid": "f2d27b79f1ac3809f7ea605203136760", "text": "The Internet of Things (IoT) is a fast-growing movement turning devices into always-connected smart devices through the use of communication technologies. This facilitates the creation of smart strategies allowing monitoring and optimization as well as many other new use cases for various sectors. Low Power Wide Area Networks (LPWANs) have enormous potential as they are suited for various IoT applications and each LPWAN technology has certain features, capabilities and limitations. One of these technologies, namely LoRa/LoRaWAN, has several promising features, and private and public LoRaWANs are increasing worldwide. Similarly, researchers are also starting to study the potential of LoRa and LoRaWANs. 
This paper examines the work that has already been done and identifies flaws and strengths by performing a comparison of created testbeds. Limitations of LoRaWANs are also identified.", "title": "" }, { "docid": "9049805c56c9b7fc212fdb4c7f85dfe1", "text": "Intentions (6) Do all the important errands", "title": "" }, { "docid": "52ac110f343a39cc0dc21002bf8b59c1", "text": "Electronicvotingore-votinghasbeenusedinvaryingformssince1970swithfundamentalbenefits over paper-based systems such as increased efficiency and reduced errors. However, challenges remaintotheachievingofwidespreadadoptionofsuchsystems,especiallywithrespecttoimproving theirresilienceagainstpotentialfaults.Blockchainisadisruptivetechnologyofthecurrenteraand promises to improve theoverall resilienceof e-voting systems.This articlepresents aneffort to leveragebenefitsofblockchainsuchascryptographicfoundationsandtransparencytoachievean effectiveschemefore-voting.Theproposedschemeconformstothefundamentalrequirementsfor e-votingschemesandachievesend-to-endverifiability.Thearticlepresentsdetailsoftheproposed e-votingschemealongwithitsimplementationusingMultichainplatform.Thearticlealsopresents anin-depthevaluationoftheschemewhichsuccessfullydemonstratesitseffectivenesstoachievean end-to-endverifiablee-votingscheme. KEywoRDS Blockchain, E-Government, Electronic Voting, E-Voting, Verifiable Voting", "title": "" }, { "docid": "213daea0f909e9731aa77e001c447654", "text": "In the wake of a polarizing election, social media is laden with hateful content. To address various limitations of supervised hate speech classification methods including corpus bias and huge cost of annotation, we propose a weakly supervised twopath bootstrapping approach for an online hate speech detection model leveraging large-scale unlabeled data. This system significantly outperforms hate speech detection systems that are trained in a supervised manner using manually annotated data. Applying this model on a large quantity of tweets collected before, after, and on election day reveals motivations and patterns of inflammatory language.", "title": "" }, { "docid": "9a2c168b09c89a2f7edc8b659db4d1a6", "text": "Theintegration of information of different kinds, such asspatial and alphanumeric, at different levels of detail is a challenge. While a solution is not reached, it is widely recognized that the need to integrate information is so pressing that it does not matter if detail is lost, as long as integration is achieved. This paper shows the potential for extraction of different levels of information, within the framework of ontology-driven geographic information systems.", "title": "" }, { "docid": "0ff46a0a31e180e3bc2085cfbefa36c4", "text": "In this communication, we present a simple yet unconventional wideband resonant cavity antenna (RCA) with high gain and large gain-bandwidth product (GBP). The bandwidth enhancement of the RCA is approached by using a second-order height resonant cavity comprising a dielectric slab of high permittivity as the partially reflective surface and a ground plane with spherically modified geometry. It is shown that by imitating an open resonator structure, the spherically modified cavity can support multiple quasi-Laguerre–Gaussian beam modes and hence, dramatically improve the operation bandwidth of the RCA. A RCA with large GBP is designed, simulated, and then verified through measurement. Measured results show a bandwidth of 25% with a peak gain of 17.7 dBi. 
The proposed low-cost and easy-to-fabricate method shows its effectiveness in increasing the bandwidth for the RCA.", "title": "" }, { "docid": "00d76380bcc967a5b7eee4c8903cedf1", "text": "This paper demonstrates models that were designed and implemented to simulate the slotted ALOHA multiple access computer network protocol. The models are spreadsheet-based simulating e-forms that were designed for students' use in college level data communication and networking courses. Specifically, three models for simulating this protocol are fully implemented using spreadsheets. The features of these models are simplicity and quickness of implementation compared with other implementation techniques. These models assisted instructors in developing educational objects that in turn will help students better understand and explore the scientific concepts related to computer protocols with the aid of visual and interactive spreadsheet-based e-forms. Moreover, advanced spreadsheet techniques such as imagery integration, hyperlinks, conditional structures, conditional formats, and chart insertion were exploited in these models to simulate scientific notions that are taught to undergraduate students. The model design technique is characterized by simplicity, flexibility, and affordability. The technique can be applied and used in many disciplines of education, business, science, and technology. Generally, the developed computational e-forms can be used by instructors to illustrate topics in attractive fashions. In addition, students and learners can use the developed educational objects without instructor supervision in self-education or e-learning environments.", "title": "" }, { "docid": "a4a56e0647849c22b48e7e5dc3f3049b", "text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound source localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beamforming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sound directions on the move and estimates sound source positions using triangulation. Assuming the movement of sound sources, the system sets a time limit and uses only the last few seconds of data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time-limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process.", "title": "" }, { "docid": "2793f528a9b29345b1ee8ce1202933e3", "text": "Neural networks are prevalent in today's NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up the training of neural networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! 
can be an important tool for training complex NLP architectures.", "title": "" }, { "docid": "d3ebffaa5d7e2222bbf5b490debbce3f", "text": "Presenting visual feedback for image-guided surgery on a monitor requires the surgeon to perform time-consuming comparisons and diversion of sight and attention away from the patient. Deficiencies in previously developed augmented reality systems for image-guided surgery have, however, prevented the general acceptance of any one technique as a viable alternative to monitor displays. This work presents an evaluation of the feasibility and versatility of a novel augmented reality approach for the visualisation of surgical planning and navigation data. The approach, which utilises a portable image overlay device, was evaluated during integration into existing surgical navigation systems and during application within simulated navigated surgery scenarios. A range of anatomical models, surgical planning data and guidance information taken from liver surgery, cranio-maxillofacial surgery, orthopaedic surgery and biopsy were displayed on patient-specific phantoms, directly on to the patient’s skin and on to cadaver tissue. The feasibility of employing the proposed augmented reality visualisation approach in each of the four tested clinical applications was qualitatively assessed for usability, visibility, workspace, line of sight and obtrusiveness. The visualisation approach was found to assist in spatial understanding and reduced the need for sight diversion throughout the simulated surgical procedures. The approach enabled structures to be identified and targeted quickly and intuitively. All validated augmented reality scenes were easily visible and were implemented with minimal overhead. The device showed sufficient workspace for each of the presented applications, and the approach was minimally intrusiveness to the surgical scene. The presented visualisation approach proved to be versatile and applicable to a range of image-guided surgery applications, overcoming many of the deficiencies of previously described AR approaches. The approach presents an initial step towards a widely accepted alternative to monitor displays for the visualisation of surgical navigation data.", "title": "" }, { "docid": "624d645054e730855eed9001e4c4bbc4", "text": "In this paper, we argue that some tasks (e.g., meeting support) require more flexible hypermedia systems and we describe a prototype hypermedia system, DOLPHIN, that implements more flexibility. As part of the argument, we present a theoretical design space for information structuring systems and locate existing hypertext systems within it. The dimensions of the space highlight a system's internal representation of structure and the user's actions in creating structure. Second, we describe an empirically derived range of activities connected to conducting group meetings, including the pre- and post-preparation phases, and argue that hyptetext systems need to be more flexible in order to support this range of activities. Finally, we describe a hypermedia prototype, DOLPHIN, which implements this kind of flexible support for meetings. 
DOLPHIN supports different degrees of formality (e.g., handwriting and sketches as well as typed nodes and links are supported), coexistence of different structures (e.g., handwriting and nodes can exist on the same page) and mutual transformations between them (e.g., handwriting can be turned into nodes and vice versa).", "title": "" }, { "docid": "45c19ce0417a5f873184dc72eb107cea", "text": "Common Information Model (CIM) is emerging as a standard for information modelling for power control centers. While, IEC 61850 by International Electrotechnical Commission (IEC) is emerging as a standard for achieving interoperability and automation at the substation level. In future, once these two standards are well adopted, the issue of integration of these standards becomes imminent. Some efforts reported towards the integration of these standards have been surveyed. This paper describes a possible approach for the integration of IEC 61850 and CIM standards based on mapping between the representation of elements of these two standards. This enables seamless data transfer from one standard to the other. Mapping between the objects of IEC 61850 and CIM standards both in the static and dynamic models is discussed. A CIM based topology processing application is used to demonstrate the design of the data transfer between the standards. The scope and status of implementation of CIM in the Indian power sector is briefed.", "title": "" }, { "docid": "1af1ab4da0fe4368b1ad97801c4eb015", "text": "Standard approaches to Chinese word segmentation treat the problem as a tagging task, assigning labels to the characters in the sequence indicating whether the character marks a word boundary. Discriminatively trained models based on local character features are used to make the tagging decisions, with Viterbi decoding finding the highest scoring segmentation. In this paper we propose an alternative, word-based segmentor, which uses features based on complete words and word sequences. The generalized perceptron algorithm is used for discriminative training, and we use a beamsearch decoder. Closed tests on the first and secondSIGHAN bakeoffs show that our system is competitive with the best in the literature, achieving the highest reported F-scores for a number of corpora.", "title": "" } ]
scidocsrr
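
Since every record pairs a single query with both relevant and non-relevant passages, one record is already enough to sanity-check a ranking function. The following sketch scores a record's passages against its query with a plain TF-IDF cosine-similarity baseline and reports the reciprocal rank of the best-placed positive; scikit-learn is assumed to be installed, and the tiny inline record is a made-up stand-in for a real entry such as the ones above, not data taken from the dump.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def reciprocal_rank(record: dict) -> float:
    """Rank a record's positives and negatives against its query and
    return 1/rank of the highest-ranked positive (0.0 if none rank)."""
    passages = record["positive_passages"] + record["negative_passages"]
    labels = [1] * len(record["positive_passages"]) + [0] * len(record["negative_passages"])
    texts = [p["text"] for p in passages]

    # Fit TF-IDF on the query plus its candidate passages only; this is a
    # lightweight baseline, not the neural models discussed in the passages.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([record["query"]] + texts)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    for rank, (_score, label) in enumerate(ranked, start=1):
        if label == 1:
            return 1.0 / rank
    return 0.0

if __name__ == "__main__":
    # Toy record mirroring the structure of the dump, with invented passages.
    toy = {
        "query": "grammar induction with neural language models",
        "positive_passages": [
            {"docid": "p1", "text": "We induce syntactic structure with a neural language model."}
        ],
        "negative_passages": [
            {"docid": "n1", "text": "A study of SiC Schottky rectifier reverse leakage current."},
            {"docid": "n2", "text": "Camera calibration from five corresponding points."},
        ],
    }
    print(reciprocal_rank(toy))
```
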
d239f26bc34128b46d7ea5b056ea72a8
Grammar Induction with Neural Language Models: An Unusual Replication
[ { "docid": "8695757545e44358fd63f06936335903", "text": "We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.", "title": "" }, { "docid": "f2911f66107de4778dbc9d0b4c290038", "text": "We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different than conventional English syntactic structures.", "title": "" } ]
[ { "docid": "a0e5c8945212e8cde979b4c5decb71d0", "text": "Cybercrime is a pervasive threat for today's Internet-dependent society. While the real extent and economic impact is hard to quantify, scientists and officials agree that cybercrime is a huge and still growing problem. A substantial fraction of cybercrime's overall costs to society can be traced to indirect opportunity costs, resulting from unused online services. This paper presents a parsimonious model that builds on technology acceptance research and insights from criminology to identify factors that reduce Internet users' intention to use online services. We hypothesize that avoidance of online banking, online shopping and online social networking is increased by cybercrime victimization and media reports. The effects are mediated by the perceived risk of cybercrime and moderated by the user's confidence online. We test our hypotheses using a structural equation modeling analysis of a representative pan-European sample. Our empirical results confirm the negative impact of perceived risk of cybercrime on the use of all three online service categories and support the role of cybercrime experience as an antecedent of perceived risk of cybercrime. We further show that more confident Internet users perceive less cybercriminal risk and are more likely to use online banking and online shopping, which highlights the importance of consumer education.", "title": "" }, { "docid": "bdf22b73549c774c4c42c48998da00f8", "text": "One of the key issues in practical speech processing is to achieve robust voice activity detection (VAD) against the background noise. Most of the statistical model-based approaches have tried to employ the Gaussian assumption in the discrete Fourier transform (DFT) domain, which, however, deviates from the real observation. In this paper, we propose a class of VAD algorithms based on several statistical models. In addition to the Gaussian model, we also incorporate the complex Laplacian and Gamma probability density functions to our analysis of statistical properties. With a goodness-of-fit tests, we analyze the statistical properties of the DFT spectra of the noisy speech under various noise conditions. Based on the statistical analysis, the likelihood ratio test under the given statistical models is established for the purpose of VAD. Since the statistical characteristics of the speech signal are differently affected by the noise types and levels, to cope with the time-varying environments, our approach is aimed at finding adaptively an appropriate statistical model in an online fashion. The performance of the proposed VAD approaches in both the stationary and nonstationary noise environments is evaluated with the aid of an objective measure.", "title": "" }, { "docid": "9747be055df9acedfdfe817eb7e1e06e", "text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. 
One of our algorithms produces the best scores.", "title": "" }, { "docid": "d5201bbe0f0de8008913cf2a16917036", "text": "Mobile learning provides unique learning experiences for learners in both formal and informal environments, supporting various pedagogies with the unique characteristics that are afforded by mobile technology. Mobile learning, as a growing topic of interest, brings challenges of design for teachers and course designers alike. Current research on mobile learning has covered various aspects such as personalization, context sensitivity, ubiquity and pedagogy. Existing theories and findings are valuable to the understanding of mobile learning; however, they are fragmented and separate, and need to be understood within the broader mobile learning paradigm. This paper unifies existing theories into a method for mobile learning design that can be generalized across mobile learning applications. This method develops from a strategy – seeking objectives, identifying the approaches to learning and the context in which the course will exist, to guide the content, delivery and structure of the course towards a successful implementation that is evaluated against the initial objectives set out.", "title": "" }, { "docid": "8fcc9f13f34b03d68f59409b2e3b007a", "text": "Despite defensive advances, malicious software (malware) remains an ever-present cyber-security threat. Cloud environments are far from malware immune, in that: i) they innately support the execution of remotely supplied code, and ii) escaping their virtual machine (VM) confines has proven relatively easy to achieve in practice. The growing interest in clouds by industries and governments is also creating a core need to be able to formally address cloud security and privacy issues. VM introspection provides one of the core cyber-security tools for analyzing the run-time behaviors of code. Traditionally, introspection approaches have required close integration with the underlying hypervisors and substantial re-engineering when OS updates and patches are applied. Such heavy-weight introspection techniques, therefore, are too invasive to fit well within modern commercial clouds. Instead, lighter-weight introspection techniques are required that provide the same levels of within-VM observability but without the tight hypervisor and OS patch-level integration. This work introduces Maitland as a prototype proof-of-concept implementation of a lighter-weight introspection tool, which exploits paravirtualization to meet these end-goals. The work assesses Maitland's performance, highlights its use to perform packer-independent malware detection, and assesses whether, with further optimizations, Maitland could provide a viable approach for introspection in commercial clouds.", "title": "" }, { "docid": "4776f37d50709362b6173de58f6badd4", "text": "Current object recognition systems aim at recognizing numerous object classes under limited supervision conditions. This paper provides a benchmark for evaluating progress on this fundamental task. Several methods have recently been proposed to utilize the commonalities between object classes in order to improve generalization accuracy. Such methods can be termed interclass transfer techniques. However, it is currently difficult to assess which of the proposed methods maximally utilizes the shared structure of related classes. 
In order to facilitate the development, as well as the assessment of methods for dealing with multiple related classes, a new dataset including images of several hundred mammal classes, is provided, together with preliminary results of its use. The images in this dataset are organized into five levels of variability, and their labels include information on the objects’ identity, location and pose. From this dataset, a classification benchmark has been derived, requiring fine distinctions between 72 mammal classes. It is then demonstrated that a recognition method which is highly successful on the Caltech101, attains limited accuracy on the current benchmark (36.5%). Since this method does not utilize the shared structure between classes, the question remains as to whether interclass transfer methods can increase the accuracy to the level of human performance (90%). We suggest that a labeled benchmark of the type provided, containing a large number of related classes is crucial for the development and evaluation of classification methods which make efficient use of interclass transfer.", "title": "" }, { "docid": "9eabecdc7c013099c0bcb266b43fa0dc", "text": "Aging influences how a person is perceived on multiple dimensions (e.g., physical power). Here we examined how facial structure informs these evolving social perceptions. Recent work examining young adults' faces has revealed the impact of the facial width-to-height ratio (fWHR) on perceived traits, such that individuals with taller, thinner faces are perceived to be less aggressive, less physically powerful, and friendlier. These perceptions are similar to those stereotypically associated with older adults. Examining whether fWHR might contribute to these changing perceptions over the life span, we found that age provides a shifting context through which fWHR differentially impacts aging-related social perceptions (Study 1). In addition, archival analyses (Study 2) established that fWHR decreases across age, and a subsequent study found that fWHR mediated the relationship between target age and multiple aging-related perceptions (Study 3). The findings provide evidence that fWHR decreases across age and influences stereotypical perceptions that change with age.", "title": "" }, { "docid": "47df1bd26f99313cfcf82430cb98d442", "text": "To manage supply chain efficiently, e-business organizations need to understand their sales effectively. Previous research has shown that product review plays an important role in influencing sales performance, especially review volume and rating. However, limited attention has been paid to understand how other factors moderate the effect of product review on online sales. This study aims to confirm the importance of review volume and rating on improving sales performance, and further examine the moderating roles of product category, answered questions, discount and review usefulness in such relationships. By analyzing 2939 records of data extracted from Amazon.com using a big data architecture, it is found that review volume and rating have stronger influence on sales rank for search product than for experience product. Also, review usefulness significantly moderates the effects of review volume and rating on product sales rank. In addition, the relationship between review volume and sales rank is significantly moderated by both answered questions and discount. However, answered questions and discount do not have significant moderation effect on the relationship between review rating and sales rank. 
The findings expand previous literature by confirming important interactions between customer review features and other factors, and the findings provide practical guidelines to manage e-businesses. This study also explains a big data architecture and illustrates the use of big data technologies in testing theoretical", "title": "" }, { "docid": "c4bd2667b2e105219e6a117838dd870d", "text": "Written contracts are a fundamental framework for commercial and cooperative transactions and relationships. Limited research has been published on the application of machine learning and natural language processing (NLP) to contracts. In this paper we report the classification of components of contract texts using machine learning and hand-coded methods. Authors studying a range of domains have found that combining machine learning and rule based approaches increases accuracy of machine learning. We find similar results which suggest the utility of considering leveraging hand coded classification rules for machine learning. We attained an average accuracy of 83.48% on a multiclass labelling task on 20 contracts combining machine learning and rule based approaches, increasing performance over machine learning alone.", "title": "" }, { "docid": "e4a3065209c9dde50267358cbe6829b7", "text": "OBJECTIVES\nWith the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents.\n\n\nMETHODS\nThis paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain.\n\n\nRESULTS\nText mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail.\n\n\nCONCLUSIONS\nText mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.", "title": "" }, { "docid": "280672ad5473e061269114d0d11acc90", "text": "With personalization, consumers can choose from various product attributes and a customized product is assembled based on their preferences. Marketers often offer personalization on websites. This paper investigates consumer purchase intentions toward personalized products in an online selling situation. The research builds and tests three hypotheses: (1) intention to purchase personalized products will be affected by individualism, uncertainty avoidance, power distance, and masculinity dimensions of a national culture; (2) consumers will be more likely to buy personalized search products than experience products; and (3) intention to buy a personalized product will not be influenced by price premiums up to some level. Results indicate that individualism is the only culture dimension to have a significant effect on purchase intention. Product type and individualism by price interaction also have a significant effect, whereas price does not. Major findings and implications are discussed. 
", "title": "" }, { "docid": "d87f336cc82cbd29df1f04095d98a7fb", "text": "The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which “when a measure becomes a target, it ceases to be a good measure.” In this study, we analyzed over 120 million papers to examine how the academic publishing world has evolved over the last century. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of over 2600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department. Academic publishing has changed considerably; now we need to reconsider how we measure success.", "title": "" }, { "docid": "5c0f2bcde310b7b76ed2ca282fde9276", "text": "With the increasing prevalence of Alzheimer's disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer's disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. 
We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy.", "title": "" }, { "docid": "ac1d1bf198a178cb5655768392c3d224", "text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.", "title": "" }, { "docid": "e8f7006c9235e04f16cfeeb9d3c4f264", "text": "Widespread deployment of biometric systems supporting consumer transactions is starting to occur. Smart consumer devices, such as tablets and phones, have the potential to act as biometric readers authenticating user transactions. However, the use of these devices in uncontrolled environments is highly susceptible to replay attacks, where these biometric data are captured and replayed at a later time. Current approaches to counter replay attacks in this context are inadequate. In order to show this, we demonstrate a simple replay attack that is 100% effective against a recent state-of-the-art face recognition system; this system was specifically designed to robustly distinguish between live people and spoofing attempts, such as photographs. This paper proposes an approach to counter replay attacks for face recognition on smart consumer devices using a noninvasive challenge and response technique. The image on the screen creates the challenge, and the dynamic reflection from the person's face as they look at the screen forms the response. The sequence of screen images and their associated reflections digitally watermarks the video. By extracting the features from the reflection region, it is possible to determine if the reflection matches the sequence of images that were displayed on the screen. Experiments indicate that the face reflection sequences can be classified under ideal conditions with a high degree of confidence. These encouraging results may pave the way for further studies in the use of video analysis for defeating biometric replay attacks on consumer devices.", "title": "" }, { "docid": "86929d5c2c20b70c7d2529abf4489381", "text": "Integration of mm-wave multiple-antenna systems on silicon-based processes enables complex, low-cost systems for high-frequency communication and sensing applications. In this paper, the transmitter and LO-path phase-shifting sections of the first fully integrated 77-GHz phased-array transceiver are presented. 
The SiGe transceiver utilizes a local LO-path phase-shifting architecture to achieve beam steering and includes four transmit and receive elements, along with the LO frequency generation and distribution circuitry. The local LO-path phase-shifting scheme enables a robust distribution network that scales well with increasing frequency and/or number of elements while providing high-resolution phase shifts. Each element of the heterodyne transmitter generates +12.5 dBm of output power at 77 GHz with a bandwidth of 2.5 GHz leading to a 4-element effective isotropic radiated power (EIRP) of 24.5 dBm. Each on-chip PA has a maximum saturated power of +17.5 dBm at 77 GHz. The phased-array performance is measured using an internal test option and achieves 12-dB peak-to-null ratio with two transmit and receive elements active", "title": "" }, { "docid": "e3dc44074fe921f4d42135a7e05bf051", "text": "This paper presents a 60 GHz antenna structure built on glass and flip-chipped on a ceramic module. A single antenna and a two antenna array have been fabricated and demonstrated good performances. The single antenna shows a return loss below −10 dB and a gain of 6–7 dBi over a 7 GHz bandwidth. The array shows a gain of 7–8 dBi over a 3 GHz bandwidth.", "title": "" }, { "docid": "4d212f1613b826b97d8aee3ca2b98687", "text": "Undoubtedly, drought is one of the prime abiotic stresses in the world. Crop yield losses due to drought stress are considerable. Although a variety of approaches have been used to alleviate the problem of drought, plant breeding, either conventional breeding or genetic engineering, seems to be an efficient and economic means of tailoring crops to enable them to grow successfully in drought-prone environments. During the last century, although plant breeders have made ample progress through conventional breeding in developing drought tolerant lines/cultivars of some selected crops, the approach is, in fact, highly time-consuming and labor- and cost-intensive. Alternatively, marker-assisted breeding (MAB) is a more efficient approach, which identifies the usefulness of thousands of genomic regions of a crop under stress conditions, which was, in reality, previously not possible. Quantitative trait loci (QTL) for drought tolerance have been identified for a variety of traits in different crops. With the development of comprehensive molecular linkage maps, marker-assisted selection procedures have led to pyramiding desirable traits to achieve improvements in crop drought tolerance. However, the accuracy and preciseness in QTL identification are problematic. Furthermore, significant genetic x environment interaction, large number of genes encoding yield, and use of wrong mapping populations, have all harmed programs involved in mapping of QTL for high growth and yield under water limited conditions. Under such circumstances, a transgenic approach to the problem seems more convincing and practicable, and it is being pursued vigorously to improve qualitative and quantitative traits including tolerance to biotic and abiotic stresses in different crops. Rapid advance in knowledge on genomics and proteomics will certainly be beneficial to fine-tune the molecular breeding and transformation approaches so as to achieve a significant progress in crop improvement in future. Knowledge of gene regulation and signal transduction to generate drought tolerant crop cultivars/lines has been discussed in the present review. 
In addition, the advantages and disadvantages as well as future prospects of each breeding approach have also been discussed.", "title": "" }, { "docid": "f77a235f49cc8b0c037eb0c528b2c9dc", "text": "This paper describes the museum wearable: a wearable computer which orchestrates an audiovisual narration as a function of the visitor’s interests gathered from his/her physical path in the museum and length of stops. The wearable is made by a lightweight and small computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small, lightweight eye-piece display (often called private-eye) attached to conventional headphones. Using custom built infrared location sensors distributed in the museum space, and statistical mathematical modeling, the museum wearable builds a progressively refined user model and uses it to deliver a personalized audiovisual narration to the visitor. This device will enrich and personalize the museum visit as a visual and auditory storyteller that is able to adapt its story to the audience’s interests and guide the public through the path of the exhibit.", "title": "" }, { "docid": "d03adda25ea5415c241310f12bf50470", "text": "The classical approach to depth from defocus (DFD) uses lenses with circular apertures for image capturing. We show in this paper that the use of a circular aperture severely restricts the accuracy of DFD. We derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. This criterion is optimized using a genetic algorithm and gradient descent search to arrive at a pair of high resolution apertures. These two coded apertures are found to complement each other in the scene frequencies they preserve. This property enables them to not only recover depth with greater fidelity but also obtain a high quality all-focused image from the two captured images. Extensive simulations as well as experiments on a variety of real scenes demonstrate the benefits of using the coded apertures over conventional circular apertures.", "title": "" } ]
scidocsrr
9a998516d3e7651671af4fc8c6bdad5d
A Scenario-Based Analysis of Mobile Payment Acceptance
[ { "docid": "5affa179dd8b6742ac14fa5992c82575", "text": "It is commonly believed that good security improves trust, and that the perceptions of good security and trust will ultimately increase the use of electronic commerce. In fact, customers’ perceptions of the security of e-payment systems have become a major factor in the evolution of electronic commerce in markets. In this paper, we examine issues related to e-payment security from the viewpoint of customers. This study proposes a conceptual model that delineates the determinants of consumers’ perceived security and perceived trust, as well as the effects of perceived security and perceived trust on the use of epayment systems. To test the model, structural equation modeling is employed to analyze data collected from 219 respondents in Korea. This research provides a theoretical foundation for academics and also practical guidelines for service providers in dealing with the security aspects of e-payment systems. 2009 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "7d44a9227848baaf54b9bfb736727551", "text": "Introduction: The causal relation between tongue thrust swallowing or habit and development of anterior open bite continues to be made in clinical orthodontics yet studies suggest a lack of evidence to support a cause and effect. Treatment continues to be directed towards closing the anterior open bite frequently with surgical intervention to reposition the maxilla and mandible. This case report illustrates a highly successful non-surgical orthodontic treatment without extractions.", "title": "" }, { "docid": "1fc260f67567fc6b1ed08d8e4c26aa51", "text": "Electrochemical impedance spectroscopy (EIS) is a helpful tool to understand how a battery is behaving and how it degrades. One of the disadvantages is that it is typically an “off-line” process. This paper investigates an alternative method of looking at impedance spectroscopy of a battery system while it is on-line and operational by manipulating the switching pattern of the dc-dc converter to generate low frequency harmonics in conjunction with the normal high frequency switching pattern to determine impedance in real time. However, this adds extra ripple on the inductor which needs to be included in the design calculations. The paper describes the methodology and presents some experimental results in conjunction with EIS results to illustrate the concept.", "title": "" }, { "docid": "fea4f8d358afdee5aa9a57cdf19d63a0", "text": "Developers spend significant time reading and navigating code fragments spread across multiple locations. The file-based nature of contemporary IDEs makes it prohibitively difficult to create and maintain a simultaneous view of such fragments. We propose a novel user interface metaphor for code understanding based on collections of lightweight, editable fragments called bubbles, which form concurrently visible working sets. We present the results of a qualitative usability evaluation, and the results of a quantitative study which indicates Code Bubbles significantly improved code understanding time, while reducing navigation interactions over a widely-used IDE, for two controlled tasks.", "title": "" }, { "docid": "b01bc5df28e670c82d274892a407b0aa", "text": "We propose that many human behaviors can be accurately described as a set of dynamic models (e.g., Kalman filters) sequenced together by a Markov chain. We then use these dynamic Markov models to recognize human behaviors from sensory data and to predict human behaviors over a few seconds time. To test the power of this modeling approach, we report an experiment in which we were able to achieve 95 accuracy at predicting automobile drivers' subsequent actions from their initial preparatory movements.", "title": "" }, { "docid": "f56d5487c5f59d9b951841b993cbec07", "text": "We present Air+Touch, a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger 'high jump' between two touches to select a region of text, or drag and in-air 'pigtail' to copy text to the clipboard. 
Through an observational study, we devised a basic taxonomy of Air+Touch interactions, based on whether the in-air component occurs before, between or after touches. To illustrate the potential of our approach, we built four applications that showcase seven exemplar Air+Touch interactions we created.", "title": "" }, { "docid": "896d9382066abc722f3d8a1793f0a67d", "text": "In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text. In particular, we propose to learn neural rewards to model cross-sentence ordering as a means to approximate desired discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with crossentropy or with reinforcement learning with commonly used scores as rewards.", "title": "" }, { "docid": "b700f3c79d55a2251c84c227104e9eee", "text": "Recurrent neural network language models (RNNLMs) are becoming increasingly popular for a range of applications inc luding speech recognition. However, an important issue that li mi s the quantity of data, and hence their possible application a reas, is the computational cost in training. A standard appro ach to handle this problem is to use class-based outputs, allowi ng systems to be trained on CPUs. This paper describes an alternative approach that allows RNNLMs to be efficiently trained on GPUs. This enables larger quantities of data to be used, an d networks with an unclustered, full output layer to be traine d. To improve efficiency on GPUs, multiple sentences are “spliced ” together for each mini-batch or “bunch” in training. On a lar ge vocabulary conversational telephone speech recognition t ask, the training time was reduced by a factor of 27 over the standard CPU-based RNNLM toolkit. The use of an unclustered, full output layer also improves perplexity and recognition performance over class-based RNNLMs.", "title": "" }, { "docid": "f4f9a79bf6dc7afac056e9615c25c7f4", "text": "Multi-scanner Antivirus systems provide insightful information on the nature of a suspect application; however there is o‰en a lack of consensus and consistency between di‚erent Anti-Virus engines. In this article, we analyze more than 250 thousand malware signatures generated by 61 di‚erent Anti-Virus engines a‰er analyzing 82 thousand di‚erent Android malware applications. We identify 41 di‚erent malware classes grouped into three major categories, namely Adware, Harmful Œreats and Unknown or Generic signatures. We further investigate the relationships between such 41 classes using community detection algorithms from graph theory to identify similarities between them; and we €nally propose a Structure Equation Model to identify which Anti-Virus engines are more powerful at detecting each macro-category. As an application, we show how such models can help in identifying whether Unknown malware applications are more likely to be of Harmful or Adware type.", "title": "" }, { "docid": "568bc5272373a4e3fd38304f2c381e0f", "text": "With the growing complexity of web applications, identifying web interfaces that can be used for testing such applications has become increasingly challenging. Many techniques that work effectively when applied to simple web applications are insufficient when used on modern, dynamic web applications, and may ultimately result in inadequate testing of the applications' functionality. 
To address this issue, we present a technique for automatically discovering web application interfaces based on a novel static analysis algorithm. We also report the results of an empirical evaluation in which we compare our technique against a traditional approach. The results of the comparison show that our technique can (1) discover a higher number of interfaces and (2) help generate test inputs that achieve higher coverage.", "title": "" }, { "docid": "aa948c6380a54c8b5a24b062f854002c", "text": "This work focuses on the study of constant-time implementations; giving formal guarantees that such implementations are protected against cache-based timing attacks in virtualized platforms where their supporting operating system executes concurrently with other, potentially malicious, operating systems. We develop a model of virtualization that accounts for virtual addresses, physical and machine addresses, memory mappings, page tables, translation lookaside buffer, and cache; and provides an operational semantics for a representative set of actions, including reads and writes, allocation and deallocation, context switching, and hypercalls. We prove a non-interference result on the model that shows that an adversary cannot discover secret information using cache side-channels, from a constant-time victim.", "title": "" }, { "docid": "0d9cd7cbb37c410b1255f4f600c77c43", "text": "We present a nonparametric Bayesian approach to inverse rei nforcement learning (IRL) for multiple reward functions. Most previous IRL algo rithms assume that the behaviour data is obtained from an agent who is optimizin g a single reward function, but this assumption is hard to guarantee in practi ce. Our approach is based on integrating the Dirichlet process mixture model in to Bayesian IRL. We provide an efficient Metropolis-Hastings sampling algorit hm utilizing the gradient of the posterior to estimate the underlying reward function s, and demonstrate that our approach outperforms previous ones via experiments on a number of problem domains.", "title": "" }, { "docid": "ef92244350e267d3b5b9251d496e0ee2", "text": "A review of recent advances in power wafer level electronic packaging is presented based on the development of power device integration. The paper covers in more detail how advances in both semiconductor content and power advanced wafer level package design and materials have co-enabled significant advances in power device capability during recent years. Extrapolating the same trends in representative areas for the remainder of the decade serves to highlight where further improvement in materials and techniques can drive continued enhancements in usability, efficiency, reliability and overall cost of power semiconductor solutions. Along with next generation wafer level power packaging development, the role of modeling is a key to assure successful package design. An overview of the power package modeling is presented. Challenges of wafer level power semiconductor packaging and modeling in both next generation design and assembly processes are presented and discussed. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "444fcf57aeaa2bba4f23c6b97c2b4849", "text": "This paper reports on two studies that investigated the relationship between the Big Five personality traits, self-estimates of intelligence (SEI), and scores on two psychometrically validated intelligence tests. 
In study 1 a total of 100 participants completed the NEO-PI-R, the Wonderlic Personnel Test and the Baddeley Reasoning Test, and estimated their own intelligence on a normal distribution curve. Multiple regression showed that psychometric intelligence was predicted by Conscientiousness and SEI, while SEI was predicted by gender, Neuroticism (notably anxiety) and Agreeableness (notably modesty). Personality was a better predictor of SEI than of psychometric intelligence itself. Study 2 attempted to explore the relationship between SEI and psychometric intelligence. A total of 130 participants completed the NEO-PI-R, the Baddeley Reasoning Test, and the S & M Spatial intelligence test. In addition, SEI and participants' conceptions of intelligence were also examined. In combination with gender and previous IQ test experience, these variables were found to predict about 11% of the variance in SEI. SEI was the only significant predictor of psychometrically measured intelligence. Inconsistencies between results of the two studies, theoretical and applied implications, and limitations of this work are discussed.", "title": "" }, { "docid": "19b16abf5ec7efe971008291f38de4d4", "text": "Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. Most previous methods just focus on solving the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. To address the first problem, we learn projection matrices to map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, the ℓ2-norm penalties are imposed on the projection matrices separately to solve the second problem, which selects relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data, which preserves the inter-modality and intra-modality similarity relationships. An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art subspace approaches.", "title": "" }, { "docid": "f249a6089a789e52eeadc8ae16213bc1", "text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However, we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparably to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. 
This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.", "title": "" }, { "docid": "dd0a65adcd94083c693cee00995c95cc", "text": "In this paper, a trajectory optimization algorithm is proposed, which formulates the lateral vehicle guidance task along a reference curve as a constrained optimal control problem. The optimization problem is solved by means of a linear time-varying model predictive control scheme that generates trajectories for path following under consideration of various time-varying system constraints in a receding horizon fashion. Formulating the system dynamics linearly in combination with a quadratic cost function has two great advantages. First, the system constraints can be set up not only to achieve collision avoidance with both static and dynamic obstacles, but also aspects of human driving behavior can be considered. Second, the optimization problem can be solved very efficiently, such that the algorithm can be run with little computational effort. In addition, due to an elaborate problem formulation, reference curves with discontinuous, high curvatures will be effortlessly smoothed out by the algorithm. This makes the proposed algorithm applicable to different traffic scenarios, such as parking or highway driving. Experimental results are presented for different real-world scenarios to demonstrate the algorithm’s abilities.", "title": "" }, { "docid": "ebb9d9f255b49f50d07abfb3d61a6a57", "text": "We propose Nested LSTMs (NLSTM), a novel RNN architecture with multiple levels of memory. Nested LSTMs add depth to LSTMs via nesting as opposed to stacking. The value of a memory cell in an NLSTM is computed by an LSTM cell, which has its own inner memory cell. Specifically, instead of computing the value of the (outer) memory cell as c t = ft ct−1 + it gt, NLSTM memory cells use the concatenation (ft ct−1, it gt) as input to an inner LSTM (or NLSTM) memory cell, and set c t = h inner t . Nested LSTMs outperform both stacked and single-layer LSTMs with similar numbers of parameters in our experiments on various character-level language modeling tasks, and the inner memories of an LSTM learn longer term dependencies compared with the higher-level units of a stacked LSTM.", "title": "" }, { "docid": "6979dcf4f63c7c16d66242b66b9c6c57", "text": "A PubMed query run in June 2018 using the keyword 'blockchain' retrieved 40 indexed papers, a reflection of the growing interest in blockchain among the medical and healthcare research and practice communities. Blockchain's foundations of decentralisation, cryptographic security and immutability make it a strong contender in reshaping the healthcare landscape worldwide. Blockchain solutions are currently being explored for: (1) securing patient and provider identities; (2) managing pharmaceutical and medical device supply chains; (3) clinical research and data monetisation; (4) medical fraud detection; (5) public health surveillance; (6) enabling truly public and open geo-tagged data; (7) powering many Internet of Things-connected autonomous devices, wearables, drones and vehicles, via the distributed peer-to-peer apps they run, to deliver the full vision of smart healthy cities and regions; and (8) blockchain-enabled augmented reality in crisis mapping and recovery scenarios, including mechanisms for validating, crediting and rewarding crowdsourced geo-tagged data, among other emerging use cases. 
Geospatially-enabled blockchain solutions exist today that use a crypto-spatial coordinate system to add an immutable spatial context that regular blockchains lack. These geospatial blockchains do not just record an entry's specific time, but also require and validate its associated proof of location, allowing accurate spatiotemporal mapping of physical world events. Blockchain and distributed ledger technology face similar challenges as any other technology threatening to disintermediate legacy processes and commercial interests, namely the challenges of blockchain interoperability, security and privacy, as well as the need to find suitable and sustainable business models of implementation. Nevertheless, we expect blockchain technologies to get increasingly powerful and robust, as they become coupled with artificial intelligence (AI) in various real-word healthcare solutions involving AI-mediated data exchange on blockchains.", "title": "" }, { "docid": "46ac5e994ca0bf0c3ea5dd110810b682", "text": "The Geosciences and Geography are not just yet another application area for semantic technologies. The vast heterogeneity of the involved disciplines ranging from the natural sciences to the social sciences introduces new challenges in terms of interoperability. Moreover, the inherent spatial and temporal information components also require distinct semantic approaches. For these reasons, geospatial semantics, geo-ontologies, and semantic interoperability have been active research areas over the last 20 years. The geospatial semantics community has been among the early adopters of the Semantic Web, contributing methods, ontologies, use cases, and datasets. Today, geographic information is a crucial part of many central hubs on the Linked Data Web. In this editorial, we outline the research field of geospatial semantics, highlight major research directions and trends, and glance at future challenges. We hope that this text will be valuable for geoscientists interested in semantics research as well as knowledge engineers interested in spatiotemporal data. Introduction and Motivation While the Web has changed with the advent of the Social Web from mostly authoritative content towards increasing amounts of user generated information, it is essentially still about linked documents. These documents provide structure and context for the described data and easy their interpretation. In contrast, the evolving Data Web is about linking data, not documents. Such datasets are not bound to a specific document but can be easily combined and used outside of their original creation context. With a growth rate of millions of new facts encoded as RDF-triples per month, the Linked Data cloud allows users to answer complex queries spanning multiple, heterogeneous data sources from different scientific domains. However, this uncoupling of data from its creation context makes the interpretation of data challenging. Thus, research on semantic interoperability and ontologies is crucial to ensure consistency and meaningful results. Space and time are fundamental ordering principles to structure such data and provide an implicit context for their interpretation. Hence, it is not surprising that many linked datasets either contain spatiotemporal identifiers themselves or link out to such datasets, making them central hubs of the Linked Data cloud. Prominent examples include Geonames.org as well as the Linked Geo Data project, which provides a RDF serialization of Points Of Interest from Open Street Map [103]. 
Besides such Voluntary Geographic Information (VGI), governments", "title": "" }, { "docid": "1c005124e2014b1d2eaaa178eda3e4d0", "text": "BACKGROUND\nThere is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low-risk of bias.\n\n\nMETHODS\nInformation size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis.\n\n\nRESULTS\nWe devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate considering the required information size. D2 is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I2), which may underestimate the required information size. Thus, D2 and I2 are compared and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 >or= I2, for all meta-analyses.\n\n\nCONCLUSION\nWe conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects meta-analysis despite the choice of the between trial variance estimator that constitutes the model. Furthermore, D2 can readily adjust the required information size in any random-effects model meta-analysis.", "title": "" } ]
scidocsrr
731c519b7391d08f8dc9d147afc52b77
A dynamic self-structuring neural network model to combat phishing
[ { "docid": "f9cc9e1ddc0d1db56f362a1ef409274d", "text": "Phishing is increasing dramatically with the development of modern technologies and the global worldwide computer networks. This results in the loss of customer’s confidence in e-commerce and online banking, financial damages, and identity theft. Phishing is fraudulent effort aims to acquire sensitive information from users such as credit card credentials, and social security number. In this article, we propose a model for predicting phishing attacks based on Artificial Neural Network (ANN). A Feed Forward Neural Network trained by Back Propagation algorithm is developed to classify websites as phishing or legitimate. The suggested model shows high acceptance ability for noisy data, fault tolerance and high prediction accuracy with respect to false positive and false negative rates.", "title": "" } ]
[ { "docid": "a0d4d6c36cab8c5ed5be69bea1d8f302", "text": "In this paper, we propose a simple, fast decoding algorithm that fosters diversity in neural generation. The algorithm modifies the standard beam search algorithm by adding an intersibling ranking penalty, favoring choosing hypotheses from diverse parents. We evaluate the proposed model on the tasks of dialogue response generation, abstractive summarization and machine translation. We find that diverse decoding helps across all tasks, especially those for which reranking is needed. We further propose a variation that is capable of automatically adjusting its diversity decoding rates for different inputs using reinforcement learning (RL). We observe a further performance boost from this RL technique.1", "title": "" }, { "docid": "0e6c562a1760344ef59e40d7774b56fe", "text": "Sparsity is widely observed in convolutional neural networks by zeroing a large portion of both activations and weights without impairing the result. By keeping the data in a compressed-sparse format, the energy consumption could be considerably cut down due to less memory traffic. However, the wide SIMD-like MAC engine adopted in many CNN accelerators can not support the compressed input due to the data misalignment. In this work, a novel Dual Indexing Module (DIM) is proposed to efficiently handle the alignment issue where activations and weights are both kept in compressed-sparse format. The DIM is implemented in a representative SIMD-like CNN accelerator, and able to exploit both compressed-sparse activations and weights. The synthesis results with 40nm technology have shown that DIM can enhance up to 46% of energy consumption and 55.4% Energy-Delay-Product (EDP).", "title": "" }, { "docid": "4523358a96dbf48fd86a1098ffef5c7e", "text": "This paper proposes a new randomized strategy for adaptive MCMC using Bayesian optimization. This approach applies to nondifferentiable objective functions and trades off exploration and exploitation to reduce the number of potentially costly objective function evaluations. We demonstrate the strategy in the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters of the proposal mechanism automatically to ensure efficient mixing of the Markov chains.", "title": "" }, { "docid": "d043a086f143c713e4c4e74c38e3040c", "text": "Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments. Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction. Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets. Results: Post our novel data cleansing process; each of the data sets had between 6 to 90 percent less of their original number of recorded values. Conclusions: One: Researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Two: Defect prediction data sets could benefit from lower level code metrics in addition to those more commonly used, as these will help to distinguish modules, reducing the likelihood of repeated data points. Three: The bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings. 
This is mainly due to repeated data points potentially causing substantial amounts of training and testing data to be identical.", "title": "" }, { "docid": "f8d89f36dc1582c300184ccc161f153f", "text": "To build effective malware analysis techniques and to evaluate new detection tools, up-to-date datasets reflecting the current Android malware landscape are essential. For such datasets to be maximally useful, they need to contain reliable and complete information on malware’s behaviors and techniques used in the malicious activities. Such a dataset shall also provide a comprehensive coverage of a large number of types of malware. The Android Malware Genome created circa 2011 has been the only well-labeled and widely studied dataset the research community had easy access to. But not only is it outdated and no longer represents the current Android malware landscape, it also does not provide as detailed information on malware’s behaviors as needed for research. Thus it is urgent to create a high-quality dataset for Android malware. While existing information sources such as VirusTotal are useful, to obtain the accurate and detailed information for malware behaviors, deep manual analysis is indispensable. In this work we present our approach to preparing a large Android malware dataset for the research community. We leverage existing anti-virus scan results and automation techniques in categorizing our large dataset (containing 24,650 malware app samples) into 135 varieties (based on malware behavioral semantics) which belong to 71 malware families. For each variety, we select three samples as representatives, for a total of 405 malware samples, to conduct in-depth manual analysis. Based on the manual analysis result we generate detailed descriptions of each malware variety’s behaviors and include them in our dataset. We also report our observations on the current landscape of Android malware as depicted in the dataset. Furthermore, we present detailed documentation of the process used in creating the dataset, including the guidelines for the manual analysis. We make our Android malware dataset available to the research community.", "title": "" }, { "docid": "3f5b4c9d6da1a6e7949169e8613e6e03", "text": "This study set out to investigate in which type of media individuals are more likely to tell self-serving and other-oriented lies, and whether this varied according to the recipient of the lie. One hundred and fifty participants rated on a likert-point scale how likely they would tell a lie. Participants were more likely to tell self-serving lies to people not well-known to them. They were more likely to tell self-serving lies in email, followed by phone, and finally face-to-face. Participants were more likely to tell other-oriented lies to individuals they felt close to and this did not vary according to the type media. Participants were also more likely to tell harsh truths to people not well-known to them via email.", "title": "" }, { "docid": "591bafe64e0dcf42bece55c22c1f9164", "text": "Typically, AI researchers and roboticists try to realize intelligent behavior in machines by tuning parameters of a predefined structure (body plan and/or neural network architecture) using evolutionary or learning algorithms. Another but not unrelated longstanding property of these systems is their brittleness to slight aberrations, as highlighted by the growing deep learning literature on adversarial examples. 
Here we show robustness can be achieved by evolving the geometry of soft robots, their control systems, and how their material properties develop in response to one particular interoceptive stimulus (engineering stress) during their lifetimes. By doing so we realized robots that were equally fit but more robust to extreme material defects (such as might occur during fabrication or by damage thereafter) than robots that did not develop during their lifetimes, or developed in response to a different interoceptive stimulus (pressure). This suggests that the interplay between changes in the containing systems of agents (body plan and/or neural architecture) at different temporal scales (evolutionary and developmental) along different modalities (geometry, material properties, synaptic weights) and in response to different signals (interoceptive and external perception) all dictate those agents' abilities to evolve or learn capable and robust strategies.", "title": "" }, { "docid": "fe20c0bee35db1db85968b4d2793b83b", "text": "The Smule Ocarina is a wind instrument designed for the iPhone, fully leveraging its wide array of technologies: microphone input (for breath input), multitouch (for fingering), accelerometer, real-time sound synthesis, highperformance graphics, GPS/location, and persistent data connection. In this mobile musical artifact, the interactions of the ancient flute-like instrument are both preserved and transformed via breath-control and multitouch finger-holes, while the onboard global positioning and persistent data connection provide the opportunity to create a new social experience, allowing the users of Ocarina to listen to one another. In this way, Ocarina is also a type of social instrument that enables a different, perhaps even magical, sense of global connectivity.", "title": "" }, { "docid": "fb04b391bb680c1fb5e9dedd2e74562c", "text": "Modern network intrusion detection systems need to perform regular expression matching at line rate in order to detect the occurrence of critical patterns in packet payloads. While deterministic finite automata (DFAs) allow this operation to be performed in linear time, they may exhibit prohibitive memory requirements. In [9], Kumar et al. propose Delayed Input DFAs (D2FAs), which provide a trade-off between the memory requirements of the compressed DFA and the number of states visited for each character processed, which corresponds directly to the memory bandwidth required to evaluate regular expressions.\n In this paper we introduce a general compression technique that results in at most 2N state traversals when processing a string of length N. In comparison to the D2FA approach, our technique achieves comparable levels of compression, with lower provable bounds on memory bandwidth (or greater compression for a given bandwidth bound). Moreover, our proposed algorithm has lower complexity, is suitable for scenarios where a compressed DFA needs to be dynamically built or updated, and fosters locality in the traversal process. Finally, we also describe a novel alphabet reduction scheme for DFA-based structures that can yield further dramatic reductions in data structure size.", "title": "" }, { "docid": "bc7209b09edae3ca916be1560fb1d396", "text": "The prediction and diagnosis of Tuberculosis survivability has been a challenging research problem for many researchers. Since the early dates of the related research, much advancement has been recorded in several related fields. 
For instance, thanks to innovative biomedical technologies, better explanatory prognostic factors are being measured and recorded; thanks to low cost computer hardware and software technologies, high volume better quality data is being collected and stored automatically; and finally thanks to better analytical methods, those voluminous data is being processed effectively and efficiently. Tuberculosis is one of the leading diseases for all people in developed countries including India. It is the most common cause of death in human being. The high incidence of Tuberculosis in all people has increased significantly in the last years. In this paper we have discussed various data mining approaches that have been utilized for Tuberculosis diagnosis and prognosis. This study paper summarizes various review and technical articles on Tuberculosis diagnosis and prognosis also we focus on current research being carried out using the data mining techniques to enhance the Tuberculosis diagnosis and prognosis. Here, we took advantage of those available technological advancements to develop the best prediction model for Tuberculosis survivability.", "title": "" }, { "docid": "dcf24411ffed0d5bf2709e005f6db753", "text": "Dynamic Causal Modelling (DCM) is an approach first introduced for the analysis of functional magnetic resonance imaging (fMRI) to quantify effective connectivity between brain areas. Recently, this framework has been extended and established in the magneto/encephalography (M/EEG) domain. DCM for M/EEG entails the inversion a full spatiotemporal model of evoked responses, over multiple conditions. This model rests on a biophysical and neurobiological generative model for electrophysiological data. A generative model is a prescription of how data are generated. The inversion of a DCM provides conditional densities on the model parameters and, indeed on the model itself. These densities enable one to answer key questions about the underlying system. A DCM comprises two parts; one part describes the dynamics within and among neuronal sources, and the second describes how source dynamics generate data in the sensors, using the lead-field. The parameters of this spatiotemporal model are estimated using a single (iterative) Bayesian procedure. In this paper, we will motivate and describe the current DCM framework. Two examples show how the approach can be applied to M/EEG experiments.", "title": "" }, { "docid": "75952f3945628c15a66b7288e6c1d1a7", "text": "Most of the samples discovered are variations of known malicious programs and thus have similar structures, however, there is no method of malware classification that is completely effective. To address this issue, the approach proposed in this paper represents a malware in terms of a vector, in which each feature consists of the amount of APIs called from a Dynamic Link Library (DLL). To determine if this approach is useful to classify malware variants into the correct families, we employ Euclidean Distance and a Multilayer Perceptron with several learning algorithms. The experimental results are analyzed to determine which method works best with the approach. The experiments were conducted with a database that contains real samples of worms and trojans and show that is possible to classify malware variants using the number of functions imported per library. 
However, the accuracy varies depending on the method used for the classification.", "title": "" }, { "docid": "316c106ae8830dcf8a3cf64775f56ebe", "text": "Friendship is the cornerstone to build a social network. In online social networks, statistics show that the leading reason for user to create a new friendship is due to recommendation. Thus the accuracy of recommendation matters. In this paper, we propose a Bayesian Personalized Ranking Deep Neural Network (BayDNN) model for friend recommendation in social networks. With BayDNN, we achieve significant improvement on two public datasets: Epinions and Slashdot. For example, on Epinions dataset, BayDNN significantly outperforms the state-of-the-art algorithms, with a 5% improvement on NDCG over the best baseline.\n The advantages of the proposed BayDNN mainly come from its underlying convolutional neural network (CNN), which offers a mechanism to extract latent deep structural feature representations of the complicated network data, and a novel Bayesian personalized ranking idea, which precisely captures the users' personal bias based on the extracted deep features. To get good parameter estimation for the neural network, we present a fine-tuned pre-training strategy for the proposed BayDNN model based on Poisson and Bernoulli probabilistic models.", "title": "" }, { "docid": "16e2d7a9e4ee97b1b7fa7f3785d641a2", "text": "In the era of the fourth industrial revolution (Industry 4.0), big data has major impact on businesses, since the revolution of networks, platforms, people and digital technology have changed the determinants of firms’ innovation and competitiveness. An ongoing huge hype for big data has been gained from academics and professionals, since big data analytics leads to valuable knowledge and promotion of innovative activity of enterprises and organizations, transforming economies in local, national and international level. In that context, data science is defined as the collection of fundamental principles that promote information and knowledge gaining from data. The techniques and applications that are used help to analyze critical data to support organizations in understanding their environment and in taking better decisions on time. Nowadays, the tremendous increase of data through the Internet of Things (continuous increase of connected devices, sensors and smartphones) has contributed to the rise of a “data-driven” era, where big data analytics are used in every sector (agriculture, health, energy and infrastructure, economics and insurance, sports, food and transportation) and every world economy. The growing expansion of available data is a recognized trend worldwide, while valuable knowledge arising from the information come from data analysis processes. In that context, the bulk of organizations are collecting, storing and analyzing data for strategic business decisions leading to valuable knowledge. The ability to manage, analyze and act on data (“data-driven decision systems”) is very important to organizations and is characterized as a significant asset. The prospects of big data analytics are important and the benefits for data-driven organizations are K. Vassakis (✉) ⋅ E. Petrakis Department of Economics, University of Crete, Gallos Campus, Rethymno, Crete 74100, Greece e-mail: [email protected] E. Petrakis e-mail: [email protected] I. 
Kopanakis Department of Business Administration, Technological Educational Institute of Crete, Agios Nikolaos, Crete 72100, Greece e-mail: [email protected] © Springer International Publishing AG 2018 G. Skourletopoulos et al. (eds.), Mobile Big Data, Lecture Notes on Data Engineering and Communications Technologies 10, https://doi.org/10.1007/978-3-319-67925-9_1 3 significant determinants for competitiveness and innovation performance. However, there are considerable obstacles to adopt data-driven approach and get valuable knowledge through big data.", "title": "" }, { "docid": "e96cf46cc99b3eff60d32f3feb8afc47", "text": "We present an field programmable gate arrays (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release to the public domain the entire project. We hope that this will enable other researchers to easily replicate and compare their results to ours and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping. 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "36fbc5f485d44fd7c8726ac0df5648c0", "text": "We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting : Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment as long the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants. To achieve these guarantees we formalize and realize in the universal composition setting a suitable form of forward secure digital signatures and a new type of verifiable random function that maintains unpredictability under malicious key generation. Our security proof develops a general combinatorial framework for the analysis of semi-synchronous blockchains that may be of independent interest. We prove our protocol secure under standard cryptographic assumptions in the random oracle model.", "title": "" }, { "docid": "2fb3e787ee9a4afac71292151965ec5c", "text": "We propose the 3dSOBS+ algorithm, a newly designed approach for moving object detection based on a neural background model automatically generated by a self-organizing method. The algorithm is able to accurately handle scenes containing moving backgrounds, gradual illumination variations, and shadows cast by moving objects, and is robust against false detections for different types of videos taken with stationary cameras. Experimental results and comparisons conducted on the Background Models Challenge benchmark dataset demonstrate the improvements achieved by the proposed algorithm, that compares well with the state-of-the-art methods. 2013 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "2bc379517b4acfd0cb1257e056ca414d", "text": "Many studies of creative cognition with a neuroimaging component now exist; what do they say about where and how creativity arises in the brain? We reviewed 45 brain-imaging studies of creative cognition. We found little clear evidence of overlap in their results. Nearly as many different tests were used as there were studies; this test diversity makes it impossible to interpret the different findings across studies with any confidence. Our conclusion is that creativity research would benefit from psychometrically informed revision, and the addition of neuroimaging methods designed to provide greater spatial localization of function. Without such revision in the behavioral measures and study designs, it is hard to see the benefit of imaging. We set out eight suggestions in a manifesto for taking creativity research forward.", "title": "" }, { "docid": "a87ba6d076c3c05578a6f6d9da22ac79", "text": "Here we review and extend a new unitary model for the pathophysiology of involutional osteoporosis that identifies estrogen (E) as the key hormone for maintaining bone mass and E deficiency as the major cause of age-related bone loss in both sexes. Also, both E and testosterone (T) are key regulators of skeletal growth and maturation, and E, together with GH and IGF-I, initiate a 3- to 4-yr pubertal growth spurt that doubles skeletal mass. Although E is required for the attainment of maximal peak bone mass in both sexes, the additional action of T on stimulating periosteal apposition accounts for the larger size and thicker cortices of the adult male skeleton. Aging women undergo two phases of bone loss, whereas aging men undergo only one. In women, the menopause initiates an accelerated phase of predominantly cancellous bone loss that declines rapidly over 4-8 yr to become asymptotic with a subsequent slow phase that continues indefinitely. The accelerated phase results from the loss of the direct restraining effects of E on bone turnover, an action mediated by E receptors in both osteoblasts and osteoclasts. In the ensuing slow phase, the rate of cancellous bone loss is reduced, but the rate of cortical bone loss is unchanged or increased. This phase is mediated largely by secondary hyperparathyroidism that results from the loss of E actions on extraskeletal calcium metabolism. The resultant external calcium losses increase the level of dietary calcium intake that is required to maintain bone balance. Impaired osteoblast function due to E deficiency, aging, or both also contributes to the slow phase of bone loss. Although both serum bioavailable (Bio) E and Bio T decline in aging men, Bio E is the major predictor of their bone loss. Thus, both sex steroids are important for developing peak bone mass, but E deficiency is the major determinant of age-related bone loss in both sexes.", "title": "" }, { "docid": "f05b001f03e00bf2d0807eb62d9e2369", "text": "Since the hydraulic actuating suspension system has nonlinear and time-varying behavior, it is difficult to establish an accurate model for designing a model-based controller. Here, an adaptive fuzzy sliding mode controller is proposed to suppress the sprung mass position oscillation due to road surface variation. This intelligent control strategy combines an adaptive rule with fuzzy and sliding mode control algorithms. 
It has online learning ability to deal with the system time-varying and nonlinear uncertainty behaviors, and adjust the control rule parameters. Only eleven fuzzy rules are required for this active suspension system and these fuzzy control rules can be established and modified continuously by online learning. The experimental results show that this intelligent control algorithm effectively suppresses the oscillation amplitude of the sprung mass with respect to various road surface disturbances.", "title": "" } ]
scidocsrr
dfe360b0ba208ac35e5b22ab87748bc7
Inferring ontology graph structures using OWL reasoning
[ { "docid": "b14ab40b4267ba8c69e755614e798f0b", "text": "To enhance the treatment of relations in biomedical ontologies we advance a methodology for providing consistent and unambiguous formal definitions of the relational expressions used in such ontologies in a way designed to assist developers and users in avoiding errors in coding and annotation. The resulting Relation Ontology can promote interoperability of ontologies and support new types of automated reasoning about the spatial and temporal dimensions of biological and medical phenomena.", "title": "" }, { "docid": "b1e0e77ece1f24d2a98d8a7b4763df48", "text": "Motivation\nBiological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data are becoming available, but have not yet widely been applied and evaluated on structured biological knowledge. Results: We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics.\n\n\nAvailability and implementation\nhttps://github.com/bio-ontology-research-group/walking-rdf-and-owl.\n\n\nContact\[email protected].\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "205a5a9a61b6ac992f01c8c2fc09678a", "text": "We present the OWL API, a high level Application Programming Interface (API) for working with OWL ontologies. The OWL API is closely aligned with the OWL 2 structural specification. It supports parsing and rendering in the syntaxes defined in the W3C specification (Functional Syntax, RDF/XML, OWL/XML and the Manchester OWL Syntax); manipulation of ontological structures; and the use of reasoning engines. The reference implementation of the OWL API, written in Java, includes validators for the various OWL 2 profiles OWL 2 QL, OWL 2 EL and OWL 2 RL. The OWL API has widespread usage in a variety of tools and applications.", "title": "" }, { "docid": "7966ae20fc98b2fe2526e6148e7a5429", "text": "$\mathcal {E} \mathcal {L}$ is a simple tractable Description Logic that features conjunctions and existential restrictions. Due to its favorable computational properties and relevance to existing ontologies, $\mathcal {E} \mathcal {L}$ has become the language of choice for terminological reasoning in biomedical applications, and has formed the basis of the OWL EL profile of the Web ontology language OWL. 
This paper describes ELK—a high performance reasoner for OWL EL ontologies—and details various aspects from theory to implementation that make ELK one of the most competitive reasoning systems for $\mathcal {E} \mathcal {L}$ ontologies available today.", "title": "" } ]
[ { "docid": "83b376a0bd567e24dd1d3b5d415e08b2", "text": "BACKGROUND\nThe biomechanical effects of lateral meniscal posterior root tears with and without meniscofemoral ligament (MFL) tears in anterior cruciate ligament (ACL)-deficient knees have not been studied in detail.\n\n\nPURPOSE\nTo determine the biomechanical effects of the lateral meniscus (LM) posterior root tear in ACL-intact and ACL-deficient knees. In addition, the biomechanical effects of disrupting the MFLs in ACL-deficient knees with meniscal root tears were evaluated.\n\n\nSTUDY DESIGN\nControlled laboratory study.\n\n\nMETHODS\nTen paired cadaveric knees were mounted in a 6-degrees-of-freedom robot for testing and divided into 2 groups. The sectioning order for group 1 was (1) ACL, (2) LM posterior root, and (3) MFLs, and the order for group 2 was (1) LM posterior root, (2) ACL, and (3) MFLs. For each cutting state, displacements and rotations of the tibia were measured and compared with the intact state after a simulated pivot-shift test (5-N·m internal rotation torque combined with a 10-N·m valgus torque) at 0°, 20°, 30°, 60°, and 90° of knee flexion; an anterior translation load (88 N) at 0°, 30°, 60°, and 90° of knee flexion; and internal rotation (5 N·m) at 0°, 30°, 60°, 75°, and 90°.\n\n\nRESULTS\nCutting the LM root and MFLs significantly increased anterior tibial translation (ATT) during a pivot-shift test at 20° and 30° when compared with the ACL-cut state (both Ps < .05). During a 5-N·m internal rotation torque, cutting the LM root in ACL-intact knees significantly increased internal rotation by between 0.7° ± 0.3° and 1.3° ± 0.9° (all Ps < .05) except at 0° (P = .136). When the ACL + LM root cut state was compared with the ACL-cut state, the increase in internal rotation was significant at greater flexion angles of 75° and 90° (both Ps < .05) but not between 0°and 60° (all Ps > .2). For an anterior translation load, cutting the LM root in ACL-deficient knees significantly increased ATT only at 30° (P = .007).\n\n\nCONCLUSION\nThe LM posterior root was a significant stabilizer of the knee for ATT during a pivot-shift test at lower flexion angles and internal rotation at higher flexion angles.\n\n\nCLINICAL RELEVANCE\nIncreased knee anterior translation and rotatory instability due to posterior lateral meniscal root disruption may contribute to increased loads on an ACL reconstruction graft. It is recommended that lateral meniscal root tears be repaired at the same time as an ACL reconstruction to prevent possible ACL graft overload.", "title": "" }, { "docid": "23ef781d3230124360f24cc6e38fb15f", "text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. The best neural network is found with the best forecasting abilities, based on a certain performance measure. A visual graphs on the experiments data set is presented after processing steps, to illustrate that particular results. The out-of-sample results are compared with training ones. & 2015 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "90b6b0ff4b60e109fc111b26aab4a25c", "text": "Due to its damage to Internet security, malware and its detection has caught the attention of both anti-malware industry and researchers for decades. Many research efforts have been conducted on developing intelligent malware detection systems. In these systems, resting on the analysis of file contents extracted from the file samples, like Application Programming Interface (API) calls, instruction sequences, and binary strings, data mining methods such as Naive Bayes and Support Vector Machines have been used for malware detection. However, driven by the economic benefits, both diversity and sophistication of malware have significantly increased in recent years. Therefore, anti-malware industry calls for much more novel methods which are capable to protect the users against new threats, and more difficult to evade. In this paper, other than based on file contents extracted from the file samples, we study how file relation graphs can be used for malware detection and propose a novel Belief Propagation algorithm based on the constructed graphs to detect newly unknown malware. A comprehensive experimental study on a real and large data collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other alternate data mining based detection techniques.", "title": "" }, { "docid": "bb9829b182241f70dbc1addd1452c09d", "text": "This paper presents the first complete 2.5 V, 77 GHz chipset for Doppler radar and imaging applications fabricated in 0.13 mum SiGe HBT technology. The chipset includes a voltage-controlled oscillator with -101.6 dBc/Hz phase noise at 1 MHz offset, an 25 dB gain low-noise amplifier, a novel low-voltage double-balanced Gilbert-cell mixer with two mm-wave baluns and IF amplifier achieving 12.8 dB noise figure and an OP1dB of +5 dBm, a 99 GHz static frequency divider consuming a record low 75 mW, and a power amplifier with 19 dB gain, +14.4 dBm saturated power, and 15.7% PAE. Monolithic spiral inductors and transformers result in the lowest reported 77 GHz receiver core area of only 0.45 mm times 0.30 mm. Simplified circuit topologies allow 77 GHz operation up to 125degC from 2.5 V/1.8 V supplies. Technology splits of the SiGe HBTs are employed to determine the optimum HBT profile for mm-wave performance.", "title": "" }, { "docid": "037c6208dd71882a870bd8c5a0eb64bc", "text": "Off-policy learning is key to scaling up reinforcement learning as it allows to learn about a target policy from the experience generated by a different behavior policy. Unfortunately, it has been challenging to combine off-policy learning with function approximation and multi-step bootstrapping in a way that leads to both stable and efficient algorithms. In this work, we show that the TREE BACKUP and RETRACE algorithms are unstable with linear function approximation, both in theory and in practice with specific examples. Based on our analysis, we then derive stable and efficient gradient-based algorithms using a quadratic convex-concave saddle-point formulation. By exploiting the problem structure proper to these algorithms, we are able to provide convergence guarantees and finite-sample bounds. 
The applicability of our new analysis also goes beyond TREE BACKUP and RETRACE and allows us to provide new convergence rates for the GTD and GTD2 algorithms without having recourse to projections or Polyak averaging.", "title": "" }, { "docid": "1cbb90368f82a0e3a86a5b8616ed97ab", "text": "In this paper, we propose two new generic attacks on the rank syndrome decoding (RSD) problem. Let C be a random [n, k] rank code over GF(q^m) and let y = x + e be a received word, such that x ∈ C and rank(e) = r. The first attack, the support attack, is combinatorial and permits to recover an error e of rank weight r in min(O((n - k)^3 m^3 q^{r⌊km/n⌋}), O((n - k)^3 m^3 q^{(r-1)⌊((k+1)m)/n⌋})) operations on GF(q). This new attack improves the exponent for the best generic attack for the RSD problem in the case n > m, by introducing the ratio m/n in the exponential coefficient of the previously best known attacks. The second attack, the annulator polynomial attack, is an algebraic attack based on the theory of q-polynomials introduced by Ore. We propose a new algebraic setting for the RSD problem that permits to consider equations and unknowns in the extension field GF(q^m) rather than in GF(q) as it is usually the case. We consider two approaches to solve the problem in this new setting. The linearization technique shows that if n ≥ (k + 1) (r + 1) - 1 the RSD problem can be solved in polynomial time. More generally, we prove that if ⌈((r + 1)(k + 1) - (n + 1))/r⌉ ≤ k, the RSD problem can be solved with an average complexity of O(r^3 k^3 q^{r⌈((r+1)(k+1)-(n+1))/r⌉}) operations in the base field GF(q). We also consider solving with Gröbner bases for which we discuss theoretical complexity, we also consider hybrid solving with Gröbner bases on practical parameters. As an example of application, we use our new attacks on all recent cryptosystems parameters, which repair the GPT cryptosystem, we break all examples of published proposed parameters, and some parameters are broken in less than 1 s in certain cases.", "title": "" }, { "docid": "ca75798a9090810682f99400f6a8ff4e", "text": "We present the first empirical analysis of Bitcoin-based scams: operations established with fraudulent intent. By amalgamating reports gathered by voluntary vigilantes and tracked in online forums, we identify 192 scams and categorize them into four groups: Ponzi schemes, mining scams, scam wallets and fraudulent exchanges. In 21% of the cases, we also found the associated Bitcoin addresses, which enables us to track payments into and out of the scams. We find that at least $11 million has been contributed to the scams from 13 000 distinct victims. Furthermore, we present evidence that the most successful scams depend on large contributions from a very small number of victims. Finally, we discuss ways in which the scams could be countered.", "title": "" }, { "docid": "f4a2e2cc920e28ae3d7539ba8b822fb7", "text": "Neurologic injuries, such as stroke, spinal cord injuries, and weaknesses of skeletal muscles with elderly people, may considerably limit the ability of this population to achieve the main daily living activities. Recently, there has been an increasing interest in the development of wearable devices, the so-called exoskeletons, to assist elderly as well as patients with limb pathologies, for movement assistance and rehabilitation. 
An overview of the commonly used actuation systems is presented. According to different case studies, a classification and comparison between different types of actuators is conducted, such as hydraulic actuators, electrical motors, series elastic actuators, and artificial pneumatic muscles. Additionally, the mainly used control strategies in lower limb exoskeletons are classified and reviewed, based on three types of human-robot interfaces: the signals collected from the human body, the interaction forces between the exoskeleton and the wearer, and the signals collected from exoskeletons. Furthermore, the performances of several typical lower limb exoskeletons are discussed, and some assessment methods and performance criteria are reviewed. Finally, a discussion of the major advances that have been made, some research directions, and future challenges are presented.", "title": "" }, { "docid": "49fa06dc2a6ac105a2a4429eefde5efa", "text": "Now, we come to offer you the right catalogues of book to open. social media marketing in tourism and hospitality is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.", "title": "" }, { "docid": "cf056b44b0e93ad4fcbc529437cfbec3", "text": "Many advances in the treatment of cancer have been driven by the development of targeted therapies that inhibit oncogenic signaling pathways and tumor-associated angiogenesis, as well as by the recent development of therapies that activate a patient's immune system to unleash antitumor immunity. Some targeted therapies can have effects on host immune responses, in addition to their effects on tumor biology. These immune-modulating effects, such as increasing tumor antigenicity or promoting intratumoral T cell infiltration, provide a rationale for combining these targeted therapies with immunotherapies. Here, we discuss the immune-modulating effects of targeted therapies against the MAPK and VEGF signaling pathways, and how they may synergize with immunomodulatory antibodies that target PD1/PDL1 and CTLA4. We critically examine the rationale in support of these combinations in light of the current understanding of the underlying mechanisms of action of these therapies. We also discuss the available preclinical and clinical data for these combination approaches and their implications regarding mechanisms of action. Insights from these studies provide a framework for considering additional combinations of targeted therapies and immunotherapies for the treatment of cancer.", "title": "" }, { "docid": "b303349faddc80b9cd946ecdd90d6e78", "text": "Computational offloading is an effective method to address the limited battery power of a mobile device, by executing some components of a mobile application in the cloud. In this paper, a novel offloading algorithm called `Dynamic Programming with Hamming Distance Termination' (denoted DPH) is presented. Our algorithm uses randomization and a hamming distance termination criterion to find a nearly optimal offloading solution quickly. The algorithm will offload as many tasks as possible to the cloud when the network transmission bandwidth is high, thereby improving the total execution time of all tasks and minimizing the energy use of the mobile device. 
The algorithm can find very good solutions with low computational overhead. A novel and innovative approach to fill the dynamic programming table is used to avoid unnecessary computations, resulting in lower computation times compared to other schemes. Furthermore, the algorithm is extensible to handle larger offloading problems without a loss of computational efficiency. Performance evaluation shows that the proposed DPH algorithm can achieve near minimal energy while meeting an application's execution time constraints, and it can find a nearly optimal offloading decision within a few iterations.", "title": "" }, { "docid": "874d0fd931acaff40fa642f798eefd8b", "text": "Motivation: Spinal needle injections are technically demanding procedures. The use of ultrasound image guidance without prior CT and MR imagery promises to improve the efficacy and safety of these procedures in an affordable manner. Methodology: We propose to create a statistical shape model of the lumbar spine and warp this atlas to patient-specific ultrasound images during the needle placement procedure. From CT image volumes of 35 patients, statistical shape model of the L3 vertebra is built, including mean shape and main modes of variation. This shape model is registered to the ultrasound data by simultaneously optimizing the parameters of the model and its relative pose. Ground-truth data was established by printing 3D anatomical models of 3 patients using a rapid prototyping. CT and ultrasound data of these models were registered using fiducial markers. Results: Pairwise registration of the statistical shape model and 3D ultrasound images led to a mean target registration error of 3.4 mm, while 81% of all cases yielded clinically acceptable accuracy below the 3.5 mm threshold.", "title": "" }, { "docid": "dc204f2681acb6b89a9996f37374c0d6", "text": "The ever increasing prevalence of publicly available structured data on the World Wide Web enables new applications in a variety of domains. In this paper, we provide a conceptual approach that leverages such data in order to explain the input-output behavior of trained artificial neural networks. We apply existing Semantic Web technologies in order to provide an experimental proof of concept.", "title": "" }, { "docid": "10047da6f53996de153d638872914eaa", "text": "A harmonic suppressed coplanar waveguide (CPW)-fed slot-coupled circular slot loop antenna is proposed. Two coupling slots placed inside the radiating slot loop and connected directly to the CPW are used for the harmonic suppression. By properly adjusting the coupling slot lengths, the second and third harmonics can be suppressed separately or simultaneously. The principle of harmonic suppression is discussed, and simulation and measurement results are also presented and discussed.", "title": "" }, { "docid": "2e9a0bce883548288de0a5d380b1ddf6", "text": "Three-level neutral point clamped (NPC) inverter is a widely used topology of multilevel inverters. However, the neutral point fluctuates for certain switching states. At low modulation index, the fluctuations can be compensated using redundant switching states. But, at higher modulation index and in overmodulation region, the neutral point fluctuation deteriorates the performance of the inverter. This paper proposes a simple space vector pulsewidth modulation scheme for operating a three-level NPC inverter at higher modulation indexes, including overmodulation region, with neutral point balancing. 
Experimental results are provided", "title": "" }, { "docid": "4f9dd51d77b6a7008b213042a825c748", "text": "A crucial capability of real-world intelligent agents is their ability to plan a sequence of actions to achieve their goals in the visual world. In this work, we address the problem of visual semantic planning: the task of predicting a sequence of actions from visual observations that transform a dynamic environment from an initial state to a goal state. Doing so entails knowledge about objects and their affordances, as well as actions and their preconditions and effects. We propose learning these through interacting with a visual and dynamic environment. Our proposed solution involves bootstrapping reinforcement learning with imitation learning. To ensure cross task generalization, we develop a deep predictive model based on successor representations. Our experimental results show near optimal results across a wide range of tasks in the challenging THOR environment.", "title": "" }, { "docid": "69ed9f760a155366bcc2de6e03c7b1eb", "text": "Task: Chinese social media text summarization Problem: Current Chinese social media text summarization models are based on an encoder-decoder framework. Although its generated summaries are similar to source texts literally, they have low semantic relevance. Proposal: a Semantic Relevance Based neural model to encourage high semantic similarity between texts and summaries. Example of RNN Generated Summary Text: 昨晚,中联航空成都飞北京一架航班被发现有多人吸烟。后 因天气原因,飞机备降太原机场。有乘客要求重新安检,机长决 定继续飞行,引起机组人员与未吸烟乘客冲突。 Last night, several people were caught to smoke on a flight of China United Airlines from Chendu to Beijing. Later the flight temporarily landed on Taiyuan Airport. Some passengers asked for a security check but were denied by the captain, which led to a collision between crew and passengers. RNN: 中联航空机场发生爆炸致多人死亡。 China United Airlines exploded in the airport, leaving several people dead. Gold: 航班多人吸烟机组人员与乘客冲突。 Several people smoked on a flight which led to a collision between crew and passengers.", "title": "" }, { "docid": "1a4ea8915214a186eef7513729610c97", "text": "When an “updating” operation occurs on the current state of a data base, one has to ensure the new state obeys the integrity constraints. So, some of them have to be evaluated on this new state. The evaluation of an integrity constraint can be time consuming, but one can improve such an evaluation by taking advantage from the fact that the integrity constraint is satisfied in the current state. Indeed, it is then possible to derive a simplified form of this integrity constraint which is sufficient to evaluate in the new state in order to determine whether the initial constraint is still satisfied in this new state. The purpose of this paper is to present a simplification method yielding such simplified forms for integrity constraints. These simplified forms depend on the nature of the updating operation which is the cause of the state change. The operations of inserting, deleting, updating a tuple in a relation as well as transactions of such operations are considered. The proposed method is based on syntactical criteria and is validated through first order logic. 
Examples are treated and some aspects of the method application are discussed.", "title": "" }, { "docid": "0c4612a3fcf6c2e5afdfac1f271f40f6", "text": "In this paper, the eigenvalues and eigenvectors of the generalized discrete Fourier transform (GDFT), the generalized discrete Hartley transform (GDHT), the type-IV discrete cosine transform (DCT-IV), and the type-IV discrete sine transform (DST-IV) matrices are investigated in a unified framework. First, the eigenvalues and their multiplicities of the GDFT matrix are determined, and the theory of commuting matrices is applied to find the real, symmetric, orthogonal eigenvectors set that constitutes the discrete counterpart of Hermite Gaussian function. Then, the results of the GDFT matrix and the relationships among these four unitary transforms are used to find the eigenproperties of the GDHT, DCT-IV, and DST-IV matrices. Finally, the fractional versions of these four transforms are defined, and an image watermarking scheme is proposed to demonstrate the effectiveness of fractional transforms.", "title": "" }, { "docid": "6af29d76cbbb012625e22dddfbd30b28", "text": "UNLABELLED\nWhat aspects of neuronal activity distinguish the conscious from the unconscious brain? This has been a subject of intense interest and debate since the early days of neurophysiology. However, as any practicing anesthesiologist can attest, it is currently not possible to reliably distinguish a conscious state from an unconscious one on the basis of brain activity. Here we approach this problem from the perspective of dynamical systems theory. We argue that the brain, as a dynamical system, is self-regulated at the boundary between stable and unstable regimes, allowing it in particular to maintain high susceptibility to stimuli. To test this hypothesis, we performed stability analysis of high-density electrocorticography recordings covering an entire cerebral hemisphere in monkeys during reversible loss of consciousness. We show that, during loss of consciousness, the number of eigenmodes at the edge of instability decreases smoothly, independently of the type of anesthetic and specific features of brain activity. The eigenmodes drift back toward the unstable line during recovery of consciousness. Furthermore, we show that stability is an emergent phenomenon dependent on the correlations among activity in different cortical regions rather than signals taken in isolation. These findings support the conclusion that dynamics at the edge of instability are essential for maintaining consciousness and provide a novel and principled measure that distinguishes between the conscious and the unconscious brain.\n\n\nSIGNIFICANCE STATEMENT\nWhat distinguishes brain activity during consciousness from that observed during unconsciousness? Answering this question has proven difficult because neither consciousness nor lack thereof have universal signatures in terms of most specific features of brain activity. For instance, different anesthetics induce different patterns of brain activity. We demonstrate that loss of consciousness is universally and reliably associated with stabilization of cortical dynamics regardless of the specific activity characteristics. To give an analogy, our analysis suggests that loss of consciousness is akin to depressing the damper pedal on the piano, which makes the sounds dissipate quicker regardless of the specific melody being played. 
This approach may prove useful in detecting consciousness on the basis of brain activity under anesthesia and other settings.", "title": "" } ]
scidocsrr
bed293446b6a0cbf889661eeccdf4c4c
SigMal: a static signal processing based malware triage
[ { "docid": "5694ebf4c1f1e0bf65dd7401d35726ed", "text": "Data collection is not a big issue anymore with available honeypot software and setups. However malware collections gathered from these honeypot systems often suffer from massive sample counts, data analysis systems like sandboxes cannot cope with. Sophisticated self-modifying malware is able to generate new polymorphic instances of itself with different message digest sums for each infection attempt, thus resulting in many different samples stored for the same specimen. Scaling analysis systems that are fed by databases that rely on sample uniqueness based on message digests is only feasible to a certain extent. In this paper we introduce a non cryptographic, fast to calculate hash function for binaries in the Portable Executable format that transforms structural information about a sample into a hash value. Grouping binaries by hash values calculated with the new function allows for detection of multiple instances of the same polymorphic specimen as well as samples that are broken e.g. due to transfer errors. Practical evaluation on different malware sets shows that the new function allows for a significant reduction of sample counts.", "title": "" } ]
[ { "docid": "a7317f06cf34e501cb169bdf805e7e34", "text": "It's natural to promote your best and brightest, especially when you think they may leave for greener pastures if you don't continually offer them new challenges and rewards. But promoting smart, ambitious young managers too quickly often robs them of the chance to develop the emotional competencies that come with time and experience--competencies like the ability to negotiate with peers, regulate emotions in times of crisis, and win support for change. Indeed, at some point in a manager's career--usually at the vice president level--raw talent and ambition become less important than the ability to influence and persuade, and that's the point at which the emotionally immature manager will lose his effectiveness. This article argues that delaying a promotion can sometimes be the best thing a senior executive can do for a junior manager. The inexperienced manager who is given time to develop his emotional competencies may be better prepared for the interpersonal demands of top-level leadership. The authors recommend that senior executives employ these strategies to help boost their protégés' people skills: sharpen the 360-degree feedback process, give managers cross-functional assignments to improve their negotiation skills, make the development of emotional competencies mandatory, make emotional competencies a performance measure, and encourage managers to develop informal learning partnerships with peers and mentors. Delaying a promotion can be difficult given the steadfast ambitions of many junior executives and the hectic pace of organizational life. It may mean going against the norm of promoting people almost exclusively on smarts and business results. It may also mean contending with the disappointment of an esteemed subordinate. But taking the time to build people's emotional competencies isn't an extravagance; it's critical to developing effective leaders.", "title": "" }, { "docid": "eff3b5c790b62021d4615f4a1708d707", "text": "Web services are becoming business-critical components that must provide a non-vulnerable interface to the client applications. However, previous research and practice show that many web services are deployed with critical vulnerabilities. SQL Injection vulnerabilities are particularly relevant, as web services frequently access a relational database using SQL commands. Penetration testing and static code analysis are two well-know techniques often used for the detection of security vulnerabilities. In this work we compare how effective these two techniques are on the detection of SQL Injection vulnerabilities in web services code. To understand the strengths and limitations of these techniques, we used several commercial and open source tools to detect vulnerabilities in a set of vulnerable services. Results suggest that, in general, static code analyzers are able to detect more SQL Injection vulnerabilities than penetration testing tools. Another key observation is that tools implementing the same detection approach frequently detect different vulnerabilities. Finally, many tools provide a low coverage and a high false positives rate, making them a bad option for programmers.", "title": "" }, { "docid": "9b60816097ccdff7b1eec177aac0b9b8", "text": "We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. 
We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task.", "title": "" }, { "docid": "e54240e56b80916aab16980f7c7bd320", "text": "The aim of this study was to establish the optimal cut-off points of the Chen Internet Addiction Scale (CIAS), to screen for and diagnose Internet addiction among adolescents in the community by using the well-established diagnostic criteria of Internet addiction. This survey of 454 adolescents used screening (57/58) and diagnostic (63/64) cut-off points of the CIAS, a self-reported instrument, based on the results of systematic diagnostic interviews by psychiatrists. The area under the curve of the receiver operating characteristic curve revealed that CIAS has good diagnostic accuracy (89.6%). The screening cut-off point had high sensitivity (85.6%) and the diagnostic cut-off point had the highest diagnostic accuracy, classifying 87.6% of participants correctly. Accordingly, the screening point of the CIAS could provide a screening function in two-stage diagnosis, and the diagnostic point could serve as a diagnostic criterion in one-stage massive epidemiologic research.", "title": "" }, { "docid": "e8eba986ab77d519ce8808b3d33b2032", "text": "In this paper, an implementation of an extended target tracking filter using measurements from high-resolution automotive Radio Detection and Ranging (RADAR) is proposed. Our algorithm uses the Cartesian point measurements from the target's contour as well as the Doppler range rate provided by the RADAR to track a target vehicle's position, orientation, and translational and rotational velocities. We also apply a Gaussian Process (GP) to model the vehicle's shape. To cope with the nonlinear measurement equation, we implement an Extended Kalman Filter (EKF) and provide the necessary derivatives for the Doppler measurement. We then evaluate the effectiveness of incorporating the Doppler rate on simulations and on 2 sets of real data.", "title": "" }, { "docid": "ed97eae85ce430d6358826fccef3c0e1", "text": "Heart diseases, which are one of the death reasons, are among the several serious problems in this century and as per the latest survey, 60% of the patients die due to Heart problems. These diseases can be diagnosed by ECG (Electrocardiogram) signals. ECG measures electrical potentials on the body surface via contact electrodes thus it is very important signal in cardiology. Different artifacts affect the ECG signals which can thus cause problems in analyzing the ECG Thus signal processing schemes are applied to remove those interferences. The work proposed in this paper is removal of low frequency interference i.e. baseline wandering in ECG signal and digital filters are designed to remove it. The digital filters designed are FIR with different windowing methods as of Rectangular, Gaussian, Hamming, and Kaiser. The results obtained are at a low order of 56. 
The signals are taken from the MIT-BIH database which includes the normal and abnormal waveforms. The work has been done in the MATLAB environment where filters are designed in the FDA Tool. The parameters are selected such that the noise is removed permanently. Also the best results are obtained at an order of 56 which makes hardware implementation easier. The results obtained for all FIR filters with different windows are compared by comparing the waveforms and power spectra of the original and filtered ECG signals. The filter which gives the best results is the one using the Kaiser window.", "title": "" }, { "docid": "44352346cff6da1c4ac010ae932ce6fb", "text": "Most research on intelligent agents centers on the agent and not on the user. We look at the origins of agent-centric research for slotfilling, gaming and chatbot agents. We then argue that it is important to concentrate more on the user. After reviewing relevant literature, some approaches for creating and assessing user-centric systems are proposed.", "title": "" }, { "docid": "126b52ab2e2585eabf3345ef7fb39c51", "text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.", "title": "" }, { "docid": "bebd034597144d4656f6383d9bd22038", "text": "The Turing test aimed to recognize the behavior of a human from that of a computer algorithm. Such a challenge is more relevant than ever in today’s social media context, where limited attention and technology constrain the expressive power of humans, while incentives abound to develop software agents mimicking humans. These social bots interact, often unnoticed, with real people in social media ecosystems, but their abundance is uncertain. While many bots are benign, one can design harmful bots with the goals of persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then review current efforts to detect social bots on Twitter. 
Features related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.", "title": "" }, { "docid": "c341a234ea76d603438f8589ca6ee2b1", "text": "In the last few years, in an attempt to further motivate students to learn a foreign language, there has been an increasing interest in task-based teaching techniques, which emphasize communication and the practical use of language, thus moving away from the repetitive grammar-translation methods. Within this approach, the significance of situating foreign language learners in scenarios where they can meaningfully learn has become a major priority for many educators. This approach is particularly relevant in the context of teaching foreign languages to young children, who need to be introduced to a new language by means of very concrete vocabulary, which is facilitated by the use of objects that they can handle and see. In this study, we investigate the benefits of using wearable and Internet-of-Things (IoT) technologies in streamlining the creation of such realistic task-based language learning scenarios. We show that the use of these technologies will prove beneficial by freeing the instructors of having to keep records of the tasks performed by each student during the class session. Instead, instructors can focus their efforts on creating a friendly environment and encouraging students to participate. Our study sets up a basis for showing the great benefits of using wearable and IoT technologies in streamlining 1) the creation of realistic scenarios in which young foreign language learners can feel comfortable engaging in chat and becoming better prepared for social interaction in a foreign language, and 2) the acquisition and processing of performance metrics.", "title": "" }, { "docid": "3ea5607d04419aae36592b6dcce25304", "text": "Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. Simulation results show excellent agreement with the theoretical predictions.", "title": "" }, { "docid": "58ca6012b209dc7bb0bb06ba4c8ef68b", "text": "With the increasing requirement of a high-density, high-performance, low-power alternative to traditional SRAM, Gain Cell (GC) embedded DRAMs have gained a renewed interest in recent years. 
Several industrial and academic publications have presented GC memory implementations for various target applications, including high-performance processor caches, wireless communication memories, and biomedical system storage. In this paper, we review and compare the recent publications, examining the design requirements and the implementation techniques that lead to achievement of the required design metrics of these applications.", "title": "" }, { "docid": "907b0abae26bd56da0191e5abf53a911", "text": "Online service platforms (OSPs), such as search engines, news-websites, ad-providers, etc., serve highly personalized content to the user, based on the profile extracted from her history with the OSP. In this paper, we capture OSP's personalization for a user in a new data structure called the personalization vector (η), which is a weighted vector over a set of topics, and present efficient algorithms to learn it.\n Our approach treats OSPs as black-boxes, and extracts η by mining only their output, specifically, the personalized (for a user) and vanilla (without any user information) contents served, and the differences in these content. We believe that such treatment of OSPs is a unique aspect of our work, not just enabling access to (so far hidden) profiles in OSPs, but also providing a novel and practical approach for retrieving information from OSPs by mining differences in their outputs.\n We formulate a new model called Latent Topic Personalization (LTP) that captures the personalization vector in a learning framework and present efficient inference algorithms for determining it. We perform extensive experiments targeting search engine personalization, using data from both real Google users and synthetic setup. Our results indicate that LTP achieves high accuracy (R-pre = 84%) in discovering personalized topics. For Google data, our qualitative results demonstrate that the topics determined by LTP for a user correspond well to his ad-categories determined by Google.", "title": "" }, { "docid": "313ded9d63967fd0c8bc6ca164ce064a", "text": "This paper presents a 0.35-μm SiGe BiCMOS VCO IC exhibiting a linear VCO gain (Kvco) for 5-GHz band application. To realize a linear Kvco, a novel resonant circuit is proposed. The measured Kvco changes from 224 MHz/V to 341 MHz/V. The ratio of the maximum Kvco to the minimum one is 1.5 which is less than one-half of that of a conventional VCO. The VCO oscillation frequency range is from 5.45 GHz to 5.95 GHz, the tuning range is 8.8 %, and the dc current consumption is 3.4 mA at a supply voltage of 3.0 V. The measured phase noise is -116 dBc/Hz at 1 MHz offset, which is similar to the conventional VCO", "title": "" }, { "docid": "fddadfbc6c1b34a8ac14f8973f052da5", "text": "Centroidal Voronoi tessellations are useful for subdividing a region in Euclidean space into Voronoi regions whose generators are also the centers of mass, with respect to a prescribed density function, of the regions. Their extensions to general spaces and sets are also available; for example, tessellations of surfaces in a Euclidean space may be considered. In this paper, a precise definition of such constrained centroidal Voronoi tessellations (CCVTs) is given and a number of their properties are derived, including their characterization as minimizers of an “energy.” Deterministic and probabilistic algorithms for the construction of CCVTs are presented and some analytical results for one of the algorithms are given. 
Computational examples are provided which serve to illustrate the high quality of CCVT point sets. Finally, CCVT point sets are applied to polynomial interpolation and numerical integration on the sphere.", "title": "" }, { "docid": "3686b88ab4b0fdfe690bb1b8869dce5c", "text": "In recent years, several special multiple-parameter discrete fractional transforms (MPDFRTs) have been proposed, and their advantages have been demonstrated in the fields of communication systems and information security. However, the general theoretical framework of MPDFRTs has not yet been established. In this paper, we propose two separate theoretical frameworks called the type I and II MPDFRT that can include existing multiple-parameter transforms as special cases. The properties of the type I and II MPDFRT have been analyzed in detail and their high-dimensional operators have been defined. Under the theoretical frameworks, we can construct new types of transforms that may be useful in signal processing and information security. Finally, we perform two applications about image encryption and image feature extraction in the type I and II MPDFRT domain. The simulation results demonstrate that the typical transforms constructed under the proposed theoretical frameworks yield promising results in these applications.", "title": "" }, { "docid": "e05ea52ecf42826e73ed7095ed162557", "text": "This paper aims at detecting and recognizing fish species from underwater images by means of Fast R-CNN (Regions with Convolutional Neural and Networks) features. Encouraged by powerful recognition results achieved by Convolutional Neural Networks (CNNs) on generic VOC and ImageNet dataset, we apply this popular deep ConvNets to domain-specific underwater environment which is more complicated than overland situation, using a new dataset of 24277 ImageCLEF fish images belonging to 12 classes. The experimental results demonstrate the promising performance of our networks. Fast R-CNN improves mean average precision (mAP) by 11.2% relative to Deformable Parts Model (DPM) baseline-achieving a mAP of 81.4%, and detects 80× faster than previous R-CNN on a single fish image.", "title": "" }, { "docid": "51c6c3030eca09cadff26da2cc4bebbc", "text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams on multiple classes, we propose to mine class relationships hidden in the data from the trained models. The automatically discovered relationships are then leveraged in the multi-stream multi-class fusion process as a prior, indicating which and how much information is needed from the remaining classes, to adaptively determine the optimal fusion weights for generating the final scores of each class. Our contributions are two-fold. First, the multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, our proposed fusion method not only learns the best weights of the multiple network streams for each class, but also takes class relationship into account, which is known as a helpful clue in multi-class visual classification tasks. 
Our framework produces significantly better results than the state of the art on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.", "title": "" }, { "docid": "1dab5734e1e3e8e12eb533c8d2ca98f1", "text": "The significant growth of online shopping makes the competition in this industry more intense. Maintaining customer loyalty has been recognized as one of the essential factors for business survival and growth. The purpose of this study is to examine empirically the influence of satisfaction, trust and commitment on customer loyalty in online shopping. This paper describes a theoretical model for investigating the influence of satisfaction, trust and commitment on customer loyalty toward online shopping. Based on the theoretical model, hypotheses were formulated. The primary data were collected from the respondents, which consist of 300 students. Multiple regression and qualitative analysis were used to test the study hypotheses. The empirical study results revealed that satisfaction, trust and commitment have significant impact on student loyalty toward online shopping.", "title": "" }, { "docid": "ddae1c6469769c2c7e683bfbc223ad1a", "text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skills is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing it to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.", "title": "" } ]
scidocsrr
87f245c9a2145313c26326b3afda0f85
An intelligent content-based image retrieval system for clinical decision support in brain tumor diagnosis
[ { "docid": "6101b3c76db195a68fc46cb99c0cda1c", "text": "We review two clustering algorithms (hard c-means and single linkage) and three indexes of crisp cluster validity (Hubert's statistics, the Davies-Bouldin index, and Dunn's index). We illustrate two deficiencies of Dunn's index which make it overly sensitive to noisy clusters and propose several generalizations of it that are not as brittle to outliers in the clusters. Our numerical examples show that the standard measure of interset distance (the minimum distance between points in a pair of sets) is the worst (least reliable) measure upon which to base cluster validation indexes when the clusters are expected to form volumetric clouds. Experimental results also suggest that intercluster separation plays a more important role in cluster validation than cluster diameter. Our simulations show that while Dunn's original index has operational flaws, the concept it embodies provides a rich paradigm for validation of partitions that have cloud-like clusters. Five of our generalized Dunn's indexes provide the best validation results for the simulations presented.", "title": "" }, { "docid": "999eda741a3c132ac8640e55721b53bb", "text": "This paper presents an overview of color and texture descriptors that have been approved for the Final Committee Draft of the MPEG-7 standard. The color and texture descriptors that are described in this paper have undergone extensive evaluation and development during the past two years. Evaluation criteria include effectiveness of the descriptors in similarity retrieval, as well as extraction, storage, and representation complexities. The color descriptors in the standard include a histogram descriptor that is coded using the Haar transform, a color structure histogram, a dominant color descriptor, and a color layout descriptor. The three texture descriptors include one that characterizes homogeneous texture regions and another that represents the local edge distribution. A compact descriptor that facilitates texture browsing is also defined. Each of the descriptors is explained in detail by their semantics, extraction and usage. Effectiveness is documented by experimental results.", "title": "" } ]
[ { "docid": "a9fd8529dc3511dbf10ca76e776e35c1", "text": "Several works have separated the pressure waveform p in systemic arteries into reservoir p(r) and excess p(exc) components, p = p(r) + p(exc), to improve pulse wave analysis, using windkessel models to calculate the reservoir pressure. However, the mechanics underlying this separation and the physical meaning of p(r) and p(exc) have not yet been established. They are studied here using the time-domain, inviscid and linear one-dimensional (1-D) equations of blood flow in elastic vessels. Solution of these equations in a distributed model of the 55 larger human arteries shows that p(r) calculated using a two-element windkessel model is space-independent and well approximated by the compliance-weighted space-average pressure of the arterial network. When arterial junctions are well-matched for the propagation of forward-travelling waves, p(r) calculated using a three-element windkessel model is space-dependent in systole and early diastole and is made of all the reflected waves originated at the terminal (peripheral) reflection sites, whereas p(exc) is the sum of the rest of the waves, which are obtained by propagating the left ventricular flow ejection without any peripheral reflection. In addition, new definitions of the reservoir and excess pressures from simultaneous pressure and flow measurements at an arbitrary location are proposed here. They provide valuable information for pulse wave analysis and overcome the limitations of the current two- and three-element windkessel models to calculate p(r).", "title": "" }, { "docid": "faa8bb95a4b05bed78dbdfaec1cd147c", "text": "This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix in the classical cosine similarity between bag-of-words, so as to get a softcosine that takes into account relations between words. According to the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. Our system ranked first among the official submissions of subtask B.", "title": "" }, { "docid": "0f37f7306f879ca0b5d35516a64818fb", "text": "Much of empirical corporate finance focuses on sources of the demand for various forms of capital, not the supply. Recently, this has changed. Supply effects of equity and credit markets can arise from a combination of three ingredients: investor tastes, limited intermediation, and corporate opportunism. Investor tastes when combined with imperfectly competitive intermediaries lead prices and interest rates to deviate from fundamental values. Opportunistic firms respond by issuing securities with high prices and investing the proceeds. A link between capital market prices and corporate finance can in principle come from either supply or demand. This framework helps to organize empirical approaches that more precisely identify and quantify supply effects through variation in one of these three ingredients. Taken as a whole, the evidence shows that shifting equity and credit market conditions play an important role in dictating corporate finance and investment. 181 A nn u. R ev . F in . E co n. 2 00 9. 1: 18 120 5. D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by H ar va rd U ni ve rs ity o n 02 /1 1/ 14 . 
", "title": "" }, { "docid": "46fdb284160db9b9b10fed2745cd1f59", "text": "The TCB shall be found resistant to penetration. Near flawless penetration testing is a requirement for high-rated secure systems — those rated above B1 based on the Trusted Computer System Evaluation Criteria (TCSEC) and its Trusted Network and Database Interpretations (TNI and TDI). Unlike security functional testing, which demonstrates correct behavior of the product's advertised security controls, penetration testing is a form of stress testing which exposes weaknesses — that is, flaws — in the trusted computing base (TCB). This essay describes the Flaw Hypothesis Methodology (FHM), the earliest comprehensive and widely used method for conducting penetrations testing. It reviews motivation for penetration testing and penetration test planning, which establishes the goals, ground rules, and resources available for testing. The TCSEC defines " flaw " as " an error of commission, omission, or oversight in a system that allows protection mechanisms to be bypassed. " This essay amplifies the definition of a flaw as a demonstrated unspecified capability that can be exploited to violate security policy. The essay provides an overview of FHM and its analogy to a heuristic-based strategy game. The 10 most productive ways to generate hypothetical flaws are described as part of the method, as are ways to confirm them. A review of the results and representative generic flaws discovered over the past 20 years is presented. The essay concludes with the assessment that FHM is applicable to the European ITSEC and with speculations about future methods of penetration analysis using formal methods, that is, mathematically specified design, theorems, and proofs of correctness of the design. One possible development could be a rigorous extension of FHM to be integrated into the development process. This approach has the potential of uncovering problems early in the design , enabling iterative redesign. A security threat exists when there are the opportunity, motivation, and technical means to attack: the when, why, and how. FHM deals only with the \" how \" dimension of threats. It is a requirement for high-rated secure systems (for example, TCSEC ratings above B1) that penetration testing be completed without discovery of security flaws in the evaluated product, as part of a product or system evaluation [DOD85, NCSC88b, NCSC92]. Unlike security functional testing, which demonstrates correct behavior of the product's advertised security controls, penetration testing is a form of stress testing, which exposes weaknesses or flaws in the trusted computing base (TCB). It has …", "title": "" }, { "docid": "2de69420e8062f267b64bcf3342bd8b0", "text": "This paper describes a direct-sequence spread-spectrum superregenerative receiver using a PN code synchronization loop based on the tan-dither technique. The receiver minimizes overall complexity by using a single signal processing path for data detection and PN code synchronization. An analytical study on the loop dynamics is presented, and the conditions for optimum performance are examined.
Experimental results in the 433 MHz European ISM band confirm the receiver ability to perform acquisition and tracking, achieving a sensitivity of -103 dBm and an input dynamic range of 65 dB.", "title": "" }, { "docid": "d58425a613f9daea2677d37d007f640e", "text": "Recently the improved bag of features (BoF) model with locality-constrained linear coding (LLC) and spatial pyramid matching (SPM) achieved state-of-the-art performance in image classification. However, only adopting SPM to exploit spatial information is not enough for satisfactory performance. In this paper, we use hierarchical temporal memory (HTM) cortical learning algorithms to extend this LLC & SPM based model. HTM regions consist of HTM cells are constructed to spatial pool the LLC codes. Each cell receives a subset of LLC codes, and adjacent subsets are overlapped so that more spatial information can be captured. Additionally, HTM cortical learning algorithms have two processes: learning phase which make the HTM cell only receive most frequent LLC codes, and inhibition phase which ensure that the output of HTM regions is sparse. The experimental results on Caltech 101 and UIUC-Sport dataset show the improvement on the original LLC & SPM based model.", "title": "" }, { "docid": "4de597faec62e1f6091cb72a721bc5ea", "text": "In this paper, we propose a unified facial beautification framework with respect to skin homogeneity, lighting, and color. A novel region-aware mask is constructed for skin manipulation, which can automatically select the edited regions with great precision. Inspired by the state-of-the-art edit propagation techniques, we present an adaptive edge-preserving energy minimization model with a spatially variant parameter and a high-dimensional guided feature space for mask generation. Using region-aware masks, our method facilitates more flexible and accurate facial skin enhancement while the complex manipulations are simplified considerably. In our beautification framework, a portrait is decomposed into smoothness, lighting, and color layers by an edge-preserving operator. Next, facial landmarks and significant features are extracted as input constraints for mask generation. After three region-aware masks have been obtained, a user can perform facial beautification simply by adjusting the skin parameters. Furthermore, the combinations of parameters can be optimized automatically, depending on the data priors and psychological knowledge. We performed both qualitative and quantitative evaluation for our method using faces with different genders, races, ages, poses, and backgrounds from various databases. The experimental results demonstrate that our technique is superior to previous methods and comparable to commercial systems, for example, PicTreat, Portrait+, and Portraiture.", "title": "" }, { "docid": "edb17cb58e7fd5862c84b53e9c9f2915", "text": "Online gaming has gained millions of users around the globe, which have been shown to virtually connect, to befriend, and to accumulate online social capital. Today, as online gaming has become a major leisure time activity, it seems worthwhile asking for the underlying factors of online social capital acquisition and whether online social capital increases offline social support.
In the present study, we proposed that the online game players’ physical and social proximity as well as their mutual familiarity influence bridging and bonding social capital. Physical proximity was predicted to positively influence bonding social capital online. Social proximity and familiarity were hypothesized to foster both online bridging and bonding social capital. Additionally, we hypothesized that both social capital dimensions are positively related to offline social support. The hypotheses were tested with regard to members of e-sports clans. In an online survey, participants (N = 811) were recruited via the online portal of the Electronic Sports League (ESL) in several countries. The data confirmed all hypotheses, with the path model exhibiting an excellent fit. The results complement existing research by showing that online gaming may result in strong social ties, if gamers engage in online activities that continue beyond the game and extend these with offline activities. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a4164039cea3951982373edebd53d636", "text": "Vehicle detection with orientation estimation in aerial images has received widespread interest as it is important for intelligent traffic management. This is a challenging task, not only because of the complex background and relatively small size of the target, but also the various orientations of vehicles in aerial images captured from the top view. The existing methods for oriented vehicle detection need several post-processing steps to generate final detection results with orientation, which are not efficient enough. Moreover, they can only get discrete orientation information for each target. In this paper, we present an end-to-end single convolutional neural network to generate arbitrarily-oriented detection results directly. Our approach, named Oriented_SSD (Single Shot MultiBox Detector, SSD), uses a set of default boxes with various scales on each feature map location to produce detection bounding boxes. Meanwhile, offsets are predicted for each default box to better match the object shape, which contain the angle parameter for oriented bounding boxes’ generation. Evaluation results on the public DLR Vehicle Aerial dataset and Vehicle Detection in Aerial Imagery (VEDAI) dataset demonstrate that our method can detect both the location and orientation of the vehicle with high accuracy and fast speed. For test images in the DLR Vehicle Aerial dataset with a size of 5616× 3744, our method achieves 76.1% average precision (AP) and 78.7% correct direction classification at 5.17 s on an NVIDIA GTX-1060.", "title": "" }, { "docid": "fea10e9e5bf2c930d609d3fb48f1efaf", "text": "Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent and highly expressive SRL model is Markov Logic Networks (MLNs), but the expressivity comes at the cost of learning complexity. Most of the current methods for learning MLN structure follow a two-step approach where first they search through the space of possible clauses (i.e. structures) and then learn weights via gradient descent for these clauses. We present a functional-gradient boosting algorithm to learn both the weights (in closed form) and the structure of the MLN simultaneously. Moreover most of the learning approaches for SRL apply the closed-world assumption, i.e., whatever is not observed is assumed to be false in the world. We attempt to open this assumption. 
We extend our algorithm for MLN structure learning to handle missing data by using an EM-based approach and show this algorithm can also be used to learn Relational Dependency Networks and relational policies. Our results in many domains demonstrate that our approach can effectively learn MLNs even in the presence of missing data.", "title": "" }, { "docid": "59597ab549189c744aae774259f84745", "text": "This paper addresses the problem of multi-view people occupancy map estimation. Existing solutions either operate per-view, or rely on a background subtraction preprocessing. Both approaches lessen the detection performance as scenes become more crowded. The former does not exploit joint information, whereas the latter deals with ambiguous input due to the foreground blobs becoming more and more interconnected as the number of targets increases. Although deep learning algorithms have proven to excel on remarkably numerous computer vision tasks, such a method has not been applied yet to this problem. In large part this is due to the lack of large-scale multi-camera data-set. The core of our method is an architecture which makes use of monocular pedestrian data-set, available at larger scale than the multi-view ones, applies parallel processing to the multiple video streams, and jointly utilises it. Our end-to-end deep learning method outperforms existing methods by large margins on the commonly used PETS 2009 data-set. Furthermore, we make publicly available a new three-camera HD data-set.", "title": "" }, { "docid": "6fd511ffcdb44c39ecad1a9f71a592cc", "text": "s Providing Supporting Policy Compositional Operators Functional Composition Network Layered Abstract Topologies Topological Decomposition Packet Extensible Headers Policy & Network Abstractions Pyretic (Contributions)", "title": "" }, { "docid": "a2adeb9448c699bbcbb10d02a87e87a5", "text": "OBJECTIVE\nTo quantify the presence of health behavior theory constructs in iPhone apps targeting physical activity.\n\n\nMETHODS\nThis study used a content analysis of 127 apps from Apple's (App Store) Health & Fitness category. Coders downloaded the apps and then used an established theory-based instrument to rate each app's inclusion of theoretical constructs from prominent behavior change theories. Five common items were used to measure 20 theoretical constructs, for a total of 100 items. A theory score was calculated for each app. Multiple regression analysis was used to identify factors associated with higher theory scores.\n\n\nRESULTS\nApps were generally observed to be lacking in theoretical content. Theory scores ranged from 1 to 28 on a 100-point scale. The health belief model was the most prevalent theory, accounting for 32% of all constructs. Regression analyses indicated that higher priced apps and apps that addressed a broader activity spectrum were associated with higher total theory scores.\n\n\nCONCLUSION\nIt is not unexpected that apps contained only minimal theoretical content, given that app developers come from a variety of backgrounds and many are not trained in the application of health behavior theory. The relationship between price and theory score corroborates research indicating that higher quality apps are more expensive. There is an opportunity for health and behavior change experts to partner with app developers to incorporate behavior change theories into the development of apps. 
These future collaborations between health behavior change experts and app developers could foster apps superior in both theory and programming possibly resulting in better health outcomes.", "title": "" }, { "docid": "e045619ede30efb3338e6278f23001d7", "text": "Particle filtering has become a standard tool for non-parametric estimation in computer vision tracking applications. It is an instance of stochastic search. Each particle represents a possible state of the system. Higher concentration of particles at any given region of the search space implies higher probabilities. One of its major drawbacks is the exponential growth in the number of particles for increasing dimensions in the search space. We present a graph based filtering framework for hierarchical model tracking that is capable of substantially alleviate this issue. The method relies on dividing the search space in subspaces that can be estimated separately. Low correlated subspaces may be estimated with parallel, or serial, filters and have their probability distributions combined by a special aggregator filter. We describe a new algorithm to extract parameter groups, which define the subspaces, from the system model. We validate our method with different graph structures within a simple hand tracking experiment with both synthetic and real data", "title": "" }, { "docid": "38c2508c0da3826f767336ae46cac505", "text": "Caricature generation is an interesting yet challenging task. The primary goal is to generate a plausible caricature with reasonable exaggerations given a face image. Conventional caricature generation approaches mainly use low-level geometric transformations such as image warping to generate exaggerated images, which lack richness and diversity in terms of content and style. The recent progress in generative adversarial networks (GANs) makes it possible to learn an image-to-image transformation from data so as to generate diverse output images. However, directly applying GAN-based models to this task leads to unsatisfactory results due to the large variance in the caricature distribution. Moreover, some models require pixel-wisely paired training data which largely limits their usage scenarios. In this paper, we model caricature generation as a weakly paired image-to-image translation task, and propose CariGAN to address these issues. Specifically, to enforce reasonable exaggeration and facial deformation, facial landmarks are adopted as an additional condition to constrain the generated image. Furthermore, an image fusion mechanism is designed to encourage our model to focus on the key facial parts so that more vivid details in these regions can be generated. Finally, a diversity loss is proposed to encourage the model to produce diverse results to help alleviate the “mode collapse” problem of the conventional GAN-based models. Extensive experiments on a large-scale “WebCaricature” dataset show that the proposed CariGAN can generate more plausible caricatures with larger diversity compared with the state-of-the-art models.", "title": "" }, { "docid": "466c0d9436e1f1878aaafa2297022321", "text": "Acetic acid was used topically at concentrations of between 0.5% and 5% to eliminate Pseudomonas aeruginosa from the burn wounds or soft tissue wounds of 16 patients. In-vitro studies indicated the susceptibility of P. aeruginosa to acetic acid; all strains exhibited a minimum inhibitory concentration of 2 per cent. P. aeruginosa was eliminated from the wounds of 14 of the 16 patients within two weeks of treatment. 
Acetic acid was shown to be an inexpensive and efficient agent for the elimination of P. aeruginosa from burn and soft tissue wounds.", "title": "" }, { "docid": "188d9e1b0244aa7f68610dab9d852ab9", "text": "We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user’s unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.", "title": "" }, { "docid": "48c8ee0758da2897513b2b5a18ebe7db", "text": "The Internet is smoothly migrating from an Internet of people towards an Internet of Things (IoT). By 2020, it is expected to have 50 billion things connected to the Internet. However, such a migration induces a strong level of complexity when handling interoperability between the heterogeneous Internet things, e.g., RFIDs (Radio Frequency Identification), mobile handheld devices, and wireless sensors. In this context, a couple of standards have been already set, e.g., IPv6, 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks), and M2M (Machine to Machine communications). In this paper, we focus on the integration of wireless sensor networks into IoT, and shed further light on the subtleties of such integration. We present a real-world test bed deployment where wireless sensors are used to control electrical appliances in a smart building. Encountered problems are highlighted and suitable solutions are presented.", "title": "" }, { "docid": "584456ef251fbf31363832fc82bd3d42", "text": "Neural network architectures found by sophistic search algorithms achieve strikingly good test performance, surpassing most human-crafted network models by significant margins. Although computationally efficient, their design is often very complex, impairing execution speed. Additionally, finding models outside of the search space is not possible by design. While our space is still limited, we implement undiscoverable expert knowledge into the economic search algorithm Efficient Neural Architecture Search (ENAS), guided by the design principles and architecture of ShuffleNet V2. While maintaining baselinelike 2.85% test error on CIFAR-10, our ShuffleNASNets are significantly less complex, require fewer parameters, and are two times faster than the ENAS baseline in a classification task. These models also scale well to a low parameter space, achieving less than 5% test error with little regularization and only 236K parameters.", "title": "" }, { "docid": "575da85b3675ceaec26143981dbe9b53", "text": "People are increasingly required to disclose personal information to computerand Internetbased systems in order to register, identify themselves or simply for the system to work as designed. In the present paper, we outline two different methods to easily measure people’s behavioral self-disclosure to web-based forms. The first, the use of an ‘I prefer not to say’ option to sensitive questions is shown to be responsive to the manipulation of level of privacy concern by increasing the salience of privacy issues, and to experimental manipulations of privacy. The second, blurring or increased ambiguity was used primarily by males in response to an income question in a high privacy condition. 
Implications for the study of self-disclosure in human–computer interaction and web-based research are discussed.", "title": "" } ]
scidocsrr
c5788ef24dc3f683af4fab2cb64c65b6
A bad following : the big five factors of personality and follower reactions to unethical leader behavior
[ { "docid": "c647b0b28c61da096b781b4aa3c89f03", "text": "This article concerns the real-world importance of leadership for the success or failure of organizations and social institutions. The authors propose conceptualizing leadership and evaluating leaders in terms of the performance of the team or organization for which they are responsible. The authors next offer a taxonomy of the dependent variables used as criteria in leadership studies. A review of research using this taxonomy suggests that the vast empirical literature on leadership may tell us more about the success of individual managerial careers than the success of these people in leading groups, teams, and organizations. The authors then summarize the evidence showing that leaders do indeed affect the performance of organizations--for better or for worse--and conclude by describing the mechanisms through which they do so.", "title": "" } ]
[ { "docid": "b80eb073e4ace4e8e5b8a2ea392fbdd9", "text": "The letter describes the design of a wideband microstrip 180° hybrid, which makes use of two sides of a single-layer substrate. The device is formed by the out-of-phase and in-phase dividers which are suitably unified. By making use of ground slots and microstrip to slot transitions all four its ports are of microstrip type. The hybrid exhibits a well-balanced power division between the output ports and good quality return losses of the out-of-phase (difference) and in-phase (summation) ports across the band from 3 to 9 GHz. The isolation between the output ports is better than 10 dB. The isolation between the difference and summation ports is greater than 40 dB and the phase characteristics are close to the ideal 180° and 0°.", "title": "" }, { "docid": "43cfae2190595a6fd52b795169a09a48", "text": "While Kolmogorov complexity is the accepted absolute measure of information content of an individual finite object, a similarly absolute notion is needed for the relation between an individual data sample and an individual model summarizing the information in the data, for example, a finite set (or probability distribution) where the data sample typically came from. The statistical theory based on such relations between individual objects can be called algorithmic statistics, in contrast to ordinary statistical theory that deals with relations between probabilistic ensembles. Since the algorithmic theory deals with individual objects and not with averages over ensembles of objects it is surprising that similar properties hold, albeit sometimes in weaker form. We first recall the notion of algorithmic mutual information between individual objects and show that this information cannot be increased by algorithmic or probabilistic means (as is the case with probabilistic mutual information). We develop the algorithmic theory of typical statistic, sufficient statistic, and minimal sufficient statistic. This theory is based on two-part codes consisting of the code for the statistic (the model embodying the regularities, the meaningful information, in the data) and the model-to-data code. In contrast to the situation in probabilistic statistical theory, the algorithmic relation of (minimal) sufficiency is an absolute relation between the individual model and the individual data sample. We distinguish implicit and explicit descriptions of the models. We give characterizations of algorithmic (a.k.a. Kolmogorov) minimal sufficient statistics for all data samples for both description modes—in the explicit mode under some constraints. We also strengthen and elaborate some earlier results by Shen on the “Kolmogorov structure function” and “absolutely non-stochastic objects”—objects that have no simpler algorithmic (explicit) sufficient statistics and are literally their own algorithmic (explicit) minimal sufficient statistics. We discuss the implication of the results for potential applications.", "title": "" }, { "docid": "43bab96fad8afab1ea350e327a8f7aec", "text": "The traditional databases are not capable of handling unstructured data and high volumes of real-time datasets. Diverse datasets are unstructured lead to big data, and it is laborious to store, manage, process, analyze, visualize, and extract the useful insights from these datasets using traditional database approaches. However, many technical aspects exist in refining large heterogeneous datasets in the trend of big data. 
This paper aims to present a generalized view of complete big data system which includes several stages and key components of each stage in processing the big data. In particular, we compare and contrast various distributed file systems and MapReduce-supported NoSQL databases concerning certain parameters in data management process. Further, we present distinct distributed/cloud-based machine learning (ML) tools that play a key role to design, develop and deploy data models. The paper investigates case studies on distributed ML tools such as Mahout, Spark MLlib, and FlinkML. Further, we classify analytics based on the type of data, domain, and application. We distinguish various visualization tools pertaining three parameters: functionality, analysis capabilities, and supported development environment. Furthermore, we systematically investigate big data tools and technologies (Hadoop 3.0, Spark 2.3) including distributed/cloud-based stream processing tools in a comparative approach. Moreover, we discuss functionalities of several SQL Query tools on Hadoop based on 10 parameters. Finally, We present some critical points relevant to research directions and opportunities according to the current trend of big data. Investigating infrastructure tools for big data with recent developments provides a better understanding that how different tools and technologies apply to solve real-life applications.", "title": "" }, { "docid": "b1fabdbfea2fcffc8071371de8399b69", "text": "Cities across the United States are implementing information communication technologies in an effort to improve government services. One such innovation in e-government is the creation of 311 systems, offering a centralized platform where citizens can request services, report non-emergency concerns, and obtain information about the city via hotline, mobile, or web-based applications. The NYC 311 service request system represents one of the most significant links between citizens and city government, accounting for more than 8,000,000 requests annually. These systems are generating massive amounts of data that, when properly managed, cleaned, and mined, can yield significant insights into the real-time condition of the city. Increasingly, these data are being used to develop predictive models of citizen concerns and problem conditions within the city. However, predictive models trained on these data can suffer from biases in the propensity to make a request that can vary based on socio-economic and demographic characteristics of an area, cultural differences that can affect citizens’ willingness to interact with their government, and differential access to Internet connectivity. Using more than 20,000,000 311 requests together with building violation data from the NYC Department of Buildings and the NYC Department of Housing Preservation and Development; property data from NYC Department of City Planning; and demographic and socioeconomic data from the U.S. Census American Community Survey we develop a two-step methodology to evaluate the propensity to complain: (1) we predict, using a gradient boosting regression model, the likelihood of heating and hot water violations for a given building, and (2) we then compare the actual complaint volume for buildings with predicted violations to quantify discrepancies across the City. 
Our model predicting service request volumes over time will contribute to the efficiency of the 311 system by informing short- and long-term resource allocation strategy and improving the agency’s performance in responding to requests. For instance, the outcome of our longitudinal pattern analysis allows the city to predict building safety hazards early and take action, leading to anticipatory safety and inspection actions. Furthermore, findings will provide novel insight into equity and community engagement through 311, and provide the basis for acknowledging and accounting for bias in machine learning applications trained on 311 data.", "title": "" }, { "docid": "6ac231de51b69685fcb45d4ef2b32051", "text": "This paper deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80-100-mm pipelines in an indoor pipeline environment. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to grip the pipe walls. Unique features of this robot are the caterpillar wheels, the analysis of the four-bar mechanism supporting the treads, a closed-form kinematic approach, and an intuitive user interface. In addition, a new motion planning approach is proposed, which uses springs to interconnect two robot modules and allows the modules to cooperatively navigate through difficult segments of the pipes. Furthermore, an analysis method of selecting optimal compliance to assure functionality and cooperation is suggested. Simulation and experimental results are used throughout the paper to highlight algorithms and approaches.", "title": "" }, { "docid": "c8b47edcc7d4818afad806e8f5307c97", "text": "Local surrogate models, to approximate the local decision boundary of a black-box classifier, constitute one approach to generate explanations for the rationale behind an individual prediction made by the back-box. This paper highlights the importance of defining the right locality, the neighborhood on which a local surrogate is trained, in order to approximate accurately the local black-box decision boundary. Unfortunately, as shown in this paper, this issue is not only a parameter or sampling distribution challenge and has a major impact on the relevance and quality of the approximation of the local black-box decision boundary and thus on the meaning and accuracy of the generated explanation. To overcome the identified problems, quantified with an adapted measure and procedure, we propose to generate surrogate-based explanations for individual predictions based on a sampling centered on particular place of the decision boundary, relevant for the prediction to be explained, rather than on the prediction itself as it is classically done. We evaluate the novel approach compared to state-of-the-art methods and a straightforward improvement thereof on four UCI datasets.", "title": "" }, { "docid": "8844f14e92e2c4aa7df276505af8b7fe", "text": "Tensor completion is a powerful tool used to estimate or recover missing values in multi-way data. It has seen great success in domains such as product recommendation and healthcare. Tensor completion is most often accomplished via low-rank sparse tensor factorization, a computationally expensive non-convex optimization problem which has only recently been studied in the context of parallel computing.
In this work, we study three optimization algorithms that have been successfully applied to tensor completion: alternating least squares (ALS), stochastic gradient descent (SGD), and coordinate descent (CCD++). We explore opportunities for parallelism on shared- and distributed-memory systems and address challenges such as memory- and operation-efficiency, load balance, cache locality, and communication. Among our advancements are an SGD algorithm which combines stratification with asynchronous communication, an ALS algorithm rich in level-3 BLAS routines, and a communication-efficient CCD++ algorithm. We evaluate our optimizations on a variety of real datasets using a modern supercomputer and demonstrate speedups through 1024 cores. These improvements effectively reduce time-to-solution from hours to seconds on real-world datasets. We show that after our optimizations, ALS is advantageous on parallel systems of small-to-moderate scale, while both ALS and CCD++ will provide the lowest time-to-solution on large-scale distributed systems.", "title": "" }, { "docid": "7c93ceb1f71e5ac65c2c0d22f8a36afe", "text": "NEON is a vector instruction set included in a large fraction of new ARM-based tablets and smartphones. This paper shows that NEON supports high-security cryptography at surprisingly high speeds; normally data arrives at lower speeds, giving the CPU time to handle tasks other than cryptography. In particular, this paper explains how to use a single 800MHz Cortex A8 core to compute the existing NaCl suite of high-security cryptographic primitives at the following speeds: 5.60 cycles per byte (1.14 Gbps) to encrypt using a shared secret key, 2.30 cycles per byte (2.78 Gbps) to authenticate using a shared secret key, 527102 cycles (1517/second) to compute a shared secret key for a new public key, 650102 cycles (1230/second) to verify a signature, and 368212 cycles (2172/second) to sign a message. These speeds make no use of secret branches and no use of secret memory addresses.", "title": "" }, { "docid": "04d5824991ada6194f3028a900d7f31b", "text": "In this work, we present a solution to real-time monocular dense mapping. A tightly-coupled visual-inertial localization module is designed to provide metric and high-accuracy odometry. A motion stereo algorithm is proposed to take the video input from one camera to produce local depth measurements with semi-global regularization. The local measurements are then integrated into a global map for noise filtering and map refinement. The global map obtained is able to support navigation and obstacle avoidance for aerial robots through our indoor and outdoor experimental verification. Our system runs at 10Hz on an Nvidia Jetson TX1 by properly distributing computation to CPU and GPU. Through onboard experiments, we demonstrate its ability to close the perception-action loop for autonomous aerial robots. We release our implementation as open-source software1.", "title": "" }, { "docid": "f3a89c01dbbd40663811817ef7ba4be3", "text": "In order to address the mental health disparities that exist for Latino adolescents in the United States, psychologists must understand specific factors that contribute to the high risk of mental health problems in Latino youth. Given the significant percentage of Latino youth who are immigrants or the children of immigrants, acculturation is a key factor in understanding mental health among this population. 
However, limitations in the conceptualization and measurement of acculturation have led to conflicting findings in the literature. Thus, the goal of the current review is to examine and critique research linking acculturation and mental health outcomes for Latino youth, as well as to integrate individual, environmental, and family influences of this relationship. An integrated theoretical model is presented and implications for clinical practice and future directions are discussed.", "title": "" }, { "docid": "d41bbac7ec2596fe2a6503a0ac468947", "text": "Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another. Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn’t have control over the synthesized output. Moreover, the style transfer depends on the hyper-parameters of the model with varying “optimum” for different input images. Therefore, if the stylized output is not appealing to the user, she/he has to try multiple models or retrain one with different hyper-parameters to get a favorite stylization. In this paper, we address these issues by proposing a novel method which allows adjustment of crucial hyper-parameters, after the training and in real-time, through a set of manually adjustable parameters. These parameters enable the user to modify the synthesized outputs from the same pair of style/content images, in search of a favorite stylized image. Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyperparameters. We also demonstrate how these parameters can be randomized to generate results which are diverse but still very similar in style and content.", "title": "" }, { "docid": "1fb87bc370023dc3fdfd9c9097288e71", "text": "Protein is essential for living organisms, but digestibility of crude protein is poorly understood and difficult to predict. Nitrogen is used to estimate protein content because nitrogen is a component of the amino acids that comprise protein, but a substantial portion of the nitrogen in plants may be bound to fiber in an indigestible form. To estimate the amount of crude protein that is unavailable in the diets of mountain gorillas (Gorilla beringei) in Bwindi Impenetrable National Park, Uganda, foods routinely eaten were analyzed to determine the amount of nitrogen bound to the acid-detergent fiber residue. The amount of fiber-bound nitrogen varied among plant parts: herbaceous leaves 14.5+/-8.9% (reported as a percentage of crude protein on a dry matter (DM) basis), tree leaves (16.1+/-6.7% DM), pith/herbaceous peel (26.2+/-8.9% DM), fruit (34.7+/-17.8% DM), bark (43.8+/-15.6% DM), and decaying wood (85.2+/-14.6% DM). When crude protein and available protein intake of adult gorillas was estimated over a year, 15.1% of the dietary crude protein was indigestible. These results indicate that the proportion of fiber-bound protein in primate diets should be considered when estimating protein intake, food selection, and food/habitat quality.", "title": "" }, { "docid": "7d01463ce6dd7e7e08ebaf64f6916b1d", "text": "An effective location algorithm, which considers nonline-of-sight (NLOS) propagation, is presented. By using a new variable to replace the square term, the problem becomes a mathematical programming problem, and then the NLOS propagation’s effect can be evaluated. 
Compared with other methods, the proposed algorithm has high accuracy.", "title": "" }, { "docid": "bdd98774fd8e73ee41ea808d01bc7a03", "text": "We present a new approach to transfer grasp configurations from prior example objects to novel objects. We assume the novel and example objects have the same topology and similar shapes. We perform 3D segmentation on these objects using geometric and semantic shape characteristics. We compute a grasp space for each part of the example object using active learning. We build bijective contact mapping between these model parts and compute the corresponding grasps for novel objects. Finally, we assemble the individual parts and use local replanning to adjust grasp configurations while maintaining its stability and physical constraints. Our approach is general, can handle all kind of objects represented using mesh or point cloud and a variety of robotic hands.", "title": "" }, { "docid": "2146687ac4ac019be6cfc828208187a9", "text": "Researchers and program developers in medical education presently face the challenge of implementing and evaluating curricula that teach medical students and house staff how to effectively and respectfully deliver health care to the increasingly diverse populations of the United States. Inherent in this challenge is clearly defining educational and training outcomes consistent with this imperative. The traditional notion of competence in clinical training as a detached mastery of a theoretically finite body of knowledge may not be appropriate for this area of physician education. Cultural humility is proposed as a more suitable goal in multicultural medical education. Cultural humility incorporates a lifelong commitment to self-evaluation and self-critique, to redressing the power imbalances in the patient-physician dynamic, and to developing mutually beneficial and nonpaternalistic clinical and advocacy partnerships with communities on behalf of individuals and defined populations.", "title": "" }, { "docid": "b77b4786128a214b9d91caec1232d513", "text": "FANET are wireless ad hoc networks on unmanned aerial vehicles, and are characterized by high nodes mobility, dynamically changing topology and movement in 3D-space. FANET routing is an extremely complicated problem. The article describes the bee algorithm and the routing process based on the mentioned algorithm in ad hoc networks. The classification of FANET routing methods is given. The overview of the routing protocols based on the bee colony algorithms is provided. Owing to the experimental analysis, bio-inspired algorithms based on the bee colony were proved to show good results, having better efficiency than traditional FANET routing algorithms in most cases.", "title": "" }, { "docid": "6b5d153443e204bdf9a97d74a0be8adb", "text": "It is difficult to manually identify opportunities for enhancing data locality. To address this problem, we extended the HPCToolkit performance tools to support data-centric profiling of scalable parallel programs. Our tool uses hardware counters to directly measure memory access latency and attributes latency metrics to both variables and instructions. Different hardware counters provide insight into different aspects of data locality (or lack thereof). Unlike prior tools for data-centric analysis, our tool employs scalable measurement, analysis, and presentation methods that enable it to analyze the memory access behavior of scalable parallel programs with low runtime and space overhead. 
We demonstrate the utility of HPCToolkit's new data-centric analysis capabilities with case studies of five well-known benchmarks. In each benchmark, we identify performance bottlenecks caused by poor data locality and demonstrate non-trivial performance optimizations enabled by this guidance.", "title": "" }, { "docid": "a76b0892d32af28833819860ea8bd9ff", "text": "Understanding how to group a set of binary files into the piece of software they belong to is highly desirable for software profiling, malware detection, or enterprise audits, among many other applications. Unfortunately, it is also extremely challenging: there is absolutely no uniformity in the ways different applications rely on different files, in how binaries are signed, or in the versioning schemes used across different pieces of software. In this paper, we show that, by combining information gleaned from a large number of endpoints (millions of computers), we can accomplish large-scale application identification automatically and reliably. Our approach relies on collecting metadata on billions of files every day, summarizing it into much smaller \"sketches\", and performing approximate k-nearest neighbor clustering on non-metric space representations derived from these sketches. We design and implement our proposed system using Apache Spark, show that it can process billions of files in a matter of hours, and thus could be used for daily processing. We further show our system manages to successfully identify which files belong to which application with very high precision, and adequate recall.", "title": "" }, { "docid": "fe5e812801390b54e7a7a524d5f0e0ef", "text": "OBJECTIVE\nAcute pancreatitis represents a spectrum of disease ranging from a mild, self-limited course requiring only brief hospitalization to a rapidly progressive, fulminant illness resulting in the multiple organ dysfunction syndrome (MODS), with or without accompanying sepsis. The goal of this consensus statement is to provide recommendations regarding the management of the critically ill patient with severe acute pancreatitis (SAP).\n\n\nDATA SOURCES AND METHODS\nAn international consensus conference was held in April 2004 to develop recommendations for the management of the critically ill patient with SAP. Evidence-based recommendations were developed by a jury of ten persons representing surgery, internal medicine, and critical care after conferring with experts and reviewing the pertinent literature to address specific questions concerning the management of patients with severe acute pancreatitis.\n\n\nDATA SYNTHESIS\nThere were a total of 23 recommendations developed to provide guidance to critical care clinicians caring for the patient with SAP. Topics addressed were as follows. 1) When should the patient admitted with acute pancreatitis be monitored in an ICU or stepdown unit? 2) Should patients with severe acute pancreatitis receive prophylactic antibiotics? 3) What is the optimal mode and timing of nutritional support for the patient with SAP? 4) What are the indications for surgery in acute pancreatitis, what is the optimal timing for intervention, and what are the roles for less invasive approaches including percutaneous drainage and laparoscopy? 5) Under what circumstances should patients with gallstone pancreatitis undergo interventions for clearance of the bile duct? 6) Is there a role for therapy targeting the inflammatory response in the patient with SAP? 
Some of the recommendations included a recommendation against the routine use of prophylactic systemic antibacterial or antifungal agents in patients with necrotizing pancreatitis. The jury also recommended against pancreatic debridement or drainage for sterile necrosis, limiting debridement or drainage to those with infected pancreatic necrosis and/or abscess confirmed by radiologic evidence of gas or results or fine needle aspirate. Furthermore, the jury recommended that whenever possible, operative necrosectomy and/or drainage be delayed at least 2-3 wk to allow for demarcation of the necrotic pancreas.\n\n\nCONCLUSIONS\nThis consensus statement provides 23 different recommendations concerning the management of patients with SAP. These recommendations differ in several ways from previous recommendations because of the release of recent data concerning the management of these patients and also because of the focus on the critically ill patient. There are a number of important questions that could not be answered using an evidence-based approach, and areas in need of further research were identified.", "title": "" }, { "docid": "adc51e9fdbbb89c9a47b55bb8823c7fe", "text": "State-of-the-art model counters are based on exhaustive DPLL algorithms, and have been successfully used in probabilistic reasoning, one of the key problems in AI. In this article, we present a new exhaustive DPLL algorithm with a formal semantics, a proof of correctness, and a modular design. The modular design is based on the separation of the core model counting algorithm from SAT solving techniques. We also show that the trace of our algorithm belongs to the language of Sentential Decision Diagrams (SDDs), which is a subset of Decision-DNNFs, the trace of existing state-of-the-art model counters. Still, our experimental analysis shows comparable results against state-of-the-art model counters. Furthermore, we obtain the first top-down SDD compiler, and show orders-of-magnitude improvements in SDD construction time against the existing bottom-up SDD compiler.", "title": "" } ]
scidocsrr
41f6ebabdc330c179320754f8d5ed447
The Pricing Strategy Guideline Framework for SaaS Vendors
[ { "docid": "30e229f91456c3d7eb108032b3470b41", "text": "Software as a service (SaaS) is a rapidly growing model of software licensing. In contrast to traditional software where users buy a perpetual-use license, SaaS users buy a subscription from the publisher. Whereas traditional software publishers typically release new product features as part of new versions of software once in a few years, publishers using SaaS have an incentive to release new features as soon as they are completed. We show that this property of the SaaS licensing model leads to greater investment in product development under most conditions. This increased investment leads to higher software quality in equilibrium under SaaS compared to perpetual licensing. The software publisher earns greater profits under SaaS while social welfare is also higher", "title": "" }, { "docid": "43bc62e674ae5c8785d00406b307b478", "text": "We explore the theoretical foundations of value creation in e-business by examining how 59 American and European e-businesses that have recently become publicly traded corporations create value. We observe that in e-business new value can be created by the ways in which transactions are enabled. Grounded in the rich data obtained from case study analyses and in the received theory in entrepreneurship and strategic management, we develop a model of the sources of value creation. The model suggests that the value creation potential of e-businesses hinges on four interdependent dimensions, namely: efficiency, complementarities, lock-in, and novelty. Our findings suggest that no single entrepreneurship or strategic management theory can fully explain the value creation potential of e-business. Rather, an integration of the received theoretical perspectives on value creation is needed. To enable such an integration, we offer the business model construct as a unit of analysis for future research on value creation in e-business. A business model depicts the design of transaction content, structure, and governance so as to create value through the exploitation of business opportunities. We propose that a firm’s business model is an important locus of innovation and a crucial source of value creation for the firm and its suppliers, partners, and customers. Copyright  2001 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "93dd7ecb1707f7b404e79d79dac0a7ba", "text": "Information quality has received great attention from both academics and practitioners since it plays an important role in decision-making process. The need of high information quality in organization is increase in order to reach business excellent. Total Information Quality Management (TIQM) offers solution to solve information quality problems through a method for building an effective information quality management (IQM) with continuous improvement in whole process. However, TIQM does not have a standard measure in determining the process maturity level. Thus causes TIQM process maturity level cannot be determined exactly so that the assessment and improvement process will be difficult to be done. The contribution of this research is the process maturity indicators and measures based on TIQM process and Capability Maturity Model (CMM) concepts. It have been validated through an Expert Judgment using the Delphi method and implemented through a case study.", "title": "" }, { "docid": "552276c35889e4cf0492b164a58e25c5", "text": "the numbers of the botnet attacks are increasing day by day and the detection of botnet spreading in the network has become very challenging. Bots are having specific characteristics in comparison of normal malware as they are controlled by the remote master server and usually don’t show their behavior like normal malware until they don’t receive any command from their master server. Most of time bot malware are inactive, hence it is very difficult to detect. Further the detection or tracking of the network of theses bots requires an infrastructure that should be able to collect the data from a diverse range of data sources and correlate the data to bring the bigger picture in view.In this paper, we are sharing our experience of botnet detection in the private network as well as in public zone by deploying the nepenthes honeypots. The automated framework for malware collection using nepenthes and analysis using antivirus scan are discussed. The experimental results of botnet detection by enabling nepenthes honeypots in network are shown. Also we saw that existing known bots in our network can be detected.", "title": "" }, { "docid": "b0bfa683c37ad25600c414c7c082962b", "text": "Complexity in modern vehicles has increased dramatically during the last years due to new features and applications. Modern vehicles are connected to the Internet as well as to other vehicles in close proximity and to the environment for different novel comfort services and safety-related applications. Enabling such services and applications requires wireless interfaces to the vehicle and therefore leads to open interfaces to the outside world. Attackers can use those interfaces to impair the privacy of the vehicle owner or to take control (of parts of) the vehicle, which strongly endangers the safety of the passengers as well as other road users. To avoid such attacks and to ensure the safety of modern vehicles, sophisticated structured processes and methods are needed. In this paper we propose a security metric to analyse cyberphysical systems (CPS) in a structured way. Its application leads to a secure system configuration with comparable as well as reusable results. Additionally, the security metric can be used to support the conceptual phase for the development of CPS specified in the new SAE security standard SAE J3061. 
A case study has been carried out to illustrate the application of the security metric.", "title": "" }, { "docid": "79ca455db7e7348000c6590a442f9a4c", "text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis upon flap systems. It discusses existing electro-hydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance and life cycle costs. The paper then progresses to describe a full scale actuation demonstrator of the flap system, including the high speed electrical drive, step down gearbox and flaps. Detailed descriptions are given of the fault tolerant motor, power electronics, control architecture and position sensor systems, along with a range of test results, demonstrating the system in operation", "title": "" }, { "docid": "deccc7ba3b930a9c56a377053699a46b", "text": "Preview: Some traditional measurements of forecast accuracy are unsuitable for intermittent-demand data because they can give infinite or undefined values. Rob Hyndman summarizes these forecast accuracy metrics and explains their potential failings. He also introduces a new metric—the mean absolute scaled error (MASE)—which is more appropriate for intermittent-demand data. More generally, he believes that the MASE should become the standard metric for comparing forecast accuracy across multiple time series.", "title": "" }, { "docid": "81c0f095cb17087e3f3aace0f4bac34e", "text": "Numerous studies investigated executive functioning (EF) problems in people with autism spectrum disorders (ASD) using laboratory EF tasks. As laboratory task performances often differ from real life observations, the current study focused on EF in everyday life of 118 children and adolescents with ASD (6-18 years). We investigated age-related and individual differences in EF problems as reported by parents on the Behavioral Rating Inventory Executive Functions (BRIEF: Gioia et al. in Behavior rating inventory of executive function. Psychological Assessment Resources, Odesse 2000), and examined the association with autism severity. Inhibition problems were mostly found in the youngest group (6- to 8-year-olds), whereas problems with planning where more evident for 12- to 14-year-olds as compared to 9- to 11-year-olds. In a subsample of participants meeting the ADOS ASD cut-off criteria the age related differences in planning were absent, while problems with cognitive flexibility were less apparent in 15- to 18-year-olds, compared to 9- to 11-, and 12- to 14-year olds. EF problems surpassing the clinical cutoff were only observed in 20% (planning) to 51% (cognitive flexibility) of the children and adolescents, and no relation was found with ASD symptom severity. This underlines the heterogeneous nature of ASD.", "title": "" }, { "docid": "1abcede6d3044e5550df404cfb7c87a4", "text": "There is intense interest in graphene in fields such as physics, chemistry, and materials science, among others. Interest in graphene's exceptional physical properties, chemical tunability, and potential for applications has generated thousands of publications and an accelerating pace of research, making review of such research timely. 
Here is an overview of the synthesis, properties, and applications of graphene and related materials (primarily, graphite oxide and its colloidal suspensions and materials made from them), from a materials science perspective.", "title": "" }, { "docid": "d6cd21d21d7a0522db4156a3c45548f5", "text": "Context: There are numerous studies on effort estimation in Agile Software Development (ASD) and the state of the art in this area has been recently documented in a Systematic Literature Review (SLR). However, to date there are no studies on the state of the practice in this area, focusing on similar issues to those investigated in the above-mentioned SLR. Objectives: The aim of this paper is to report on the state of the practice on effort estimation in ASD, focusing on a wide range of aspects such as the estimation techniques and effort predictors used, to name a few. Method: A survey was carried out using as instrument an on-line questionnaire answered by agile practitioners who have experience in effort estimation. Results: Data was collected from 60 agile practitioners from 16 different countries, and the main findings are: 1) Planning poker (63%), analogy (47%) and expert judgment (38%) are frequently practiced estimation techniques in ASD; 2) Story points is the most frequently (62%) employed size metric, used solo or in combination with other metrics (e.g., function points); 3) Team's expertise level and prior experience are most commonly used cost drivers; 4) 52% of the respondents believe that their effort estimates on average are under/over estimated by an error of 25% or more; 5) Most agile teams take into account implementation and testing activities during effort estimation; and 6) Estimation is mostly performed at sprint and release planning levels in ASD. Conclusions: Estimation techniques that rely on experts' subjective assessment are the ones used the most in ASD, with effort underestimation being the dominant trend. Further, the use of multiple techniques in combination and story points seem to present a positive association with estimation accuracy, and team-related cost drivers are the ones used by most agile teams. Finally, requirements and management related issues are perceived as the main reasons for inaccurate estimates.", "title": "" }, { "docid": "1d3192e66e042e67dabeae96ca345def", "text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.", "title": "" }, { "docid": "4f50f1e3b42bfadb95988200b1b1abb0", "text": "Agent-based computational economics (ACE) has received increased attention and importance over recent years. 
Some researchers have attempted to develop an agent-based model of the stock market to investigate the behavior of investors and provide decision support for innovation of trading mechanisms. However, challenges remain regarding the design and implementation of such a model, due to the complexity of investors, financial information, policies, and so on. This paper will describe a novel architecture to model the stock market by utilizing stock agent, finance agent and investor agent. Each type of investor agent has a different investment strategy and learning method. A prototype system for supporting stock market simulation and evolution is also presented to demonstrate the practicality and feasibility of the proposed intelligent agent-based artificial stock market system architecture. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0d382cf8e63e65521e600f6f91920eb1", "text": "Bioactive plant secondary products are frequently the drivers of complex rhizosphere interactions, including those with other plants, herbivores and microbiota. These chemically diverse molecules typically accumulate in a highly regulated manner in specialized plant tissues and organelles. We studied the production and localization of bioactive naphthoquinones (NQs) in the roots of Echium plantagineum, an invasive endemic weed in Australia. Roots of E. plantagineum produced red-coloured NQs in the periderm of primary and secondary roots, while seedling root hairs exuded NQs in copious quantities. Confocal imaging and microspectrofluorimetry confirmed that bioactive NQs were deposited in the outer layer of periderm cells in mature roots, resulting in red colouration. Intracellular examination revealed that periderm cells contained numerous small red vesicles for storage and intracellular transport of shikonins, followed by subsequent extracellular deposition. Periderm and root hair extracts of field- and phytotron-grown plants were analysed by UHPLC/Q-ToF MS (ultra high pressure liquid chromatography coupled to quadrupole time of flight mass spectrometry) and contained more than nine individual NQs, with dimethylacrylshikonin, and phytotoxic shikonin, deoxyshikonin and acetylshikonin predominating. In seedlings, shikonins were first found 48h following germination in the root-hypocotyl junction, as well as in root hair exudates. In contrast, the root cortices of both seedling and mature root tissues were devoid of NQs. SPRE (solid phase root zone extraction) microprobes strategically placed in soil surrounding living E. plantagineum plants successfully extracted significant levels of bioactive shikonins from living roots, rhizosphere and bulk soil surrounding roots. These findings suggest important roles for accumulation of shikonins in the root periderm and subsequent rhizodeposition in plant defence, interference, and invasion success.", "title": "" }, { "docid": "76e407bc17d0317eae8ff004dc200095", "text": "Major advances have recently been made in merging language and vision representations. But most tasks considered so far have confined themselves to the processing of objects and lexicalised relations amongst objects (content words). We know, however, that humans (even preschool children) can abstract over raw data to perform certain types of higher-level reasoning, expressed in natural language by function words. A case in point is given by their ability to learn quantifiers, i.e. expressions like few, some and all. 
From formal semantics and cognitive linguistics, we know that quantifiers are relations over sets which, as a simplification, we can see as proportions. For instance, in most fish are red, most encodes the proportion of fish which are red fish. In this paper, we study how well current language and vision strategies model such relations. We show that state-of-the-art attention mechanisms coupled with a traditional linguistic formalisation of quantifiers gives best performance on the task. Additionally, we provide insights on the role of 'gist' representations in quantification. A 'logical' strategy to tackle the task would be to first obtain a numerosity estimation for the two involved sets and then compare their cardinalities. We however argue that precisely identifying the composition of the sets is not only beyond current state-of-the-art models but perhaps even detrimental to a task that is most efficiently performed by refining the approximate numerosity estimator of the system.", "title": "" }, { "docid": "11921e86e16931d5d36c3608cb6b0cd2", "text": "In this paper we report a case study on how firms realize the business value of CRM by developing CRM-enabled ambidexterity to simultaneously pursue stability and develop adaptability. Results show that as the focal firm used CRM as a platform for standardizing operations and building routines, the top management team (TMT) and the operation team worked together to reflexively monitor the status of CRM-enabled capabilities and the usage of CRM. Such reflexive monitoring led to proactive changes in CRM and CRM usage in response to market changes and market needs, ensuring adaptability. In this way CRM enabled organizational ambidexterity through mechanisms of capabilities building and reflexive monitoring. By examining the previously unexplored relationship between CRM use and organizational ambidexterity, this study contributes to literature in both ambidexterity and CRM business value.", "title": "" }, { "docid": "581ba39d86678aa23cd9348bbd997c72", "text": "We present a system to track the positions of multiple persons in a scene from overlapping cameras. The distinguishing aspect of our method is a novel, two-step approach that jointly estimates person position and track assignment. The proposed approach keeps solving the assignment problem tractable, while taking into account how different assignments influence feature measurement. In a hypothesis generation stage, the similarity between a person at a particular position and an active track is based on a subset of cues (appearance, motion) that are guaranteed observable in the camera views. This allows for efficient computation of the K-best joint estimates for person position and track assignment under an approximation of the likelihood function. In a subsequent hypothesis verification stage, the known person positions associated with these K-best solutions are used to define a larger set of actually visible cues, which enables a re-ranking of the found assignments using the full likelihood function. We demonstrate that our system outperforms the state-of-the-art on four challenging multi-person datasets (indoor and outdoor), involving 3–5 overlapping cameras and up to 23 persons simultaneously. Two of these datasets are novel: we make the associated images and annotations public to facilitate", "title": "" }, { "docid": "c7351e8ce6d32b281d5bd33b245939c6", "text": "In TREC 2002 the Berkeley group participated only in the English-Arabic cross-language retrieval (CLIR) track. 
One Arabic monolingual run and three English-Arabic cross-language runs were submitted. Our approach to the crosslanguage retrieval was to translate the English topics into Arabic using online English-Arabic machine translation systems. The four official runs are named as BKYMON, BKYCL1, BKYCL2, and BKYCL3. The BKYMON is the Arabic monolingual run, and the other three runs are English-to-Arabic cross-language runs. This paper reports on the construction of an Arabic stoplist and two Arabic stemmers, and the experiments on Arabic monolingual retrieval, English-to-Arabic cross-language retrieval.", "title": "" }, { "docid": "4be5f35876daebc0c00528bede15b66c", "text": "Information Extraction (IE) is concerned with mining factual structures from unstructured text data, including entity and relation extraction. For example, identifying Donald Trump as “person” and Washington D.C. as “location”, and understand the relationship between them (say, Donald Trump spoke at Washington D.C.), from a specific sentence. Typically, IE systems rely on large amount of training data, primarily acquired via human annotation, to achieve the best performance. But since human annotation is costly and non-scalable, the focus has shifted to adoption of a new strategy Distant Supervision [1]. Distant supervision is a technique that can automatically extract labeled training data from existing knowledge bases without human efforts. However the training data generated by distant supervision is context-agnostic and can be very noisy. Moreover, we also observe the difference between the quality of training examples in terms of to what extent it infers the target entity/relation type. In this project, we focus on removing the noise and identifying the quality difference in the training data generated by distant supervision, by leveraging the feedback signals from one of IE’s downstream applications, QA, to improve the performance of one of the state-of-the-art IE framework, CoType [3]. Keywords—Data Mining, Relation Extraction, Question Answering.", "title": "" }, { "docid": "cc6c485fdd8d4d61c7b68bfd94639047", "text": "Passive geolocaton of communication emitters provides great benefits to military and civilian surveillance and security operations. Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) measurement combination for stationary emitters may be obtained by sensors mounted on mobile platforms, for example on a pair of UAVs. Complex Ambiguity Function (CAF) of received complex signals can be efficiently calculated to provide required TDOA / FDOA measurement combination. TDOA and FDOA measurements are nonlinear in the sense that the emitter uncertainty given measurements in the Cartesian domain is non-Gaussian. Multiple non-linear measurements of emitter location need to be fused to provide the geolocation estimates. Gaussian Mixture Measurement (GMM) filter fuses nonlinear measurements as long as the uncertainty of each measurement in the surveillance (Cartesian) space is modeled by a Gaussian Mixture. Simulation results confirm this approach and compare it with geolocation using Bearings Only (BO) measurements.", "title": "" }, { "docid": "27d2e87f175568d08f9391f0a31b53ee", "text": "An inverted-F antenna for heptaband WWAN/LTE operations is proposed and applied for the mobile phone surrounded by a metal rim with two slits. The proposed strategy provides a simple and effective solution for designing the multiband antenna for metal-rimmed mobile phone applications. 
In order to obtain remarkable radiation performance of the antenna, the metal rim with a height of 5 mm is cut into three parts. By taking advantage of this three-part frame, the proposed antenna is capable of covering GSM850/900/DCS/PCS/UMTS 2100/LTE 2300/2500 operating bands. The operation mechanisms of the proposed antenna have been analyzed carefully. To demonstrate the above method, the proposed antenna is fabricated, and the characteristics such as reflection coefficients, radiation efficiency, and radiation patterns have been measured. Reasonable results show that the proposed antenna is a candidate for WWAN/LTE mobile phone applications.", "title": "" }, { "docid": "cc5e5efde794b1b02033c490527732d3", "text": "In this paper we present hand and foot based immersive multimodal interaction approach for handheld devices. A smart phone based immersive football game is designed as a proof of concept. Our proposed method combines input modalities (i.e. hand & foot) and provides a coordinated output to both modalities along with audio and video. In this work, human foot gesture is detected and tracked using template matching method and Tracking-Learning-Detection (TLD) framework. We evaluated our system's usability through a user study in which we asked participants to evaluate proposed interaction method. Our preliminary evaluation demonstrates the efficiency and ease of use of proposed multimodal interaction approach.", "title": "" }, { "docid": "436900539406faa9ff34c1af12b6348d", "text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.", "title": "" } ]
scidocsrr
cc8686e6a5d73abec5be6812513e358f
Bilingual Chronological Classification of Hafez's Poems
[ { "docid": "e53de7a588d61f513a77573b7b27f514", "text": "In the past, there have been dozens of studies on automatic authorship classification, and many of these studies concluded that the writing style is one of the best indicators for original authorship. From among the hundreds of features which were developed, syntactic features were best able to reflect an author's writing style. However, due to the high computational complexity for extracting and computing syntactic features, only simple variations of basic syntactic features such as function words, POS(Part of Speech) tags, and rewrite rules were considered. In this paper, we propose a new feature set of k-embedded-edge subtree patterns that holds more syntactic information than previous feature sets. We also propose a novel approach to directly mining them from a given set of syntactic trees. We show that this approach reduces the computational burden of using complex syntactic structures as the feature set. Comprehensive experiments on real-world datasets demonstrate that our approach is reliable and more accurate than previous studies.", "title": "" }, { "docid": "69d3c943755734903b9266ca2bd2fad1", "text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.", "title": "" }, { "docid": "fed956373dc9c477d393be5087e8fbc7", "text": "We develop a quantitative method to assess the style of American poems and to visualize a collection of poems in relation to one another. Qualitative poetry criticism helped guide our development of metrics that analyze various orthographic, syntactic, and phonemic features. These features are used to discover comprehensive stylistic information from a poem's multi-layered latent structure, and to compute distances between poems in this space. Visualizations provide ready access to the analytical components. We demonstrate our method on several collections of poetry, showing that it better delineates poetry style than the traditional word-occurrence features that are used in typical text analysis algorithms. Our method has potential applications to academic research of texts, to research of the intuitive personal response to poetry, and to making recommendations to readers based on their favorite poems.", "title": "" } ]
[ { "docid": "733b998017da30fe24521158a6aaa749", "text": "Memristor crossbars were fabricated at 40 nm half-pitch, using nanoimprint lithography on the same substrate with Si metal-oxide-semiconductor field effect transistor (MOS FET) arrays to form fully integrated hybrid memory resistor (memristor)/transistor circuits. The digitally configured memristor crossbars were used to perform logic functions, to serve as a routing fabric for interconnecting the FETs and as the target for storing information. As an illustrative demonstration, the compound Boolean logic operation (A AND B) OR (C AND D) was performed with kilohertz frequency inputs, using resistor-based logic in a memristor crossbar with FET inverter/amplifier outputs. By routing the output signal of a logic operation back onto a target memristor inside the array, the crossbar was conditionally configured by setting the state of a nonvolatile switch. Such conditional programming illuminates the way for a variety of self-programmed logic arrays, and for electronic synaptic computing.", "title": "" }, { "docid": "3891138c186fa72cdf8a19ef6be33638", "text": "In the past decade, internet of things (IoT) has been a focus of research. Security and privacy are the key issues for IoT applications, and still face some enormous challenges. In order to facilitate this emerging domain, we in brief review the research progress of IoT, and pay attention to the security. By means of deeply analyzing the security architecture and features, the security requirements are given. On the basis of these, we discuss the research status of key technologies including encryption mechanism, communication security, protecting sensor data and cryptographic algorithms, and briefly outline the challenges.", "title": "" }, { "docid": "35060ab7be361f6158bccb4b2ffe0b6b", "text": "In recent years, the potential of stem cell research for tissue engineering-based therapies and regenerative medicine clinical applications has become well established. In 2006, Chung pioneered the first entire organ transplant using adult stem cells and a scaffold for clinical evaluation. With this a new milestone was achieved, with seven patients with myelomeningocele receiving stem cell-derived bladder transplants resulting in substantial improvements in their quality of life. While a bladder is a relatively simple organ, the breakthrough highlights the incredible benefits that can be gained from the cross-disciplinary nature of tissue engineering and regenerative medicine (TERM) that encompasses stem cell research and stem cell bioprocessing. Unquestionably, the development of bioprocess technologies for the transfer of the current laboratory-based practice of stem cell tissue culture to the clinic as therapeutics necessitates the application of engineering principles and practices to achieve control, reproducibility, automation, validation and safety of the process and the product. The successful translation will require contributions from fundamental research (from developmental biology to the 'omics' technologies and advances in immunology) and from existing industrial practice (biologics), especially on automation, quality assurance and regulation. The timely development, integration and execution of various components will be critical-failures of the past (such as in the commercialization of skin equivalents) on marketing, pricing, production and advertising should not be repeated. 
This review aims to address the principles required for successful stem cell bioprocessing so that they can be applied deftly to clinical applications.", "title": "" }, { "docid": "afdc57b5d573e2c99c73deeef3c2fd5f", "text": "The purpose of this article is to consider oral reading fluency as an indicator of overall reading competence. We begin by examining theoretical arguments for supposing that oral reading fluency may reflect overall reading competence. We then summarize several studies substantiating this phenomenon. Next, we provide an historical analysis of the extent to which oral reading fluency has been incorporated into measurement approaches during the past century. We conclude with recommendations about the assessment of oral reading fluency for research and practice.", "title": "" }, { "docid": "c11f1b087955db1cac8c6350ad8a256e", "text": "Cloud computing enables users to consume various IT resources in an on-demand manner, and with low management overhead. However, customers can face new security risks when they use cloud computing platforms. In this paper, we focus on one such threat—the co-resident attack, where malicious users build side channels and extract private information from virtual machines co-located on the same server. Previous works mainly attempt to address the problem by eliminating side channels. However, most of these methods are not suitable for immediate deployment due to the required modifications to current cloud platforms. We choose to solve the problem from a different perspective, by studying how to improve the virtual machine allocation policy, so that it is difficult for attackers to co-locate with their targets. Specifically, we (1) define security metrics for assessing the attack; (2) model these metrics, and compare the difficulty of achieving co-residence under three commonly used policies; (3) design a new policy that not only mitigates the threat of attack, but also satisfies the requirements for workload balance and low power consumption; and (4) implement, test, and prove the effectiveness of the policy on the popular open-source platform OpenStack.", "title": "" }, { "docid": "334fec816cca1b8da5ca632c2ce58754", "text": "Image segmentation is often required as a preliminary and indispensable stage in the computer aided medical image process, particularly during the clinical analysis of magnetic resonance (MR) brain image. Fuzzy c-means (FCM) clustering algorithm has been widely used in many medical image segmentations. However, the conventionally standard FCM algorithm is sensitive to noise because of not taking into account the spatial information. To overcome the above problem, a modified FCM algorithm (called mFCM later) for MRI brain image segmentation is presented in this paper. The algorithm is realized by incorporating the spatial neighborhood information into the standard FCM algorithm and modifying the membership weighting of each cluster. The proposed algorithm is applied to both artificial synthesized image and real image. 
Segmentation results not only on synthesized image but also MRI brain image which degraded by Gaussian noise and salt-pepper noise demonstrates that the presented algorithm performs more robust to noise than the standard FCM algorithm.", "title": "" }, { "docid": "3b88cd186023cc5d4a44314cdb521d0e", "text": "RATIONALE, AIMS AND OBJECTIVES\nThis article aims to provide evidence to guide multidisciplinary clinical practitioners towards successful initiation and long-term maintenance of oral feeding in preterm infants, directed by the individual infant maturity.\n\n\nMETHOD\nA comprehensive review of primary research, explorative work, existing guidelines, and evidence-based opinions regarding the transition to oral feeding in preterm infants was studied to compile this document.\n\n\nRESULTS\nCurrent clinical hospital practices are described and challenged and the principles of cue-based feeding are explored. \"Traditional\" feeding regimes use criteria, such as the infant's weight, gestational age and being free of illness, and even caregiver intuition to initiate or delay oral feeding. However, these criteria could compromise the infant and increase anxiety levels and frustration for parents and caregivers. Cue-based feeding, opposed to volume-driven feeding, lead to improved feeding success, including increased weight gain, shorter hospital stay, fewer adverse events, without increasing staff workload while simultaneously improving parents' skills regarding infant feeding. Although research is available on cue-based feeding, an easy-to-use clinical guide for practitioners could not be found. A cue-based infant feeding regime, for clinical decision making on providing opportunities to support feeding success in preterm infants, is provided in this article as a framework for clinical reasoning.\n\n\nCONCLUSIONS\nCue-based feeding of preterm infants requires care providers who are trained in and sensitive to infant cues, to ensure optimal feeding success. An easy-to-use clinical guideline is presented for implementation by multidisciplinary team members. This evidence-based guideline aims to improve feeding outcomes for the newborn infant and to facilitate the tasks of nurses and caregivers.", "title": "" }, { "docid": "0abe21f9fd9e484004ba659bcdb71da8", "text": "The development and manifestation of gratitude in youth is unclear. We examined the effects of a grateful outlook on subjective well-being and other outcomes of positive psychological functioning in 221 early adolescents. Eleven classes were randomly assigned to either a gratitude, hassles, or control condition. Results indicated that counting blessings was associated with enhanced self-reported gratitude, optimism, life satisfaction, and decreased negative affect. Feeling grateful in response to aid mediated the relationship between experimental condition and general gratitude at the 3-week follow-up. The most significant finding was the robust relationship between gratitude and satisfaction with school experience at both the immediate post-test and 3-week follow-up. Counting blessings seems to be an effective intervention for well-being enhancement in early adolescents.", "title": "" }, { "docid": "1fa087607c8acd16394299cd2cc82a82", "text": "Purpose – Mobile commerce (m-commerce) represents a new area of business opportunity. Past research has often focused on customer acceptance and its antecedents, rather than factors that build trust in m-commerce. 
The purpose of this paper is to provide an explanation of factors influencing customer trust towards vendors on the mobile internet. Design/methodology/approach – M-commerce relies on mobile technology and well-maintained service quality. This paper has applied the service quality model (SERVQUAL) and technology acceptance model (TAM), coupled with proposed quality factors in relation to m-commerce that, according to the literature, influence customer trust, to empirically test the formation of trust. The proposed model was empirically evaluated using online survey data from 212 experienced m-commerce customers. Findings – The results showed that despite customisation, brand image and satisfaction all directly affecting customer trust towards the vendor in m-commerce, customisation and brand image equally had a stronger direct effect on trust formation. In addition, interactivity and responsiveness had no direct impact, but had an indirect impact via satisfaction on trust towards the vendor. Practical implications – This paper contributes a theoretical understanding of factors that activate the development of trust towards the vendor. For vendors in general the results enable them to better develop customer trust in m-commerce. Originality/value – The paper verifies the effects of satisfaction and proposed quality factors on customer confidence in m-commerce. Moreover, this article validates the determinants of satisfaction.", "title": "" }, { "docid": "2eb344b6701139be184624307a617c1b", "text": "This work combines the central ideas from two different areas, crowd simulation and social network analysis, to tackle some existing problems in both areas from a new angle. We present a novel spatio-temporal social crowd simulation framework, Social Flocks, to revisit three essential research problems, (a) generation of social networks, (b) community detection in social networks, (c) modeling collective social behaviors in crowd simulation. Our framework produces social networks that satisfy the properties of high clustering coefficient, low average path length, and power-law degree distribution. It can also be exploited as a novel dynamic model for community detection. Finally our framework can be used to produce real-life collective social behaviors over crowds, including community-guided flocking, leader following, and spatio-social information propagation. Social Flocks can serve as visualization of simulated crowds for domain experts to explore the dynamic effects of the spatial, temporal, and social factors on social networks. In addition, it provides an experimental platform of collective social behaviors for social gaming and movie animations. Social Flocks demo is at http://mslab.csie.ntu.edu.tw/socialflocks/ .", "title": "" }, { "docid": "622c01d51e93e36d5a0d813323e131a3", "text": "Document or passage retrieval is typically used as the first step in current question answering systems. The accuracy of the answer that is extracted from the passages and the efficiency of the question answering process will depend to some extent on the quality of this initial ranking. We show how language model approaches can be used to improve answer passage ranking. In particular, we show how a variety of prior language models trained on correct answer text allow us to incorporate into the retrieval step information that is often used in answer extraction, for example, the presence of tagged entities. 
We demonstrate the effectiveness of these models on the TREC9 QA Corpus.", "title": "" }, { "docid": "7a337f2a2fcf6c5e0990aec419e63208", "text": "Asynchronous event-based sensors present new challenges in basic robot vision problems like feature tracking. The few existing approaches rely on grouping events into models and computing optical flow after assigning future events to those models. Such a hard commitment in data association attenuates the optical flow quality and causes shorter flow tracks. In this paper, we introduce a novel soft data association modeled with probabilities. The association probabilities are computed in an intertwined EM scheme with the optical flow computation that maximizes the expectation (marginalization) over all associations. In addition, to enable longer tracks we compute the affine deformation with respect to the initial point and use the resulting residual as a measure of persistence. The computed optical flow enables a varying temporal integration different for every feature and sized inversely proportional to the length of the flow. We show results in egomotion and very fast vehicle sequences and we show the superiority over standard frame-based cameras.", "title": "" }, { "docid": "d2d8f1079b5bab3f37ec74a9bf3ac018", "text": "This paper is focused on the design of generalized composite right/left handed (CRLH) transmission lines in a fully planar configuration, that is, without the use of surface-mount components. These artificial lines exhibit multiple, alternating backward and forward-transmission bands, and are therefore useful for the synthesis of multi-band microwave components. Specifically, a quad-band power splitter, a quad-band branch line hybrid coupler and a dual-bandpass filter, all of them based on fourth-order CRLH lines (i.e., lines exhibiting 2 left-handed and 2 right-handed bands alternating), are presented in this paper. The accurate circuit models, including parasitics, of the structures under consideration (based on electrically small planar resonators), as well as the detailed procedure for the synthesis of these lines using such circuit models, are given. It will be shown that satisfactory results in terms of performance and size can be obtained through the proposed approach, fully compatible with planar technology.", "title": "" }, { "docid": "06755f8680ee8b43e0b3d512b4435de4", "text": "Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have been recently proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, this increases the complexity of the process and leads to limited abstraction and performance. As such, segmented SAE (S-SAE) is proposed by confronting the original features into smaller data segments, which are separately processed by different smaller SAEs. This has resulted in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.", "title": "" }, { "docid": "d7e7cdc9ac55d5af199395becfe02d73", "text": "Text recognition in images is a research area which attempts to develop a computer system with the ability to automatically read the text from images. 
These days there is a huge demand in storing the information available in paper documents format in to a computer storage disk and then later reusing this information by searching process. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. But to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved in this the font characteristics of the characters in paper documents and quality of images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus there is a need of character recognition mechanisms to perform Document Image Analysis (DIA) which transforms documents in paper format to electronic format. In this paper we have discuss method for text recognition from images. The objective of this paper is to recognition of text from image for better understanding of the reader by using particular sequence of different processing module.", "title": "" }, { "docid": "0fb844031a50d8e631cef050c6c7f05a", "text": "Due to the availability and increased usage of multimedia applications, features such as compression and security has gained more importance. Here, we propose a key generation algorithm and a double image encryption scheme with combined compression and encryption. The keys for encryption are generated using a novel modified convolution and chaotic mapping technique. First, the four least significant bits of the two images were truncated and then combined after permutation using the proposed logistic mapping. Also, cellular automata based diffusion is performed on the resultant image to strengthen the security further. Here, both confusion and diffusion seem to be integrated thus improvising the encryption scheme. The performance results and the test of randomness of the key and the algorithm were found to be successful. Since two images are compressed and encrypted simultaneously, it is useful in real - time scenarios.", "title": "" }, { "docid": "59f583df7d2aaad02a4e351bc7479cdf", "text": "Language is systematically structured at all levels of description, arguably setting it apart from all other instances of communication in nature. In this article, I survey work over the last 20 years that emphasises the contributions of individual learning, cultural transmission, and biological evolution to explaining the structural design features of language. These 3 complex adaptive systems exist in a network of interactions: individual learning biases shape the dynamics of cultural evolution; universal features of linguistic structure arise from this cultural process and form the ultimate linguistic phenotype; the nature of this phenotype affects the fitness landscape for the biological evolution of the language faculty; and in turn this determines individuals' learning bias. Using a combination of computational simulation, laboratory experiments, and comparison with real-world cases of language emergence, I show that linguistic structure emerges as a natural outcome of cultural evolution once certain minimal biological requirements are in place.", "title": "" }, { "docid": "1e50abe2821e6dad2e8ede1a163e8cc8", "text": "In vitro dissolution/release tests are an important tool in the drug product development phase as well as in its quality control and the regulatory approval process. 
Mucosal drug delivery systems are aimed to provide both local and systemic drug action via mucosal surfaces of the body and exhibit significant differences in formulation design, as well as in their physicochemical and release characteristics. Therefore it is not possible to devise a single test system which would be suitable for release testing of such complex dosage forms. This article is aimed to provide a comprehensive review of both compendial and noncompendial methods used for in vitro dissolution/release testing of novel mucosal drug delivery systems aimed for ocular, nasal, oromucosal, vaginal and rectal administration.", "title": "" }, { "docid": "d9b7636d566d82f9714272f1c9f83f2f", "text": "OBJECTIVE\nFew studies have investigated the association between religion and suicide either in terms of Durkheim's social integration hypothesis or the hypothesis of the regulative benefits of religion. The relationship between religion and suicide attempts has received even less attention.\n\n\nMETHOD\nDepressed inpatients (N=371) who reported belonging to one specific religion or described themselves as having no religious affiliation were compared in terms of their demographic and clinical characteristics.\n\n\nRESULTS\nReligiously unaffiliated subjects had significantly more lifetime suicide attempts and more first-degree relatives who committed suicide than subjects who endorsed a religious affiliation. Unaffiliated subjects were younger, less often married, less often had children, and had less contact with family members. Furthermore, subjects with no religious affiliation perceived fewer reasons for living, particularly fewer moral objections to suicide. In terms of clinical characteristics, religiously unaffiliated subjects had more lifetime impulsivity, aggression, and past substance use disorder. No differences in the level of subjective and objective depression, hopelessness, or stressful life events were found.\n\n\nCONCLUSIONS\nReligious affiliation is associated with less suicidal behavior in depressed inpatients. After other factors were controlled, it was found that greater moral objections to suicide and lower aggression level in religiously affiliated subjects may function as protective factors against suicide attempts. Further study about the influence of religious affiliation on aggressive behavior and how moral objections can reduce the probability of acting on suicidal thoughts may offer new therapeutic strategies in suicide prevention.", "title": "" }, { "docid": "f365988f4b131e39a59e00a39d428bc3", "text": "The ethanol and water extracts of Sansevieria trifasciata leaves showed dose-dependent and significant (P < 0.05) increase in pain threshold in tail-immersion test. Moreover, both the extracts (100 - 200 mg/kg) exhibited a dose-dependent inhibition of writhing and also showed a significant (P < 0.001) inhibition of both phases of the formalin pain test. The ethanol extract (200 mg/kg) significantly (P < 0.01) reversed yeast-induced fever. Preliminary phytochemical screening of the extracts showed the presence of alkaloids, flavonoids, saponins, glycosides, terpenoids, tannins, proteins and carbohydrates.", "title": "" } ]
scidocsrr
722355ec30518b173dad2972263c964d
Emerging Security Threats and Countermeasures in IoT
[ { "docid": "ccebd8a3c44632d760c9d9d4a4adfe01", "text": "Status of This Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract The Domain Name System Security Extensions (DNSSEC) add data origin authentication and data integrity to the Domain Name System. This document introduces these extensions and describes their capabilities and limitations. This document also discusses the services that the DNS security extensions do and do not provide. Last, this document describes the interrelationships between the documents that collectively describe DNSSEC.", "title": "" } ]
[ { "docid": "e69c815be9ef71b84c9c78bb458e8d72", "text": "In a world where traditional notions of privacy are increasingly challenged by the myriad companies that collect and analyze our data, it is important that decision-making entities are held accountable for unfair treatments arising from irresponsible data usage. Unfortunately, a lack of appropriate methodologies and tools means that even identifying unfair or discriminatory effects can be a challenge in practice. We introduce the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities. We instantiate the UA framework in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment. It enables scalable and statistically rigorous investigation of associations between application outcomes (such as prices or premiums) and sensitive user attributes (such as race or gender). Furthermore, FairTest provides debugging capabilities that let programmers rule out potential confounders for observed unfair effects. We report on use of FairTest to investigate and in some cases address disparate impact, offensive labeling, and uneven rates of algorithmic error in four data-driven applications. As examples, our results reveal subtle biases against older populations in the distribution of error in a predictive health application and offensive racial labeling in an image tagger.", "title": "" }, { "docid": "2248dae965b78e9e83d4389f5aa370d2", "text": "Methods that learn representations of graph nodes play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss – an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification. Unlike most approaches that represent nodes as (point) vectors in a lower-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation. Furthermore, in contrast to previous approaches we propose a completely unsupervised method that is also able to handle inductive learning scenarios and is applicable to different types of graphs (plain, attributed, directed, undirected). By leveraging both the topological network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training. To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering between the nodes imposed by the network structure. Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks.", "title": "" }, { "docid": "cbac071c932c73813630fd7384e4f98c", "text": "In this paper we propose a method that, given a query submitte d to a search engine, suggests a list of related queries. 
The related queries are based in previously issued queries, and can be issued by the user to the search engine to tune or redirect the search process. The method proposed is based on a query clustering process in which groups of semantically similar queries are identified. The clustering process uses the content of historical preferences of users registered in the query log of the search engine. The method not only discovers the related queries, but also ranks them according to a relevance criterion. Finally, we show with experiments over the query log of a search engine the effectiveness of the method.", "title": "" }, { "docid": "bbeebb29c7220009c8d138dc46e8a6dd", "text": "Let’s begin with a problem that many of you have seen before. It’s a common question in technical interviews. You’re given as input an array A of length n, with the promise that it has a majority element — a value that is repeated in strictly more than n/2 of the array’s entries. Your task is to find the majority element. In algorithm design, the usual “holy grail” is a linear-time algorithm. For this problem, your post-CS161 toolbox already contains a subroutine that gives a linear-time solution — just compute the median of A. (Note: it must be the majority element.) So let’s be more ambitious: can we compute the majority element with a single left-to-right pass through the array? If you haven’t seen it before, here’s the solution:", "title": "" }, { "docid": "fe57e844c12f7392bdd29a2e2396fc50", "text": "With the help of modern information communication technology, mobile banking as a new type of financial services carrier can provide efficient and effective financial services for clients. Compare with Internet banking, mobile banking is more secure and user friendly. The implementation of wireless communication technologies may result in more complicated information security problems. Based on the principles of information security, this paper presented issues of information security of mobile banking and discussed the security protection measures such as: encryption technology, identity authentication, digital signature, WPKI technology.", "title": "" }, { "docid": "6c8d6b171284881dad7efc76ac800a54", "text": "Lakes and reservoirs are important water resources. Reservoirs are vital water resources to support all living organism. They provide clean water and habitat for a complex variety of aquatic life. Water from such resources can be used for diverse purposes such as, industry usage, agriculture and supplies for drinking water and recreation and aesthetic value. Apart from this, reservoirs also helpful to get hydro-electric power, flood control and scenic beauty. Water collected in such resources can be utilized in drought situation also. Unfortunately, these important resources are being polluted and the quality of water is being influenced by numerous factors. The quality of water is deteriorated by anthropogenic activities, indiscriminate disposal of sewage, human activities and also industry waste. Water quality monitoring of reservoirs is essential in exploitation of aquatic resources conservation. The quality of water helps in regulating the biotic diversity and biomass, energy and rate of", "title": "" }, { "docid": "a7b0f0455482765efd3801c3ae9f85b7", "text": "The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to create models with semantic errors. 
Such errors are especially serious, because errors in the early phases of systems development are among the most costly and hardest to correct. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. Accordingly, this paper proposes a mapping from BPMN to a formal language, namely Petri nets, for which efficient analysis techniques are available. The proposed mapping has been implemented as a tool that, in conjunction with existing Petri net-based tools, enables the static analysis of BPMN models. The formalisation also led to the identification of deficiencies in the BPMN standard specification.", "title": "" }, { "docid": "93ec9adabca7fac208a68d277040c254", "text": "UNLABELLED\nWe developed cyNeo4j, a Cytoscape App to link Cytoscape and Neo4j databases to utilize the performance and storage capacities Neo4j offers. We implemented a Neo4j NetworkAnalyzer, ForceAtlas2 layout and Cypher component to demonstrate the possibilities a distributed setup of Cytoscape and Neo4j have.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe app is available from the Cytoscape App Store at http://apps.cytoscape.org/apps/cyneo4j, the Neo4j plugins at www.github.com/gsummer/cyneo4j-parent and the community and commercial editions of Neo4j can be found at http://www.neo4j.com.\n\n\nCONTACT\[email protected].", "title": "" }, { "docid": "076ab7223de2d7eee7b3875bc2bb82e4", "text": "Firewalls are network devices which enforce an organization’s security policy. Since their development, various methods have been used to implement firewalls. These methods filter network traffic at one or more of the seven layers of the ISO network model, most commonly at the application, transport, and network, and data-link levels. In addition, researchers have developed some newer methods, such as protocol normalization and distributed firewalls, which have not yet been widely adopted. Firewalls involve more than the technology to implement them. Specifying a set of filtering rules, known as a policy, is typically complicated and error-prone. High-level languages have been developed to simplify the task of correctly defining a firewall’s policy. Once a policy has been specified, the firewall needs to be tested to determine if it actually implements the policy correctly. Little work exists in the area of firewall theory; however, this article summarizes what exists. Because some data must be able to pass in and out of a firewall, in order for the protected network to be useful, not all attacks can be stopped by firewalls. Some emerging technologies, such as Virtual Private Networks (VPN) and peer-to-peer networking pose new challenges for firewalls.", "title": "" }, { "docid": "ebde7eb6e61bf56f84267b14e913b74a", "text": "Contraction of want to to wanna is subject to constraints which have been related to the operation of Universal Grammar. Contraction appears to be blocked when the trace of an extracted wh-word intervenes. Evidence for knowledge of these constraints by young English-speaking children in as been taken to show the operation of Universal Grammar in early child language acquisition. The present study investigates the knowledge these constraints in adults, both English native speakers and advanced Korean learners of English. The results of three experiments, using elicited production, oral repair, and grammaticality judgements, confirmed native speaker knowledge of the constraints. 
A second process of phonological elision may also operate to produce wanna. Learners also showed some differentiation of contexts, but much less clearly than native speakers. We speculate that non-natives may be using rules of complement selection, rather than the constraints of UG, to control contraction. Introduction: wanna contraction and language learnability In English, want to can be contracted to wanna, but not invariably. As first observed by Lakoff (1970) in examples such as (1), in which the object of the infinitival complement of want has been extracted by wh-movement, contraction is possible, but not in (2), in which the subject of the infinitival complement is extracted from the position between want and to. We shall call examples like (1) \"subject extraction questions\" (SEQ) and examples like (2) \"object extraction questions\" (OEQ).", "title": "" }, { "docid": "aa2401a302c7f0b394abb11961420b50", "text": "A program is then asked the question “what was too small” as a follow-up to (1a), and the question “what was too big” as a follow-up to (1b). Levesque et. al. call a sentence such as that in (1) “Google proof” since a system that processed a large corpus cannot “learn” how to resolve such references by finding some statistical correlations in the data, as the only difference between (1a) and (1b) are antonyms that are known to co-occur in similar contexts with the same frequency. In a recent paper Trinh and Le (2018) henceforth T&L suggested that they have successfully formulated a „simple‟ machine learning method for performing commonsense reasoning, and in particular, the kind of reasoning that would be required in the process of language understanding. In doing so, T&L use the Winograd Schema (WS) challenge as a benchmark. In simple terms, T&L suggest the following method for “learning” how to successfully resolve the reference “it” in sentences such as those in (1): generate two", "title": "" }, { "docid": "df83f2aa0347bfb3131e8c53b805084b", "text": "Spoken language interfaces are being incorporated into various devices such as smart phones and TVs. However, dialogue systems may fail to respond correctly when users' request functionality is not supported by currently installed apps. This paper proposes a feature-enriched matrix factorization (MF) approach to model open domain intents, which allows a system to dynamically add unexplored domains according to users' requests. First we leverage the structured knowledge from Wikipedia and Freebase to automatically acquire domain-related semantics to enrich features of input utterances, and then MF is applied to model automatically acquired knowledge, published app textual descriptions and users' spoken requests in a joint fashion; this generates latent feature vectors for utterances and user intents without need of prior annotations. Experiments show that the proposed MF models incorporated with rich features significantly improve intent prediction, achieving about 34% of mean average precision (MAP) for both ASR and manual transcripts.", "title": "" }, { "docid": "1217a503d107142c8ce686ef0ea4d3c8", "text": "In this paper, we analyze the suitability of different IPv6 addressing strategies for nodes, gateways, and various access network deployment scenarios in the Internet of Things. A vast number of things connected to the Internet need IPv6 addresses, as the IPv4 address space was effectively consumed before the introduction of the Internet of Things. 
We highlight how the heterogeneity of nodes and network technologies, extreme constraint and miniaturization, renumbering, and multihoming, present serious challenges toward IPv6 address allocation. By considering the topologies of various types of IoT networks, their intended uses as well as the types of IPv6 addresses that need to be deployed, we draw attention to allocation solutions as well as potential pitfalls.", "title": "" }, { "docid": "76c7b343d2f03b64146a0d6ed2d60668", "text": "Three important stages within automated 3D object reconstruction via multi-image convergent photogrammetry are image pre-processing, interest point detection for feature-based matching and triangular mesh generation. This paper investigates approaches to each of these. The Wallis filter is initially examined as a candidate image pre-processor to enhance the performance of the FAST interest point operator. The FAST algorithm is then evaluated as a potential means to enhance the speed, robustness and accuracy of interest point detection for subsequent feature-based matching. Finally, the Poisson Surface Reconstruction algorithm for wireframe mesh generation of objects with potentially complex 3D surface geometry is evaluated. The outcomes of the investigation indicate that the Wallis filter, FAST interest operator and Poisson Surface Reconstruction algorithms present distinct benefits in the context of automated image-based object reconstruction. The reported investigation has advanced the development of an automatic procedure for high-accuracy point cloud generation in multi-image networks, where robust orientation and 3D point determination has enabled surface measurement and visualization to be implemented within a single software system.", "title": "" }, { "docid": "22d8bfa59bb8e25daa5905dbb9e1deea", "text": "BACKGROUND\nSubacromial impingement syndrome (SAIS) is a painful condition resulting from the entrapment of anatomical structures between the anteroinferior corner of the acromion and the greater tuberosity of the humerus.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the short-term effectiveness of high-intensity laser therapy (HILT) versus ultrasound (US) therapy in the treatment of SAIS.\n\n\nDESIGN\nThe study was designed as a randomized clinical trial.\n\n\nSETTING\nThe study was conducted in a university hospital.\n\n\nPATIENTS\nSeventy patients with SAIS were randomly assigned to a HILT group or a US therapy group.\n\n\nINTERVENTION\nStudy participants received 10 treatment sessions of HILT or US therapy over a period of 2 consecutive weeks.\n\n\nMEASUREMENTS\nOutcome measures were the Constant-Murley Scale (CMS), a visual analog scale (VAS), and the Simple Shoulder Test (SST).\n\n\nRESULTS\nFor the 70 study participants (42 women and 28 men; mean [SD] age=54.1 years [9.0]; mean [SD] VAS score at baseline=6.4 [1.7]), there were no between-group differences at baseline in VAS, CMS, and SST scores. At the end of the 2-week intervention, participants in the HILT group showed a significantly greater decrease in pain than participants in the US therapy group. Statistically significant differences in change in pain, articular movement, functionality, and muscle strength (force-generating capacity) (VAS, CMS, and SST scores) were observed after 10 treatment sessions from the baseline for participants in the HILT group compared with participants in the US therapy group. 
In particular, only the difference in change of VAS score between groups (1.65 points) surpassed the accepted minimal clinically important difference for this tool.\n\n\nLIMITATIONS\nThis study was limited by sample size, lack of a control or placebo group, and follow-up period.\n\n\nCONCLUSIONS\nParticipants diagnosed with SAIS showed greater reduction in pain and improvement in articular movement functionality and muscle strength of the affected shoulder after 10 treatment sessions of HILT than did participants receiving US therapy over a period of 2 consecutive weeks.", "title": "" }, { "docid": "417eff5fd6251c70790d69e2b8dae255", "text": "This paper is a report on the initial trial for its kind in the development of the performance index of the autonomous mobile cleaning robot. The unique characteristic features of the cleaning robot have been identified as autonomous mobility, dust collection, and operation noise. Along with the identification of the performance indices the standardized performance-evaluation methods including the corresponding performance evaluation platform for each indices have been developed as well. The validity of the proposed performance evaluation methods has been demonstrated by applying the proposed evaluation methods on two commercial cleaning robots available in market. The proposed performance evaluation methods can be applied to general-purpose autonomous service robots which will be introduced in the consumer market in near future.", "title": "" }, { "docid": "31f838fb0c7db7e8b58fb1788d5554c8", "text": "Today’s smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and downloads that strain wireless data networks. Collaborative computing is only achieved using ad hoc approaches. Coordinating smartphone data and computing would allow mobile applications to utilize the capabilities of an entire smartphone cloud while avoiding global network bottlenecks. In many cases, processing mobile data in-place and transferring it directly between smartphones would be more efficient and less susceptible to network limitations than offloading data and processing to remote servers. We have developed Hyrax, a platform derived from Hadoop that supports cloud computing on Android smartphones. Hyrax allows client applications to conveniently utilize data and execute computing jobs on networks of smartphones and heterogeneous networks of phones and servers. By scaling with the number of devices and tolerating node departure, Hyrax allows applications to use distributed resources abstractly, oblivious to the physical nature of the cloud. The design and implementation of Hyrax is described, including experiences in porting Hadoop to the Android platform and the design of mobilespecific customizations. The scalability of Hyrax is evaluated experimentally and compared to that of Hadoop. Although the performance of Hyrax is poor for CPU-bound tasks, it is shown to tolerate node-departure and offer reasonable performance in data sharing. 
A distributed multimedia search and sharing application is implemented to qualitatively evaluate Hyrax from an application development perspective.", "title": "" }, { "docid": "cf94d312bb426e64e364dfa33b09efeb", "text": "The attractiveness of a face is a highly salient social signal, influencing mate choice and other social judgements. In this study, we used event-related functional magnetic resonance imaging (fMRI) to investigate brain regions that respond to attractive faces which manifested either a neutral or mildly happy face expression. Attractive faces produced activation of medial orbitofrontal cortex (OFC), a region involved in representing stimulus-reward value. Responses in this region were further enhanced by a smiling facial expression, suggesting that the reward value of an attractive face as indexed by medial OFC activity is modulated by a perceiver directed smile.", "title": "" }, { "docid": "a4f960905077291bd6da9359fd803a9c", "text": "In this paper, we propose a new framework named Data Augmentation for Domain-Invariant Learning (DADIL). In the field of manufacturing, labeling sensor data as normal or abnormal is helpful for improving productivity and avoiding problems. In practice, however, the status of equipment may change due to changes in maintenance and settings (referred to as a “domain change”), which makes it difficult to collect sufficient homogeneous data. Therefore, it is important to develop a discriminative model that can use a limited number of data samples. Moreover, real data might contain noise that could have a negative impact. We focus on the following aspect: The difficulties of a domain change are also due to the limited data. Although the number of data samples in each domain is low, we make use of data augmentation which is a promising way to mitigate the influence of noise and enhance the performance of discriminative models. In our data augmentation method, we generate “pseudo data” by combining the data for each label regardless of the domain and extract a domain-invariant representation for classification. We experimentally show that this representation is effective for obtaining the label precisely using real datasets.", "title": "" }, { "docid": "e0e78e12dd56d950a6f7320e1d8bc33a", "text": "The partial pressure of carbon dioxide (pCO2), concentration of total dissolved inorganic carbon, and total alkalinity were measured at both high tide and low tide in the surface water of three Georgia estuaries from September 2002 to May 2004. Of the three estuaries, Sapelo and Doboy Sounds are marine-dominated estuaries, while Altamaha Sound is a river-dominated estuary. During all sampling months, the three estuaries were supersaturated in CO2 with respect to the atmosphere (39.5–342.5 Pa, or 390–3380 μatm) because of CO2 inputs from within the estuarine zone (mainly intertidal marshes) and the river. Overall, pCO2 in the river-dominated estuary is much higher than that in the marine-dominated estuaries. The calculated annual air–water CO2 flux in Altamaha Sound (69.3 mmol m⁻² d⁻¹) is 2.4 times those of Sapelo and Doboy Sounds (28.7–29.4 mmol m⁻² d⁻¹). The higher CO2 degassing in the river-dominated estuary is fueled largely by CO2 loading from the river. Because of the substantial differences between river- and marine-dominated estuaries, current estimates of air–water CO2 fluxes in global estuaries (which are based almost entirely on river-dominated estuaries) could be overestimated. 
Recent studies have shown that estuaries are significant sources of carbon dioxide (CO2) to the atmosphere, with partial pressure of carbon dioxide (pCO2) varying from about 40 to 960 Pa (or ~400–9500 μatm) (Frankignoulle et al. 1998; Borges 2005; Borges et al. 2005). Even though the surface area of global estuaries is only about a 20th that of continental shelves (Woodwell et al. 1973), it is argued that CO2 degassing by estuaries (Borges 2005; Borges et al. 2005) could nearly counterbalance the continental shelf CO2 sink (Tsunogai et al. 1999; Borges et al. 2005; Cai et al. 2006), which is about 30–70% of the atmospheric CO2 sink of the open ocean (1.2–1.6 Pg C yr⁻¹) (Takahashi et al. in press). However, most estuarine CO2 studies have focused on estuaries that receive substantial freshwater discharge; much less attention has been given to estuaries that receive little freshwater discharge besides precipitation and groundwater (Frankignoulle et al. 1998; Borges 2005; Borges et al. 2005). Definitions of estuaries vary widely. Most definitions restrict an estuary to the mouth of a river or a body of seawater reaching inland, while others argue that an estuary extends to the continental shelf (Perillo 1995). One of the most frequently cited definitions of an estuary is that of Cameron and Pritchard (1963): “a semi-enclosed coastal body of water, which has a free connection with the open sea, and within which seawater is measurably diluted with freshwater derived from land drainage.” According to this definition, all river mouths and coastal brackish lagoons qualify as estuaries, although the former have been the focus for most estuarine studies. Following Elliott and McLusky (2002), we have adopted the most widely held point of view that considers both river mouths and coastal brackish lagoons to be estuaries. The inclusion of coastal brackish lagoons as estuaries is also consistent with the fact that the most cited surface area of global estuaries was estimated “without differentiating mouths of rivers and coastal brackish lagoons” (Woodwell et al. 1973). For this study, we refer to mouths of rivers that receive significant amounts of upland river inflow as river-dominated estuaries and coastal brackish lagoons that receive little freshwater besides precipitation and groundwater as marine-dominated estuaries. The salt marsh–surrounded estuaries of the southeastern United States cover approximately 3 × 10⁹ m². River- and marine-dominated estuaries are typical features of this region, with marine-dominated estuaries covering approximately 50% of the total estuarine area in this region (National Ocean Service 1985). In this paper we present a comparative study of CO2 in river- and marine-dominated estuaries around Sapelo Island, Georgia (Fig. 1). The proximity of these two types of estuaries and their similarities in physical conditions provide a unique opportunity to examine the CO2 differences between these two types of estuaries. We also discuss this study’s global implications on air–water CO2 fluxes of estuaries.", "title": "" } ]
scidocsrr
2b2d2bc749d9a78c6ee815fcccab5239
Visualizing timelines: evolutionary summarization via iterative reinforcement between text and image streams
[ { "docid": "6215c6ca6826001291314405ea936dda", "text": "This paper describes a text mining tool that performs two tasks, namely document clustering and text summarization. These tasks have, of course, their corresponding counterpart in “conventional” data mining. However, the textual, unstructured nature of documents makes these two text mining tasks considerably more difficult than their data mining counterparts. In our system document clustering is performed by using the Autoclass data mining algorithm. Our text summarization algorithm is based on computing the value of a TF-ISF (term frequency – inverse sentence frequency) measure for each word, which is an adaptation of the conventional TF-IDF (term frequency – inverse document frequency) measure of information retrieval. Sentences with high values of TF-ISF are selected to produce a summary of the source text. The system has been evaluated on real-world documents, and the results are satisfactory.", "title": "" }, { "docid": "78976c627fb72db5393837169060a92a", "text": "Although many variants of language models have been proposed for information retrieval, there are two related retrieval heuristics remaining \"external\" to the language modeling approach: (1) proximity heuristic which rewards a document where the matched query terms occur close to each other; (2) passage retrieval which scores a document mainly based on the best matching passage. Existing studies have only attempted to use a standard language model as a \"black box\" to implement these heuristics, making it hard to optimize the combination parameters.\n In this paper, we propose a novel positional language model (PLM) which implements both heuristics in a unified language model. The key idea is to define a language model for each position of a document, and score a document based on the scores of its PLMs. The PLM is estimated based on propagated counts of words within a document through a proximity-based density function, which both captures proximity heuristics and achieves an effect of \"soft\" passage retrieval. We propose and study several representative density functions and several different PLM-based document ranking strategies. Experiment results on standard TREC test collections show that the PLM is effective for passage retrieval and performs better than a state-of-the-art proximity-based retrieval model.", "title": "" }, { "docid": "f2af56bef7ae8c12910d125a3b729e6a", "text": "We investigate an important and challenging problem in summary generation, i.e., Evolutionary Trans-Temporal Summarization (ETTS), which generates news timelines from massive data on the Internet. ETTS greatly facilitates fast news browsing and knowledge comprehension, and hence is a necessity. Given the collection of time-stamped web documents related to the evolving news, ETTS aims to return news evolution along the timeline, consisting of individual but correlated summaries on each date. Existing summarization algorithms fail to utilize trans-temporal characteristics among these component summaries. We propose to model trans-temporal correlations among component summaries for timelines, using inter-date and intra-date sentence dependencies, and present a novel combination. We develop experimental systems to compare 5 rival algorithms on 6 instinctively different datasets which amount to 10251 documents. 
Evaluation results in ROUGE metrics indicate the effectiveness of the proposed approach based on trans-temporal information.", "title": "" }, { "docid": "f0c1bfed4083e6f6e5748fdbe76bd42a", "text": "Multidocument extractive summarization relies on the concept of sentence centrality to identify the most important sentences in a document. Centrality is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We are now considering an approach for computing sentence importance based on the concept of eigenvector centrality (prestige) that we call LexPageRank. In this model, a sentence connectivity matrix is constructed based on cosine similarity. If the cosine similarity between two sentences exceeds a particular predefined threshold, a corresponding edge is added to the connectivity matrix. We provide an evaluation of our method on DUC 2004 data. The results show that our approach outperforms centroid-based summarization and is quite successful compared to other summarization systems.", "title": "" }, { "docid": "2c6d8e232c2d609c5ff1577ae39a9bad", "text": "In this paper, we present a framework and a system that extracts events relevant to a query from a collection C of documents, and places such events along a timeline. Each event is represented by a sentence extracted from C, based on the assumption that \"important\" events are widely cited in many documents for a period of time within which these events are of interest. In our experiments, we used queries that are event types (\"earthquake\") and person names (e.g. \"George Bush\"). Evaluation was performed using G8 leader names as queries: comparison made by human evaluators between manually and system generated timelines showed that although manually generated timelines are on average more preferable, system generated timelines are sometimes judged to be better than manually constructed ones.", "title": "" } ]
[ { "docid": "39430478909e5818b242e0b28db419f0", "text": "BACKGROUND\nA modified version of the Berg Balance Scale (mBBS) was developed for individuals with intellectual and visual disabilities (IVD). However, the concurrent and predictive validity has not yet been determined.\n\n\nAIM\nThe purpose of the current study was to evaluate the concurrent and predictive validity of the mBBS for individuals with IVD.\n\n\nMETHOD\nFifty-four individuals with IVD and Gross Motor Functioning Classification System (GMFCS) Levels I and II participated in this study. The mBBS, the Centre of Gravity (COG), the Comfortable Walking Speed (CWS), and the Barthel Index (BI) were assessed during one session in order to determine the concurrent validity. The percentage of explained variance was determined by analyzing the squared multiple correlation between the mBBS and the BI, COG, CWS, GMFCS, and age, gender, level of intellectual disability, presence of epilepsy, level of visual impairment, and presence of hearing impairment. Furthermore, an overview of the degree of dependence between the mBBS, BI, CWS, and COG was obtained by graphic modelling. Predictive validity of mBBS was determined with respect to the number of falling incidents during 26 weeks and evaluated with Zero-inflated regression models using the explanatory variables of mBBS, BI, COG, CWS, and GMFCS.\n\n\nRESULTS\nThe results demonstrated that two significant explanatory variables, the GMFCS Level and the BI, and one non-significant variable, the CWS, explained approximately 60% of the mBBS variance. Graphical modelling revealed that BI was the most important explanatory variable for mBBS moreso than COG and CWS. Zero-inflated regression on the frequency of falling incidents demonstrated that the mBBS was not predictive, however, COG and CWS were.\n\n\nCONCLUSIONS\nThe results indicated that the concurrent validity as well as the predictive validity of mBBS were low for persons with IVD.", "title": "" }, { "docid": "32ae0b0c5b3ca3a7ede687872d631d29", "text": "Background—The benefit of catheter-based reperfusion for acute myocardial infarction (MI) is limited by a 5% to 15% incidence of in-hospital major ischemic events, usually caused by infarct artery reocclusion, and a 20% to 40% need for repeat percutaneous or surgical revascularization. Platelets play a key role in the process of early infarct artery reocclusion, but inhibition of aggregation via the glycoprotein IIb/IIIa receptor has not been prospectively evaluated in the setting of acute MI. Methods and Results —Patients with acute MI of <12 hours’ duration were randomized, on a double-blind basis, to placebo or abciximab if they were deemed candidates for primary PTCA. The primary efficacy end point was death, reinfarction, or any (urgent or elective) target vessel revascularization (TVR) at 6 months by intention-to-treat (ITT) analysis. Other key prespecified end points were early (7 and 30 days) death, reinfarction, or urgent TVR. The baseline clinical and angiographic variables of the 483 (242 placebo and 241 abciximab) patients were balanced. There was no difference in the incidence of the primary 6-month end point (ITT analysis) in the 2 groups (28.1% and 28.2%, P=0.97, of the placebo and abciximab patients, respectively). However, abciximab significantly reduced the incidence of death, reinfarction, or urgent TVR at all time points assessed (9.9% versus 3.3%, P=0.003, at 7 days; 11.2% versus 5.8%, P=0.03, at 30 days; and 17.8% versus 11.6%, P=0.05, at 6 months). 
Analysis by actual treatment with PTCA and study drug demonstrated a considerable effect of abciximab with respect to death or reinfarction: 4.7% versus 1.4%, P=0.047, at 7 days; 5.8% versus 3.2%, P=0.20, at 30 days; and 12.0% versus 6.9%, P=0.07, at 6 months. The need for unplanned, “bail-out” stenting was reduced by 42% in the abciximab group (20.4% versus 11.9%, P=0.008). Major bleeding occurred significantly more frequently in the abciximab group (16.6% versus 9.5%, P=0.02), mostly at the arterial access site. There was no intracranial hemorrhage in either group. Conclusions—Aggressive platelet inhibition with abciximab during primary PTCA for acute MI yielded a substantial reduction in the acute (30-day) phase for death, reinfarction, and urgent target vessel revascularization. However, the bleeding rates were excessive, and the 6-month primary end point, which included elective revascularization, was not favorably affected.(Circulation. 1998;98:734-741.)", "title": "" }, { "docid": "b0fcdc52d4a1bc1f8e6c4b8940d7a17f", "text": "Convolutional neural networks (CNNs) are deployed in a wide range of image recognition, scene segmentation and object detection applications. Achieving state of the art accuracy in CNNs often results in large models and complex topologies that require significant compute resources to complete in a timely manner. Binarised neural networks (BNNs) have been proposed as an optimised variant of CNNs, which constrain the weights and activations to +1 or −1 and thus offer compact models and lower computational complexity per operation. This paper presents a high performance BNN accelerator on the Intel®Xeon+FPGA™ platform. The proposed accelerator is designed to take advantage of the Xeon+FPGA system in a way that a specialised FPGA architecture can be targeted for the most compute intensive parts of the BNN whilst other parts of the topology can be handled by the Xeon™ CPU. The implementation is evaluated by comparing the raw compute performance and energy efficiency for key layers in standard CNN topologies against an Nvidia Titan X Pascal GPU and other published FPGA BNN accelerators. The results show that our single-package integrated Arria™ 10 FPGA accelerator coupled with a high-end Xeon CPU can offer comparable performance and better energy efficiency than a high-end discrete Titan X GPU card. In addition, our solution delivers the best performance compared to previous BNN FPGA implementations.", "title": "" }, { "docid": "20f379e3b4f62c4d319433bb76f3a490", "text": "We propose probabilistic generative models, called parametric mixture models (PMMs), for multiclass, multi-labeled text categorization problem. Conventionally, the binary classification approach has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide Web pages.", "title": "" }, { "docid": "770f31265aa7107a0890275a54089bc1", "text": "The analytic hierarchy process (AHP) provides a structure on decision-making processes where there are a limited numbers of choices but each has a number of attributes. This paper explores the use of AHP for deciding on car purchase. 
In the context of shopping, it is important to include elements that provide attributes that make consumer decision-making easier, comfortable and therefore, lead to a car purchase. As the car market becomes more competitive, there is a greater demand for innovation that provides better customer service and strategic competition in the business management. This paper presents a new methodological extension of the AHP by focusing on two issues. One combines pairwise comparison with a spreadsheet method using a 5-point rating scale. The other applies the group weight to a reciprocal consistency ratio. Three newly formed car models of midsize are used to show how the method allows choice to be prioritized and analyzed statistically.", "title": "" }, { "docid": "9902a306ff4c633f30f6d9e56aa8335c", "text": "The bank director was pretty upset noticing Joe, the system administrator, spending his spare time playing Mastermind, an old useless game of the 70ies. He had fought the instinct of telling him how to better spend his life, just limiting to look at him in disgust long enough to be certain to be noticed. No wonder when the next day the director fell on his chair astonished while reading, on the newspaper, about a huge digital fraud on the ATMs of his bank, with millions of Euros stolen by a team of hackers all around the world. The article mentioned how the hackers had ‘played with the bank computers just like playing Mastermind’, being able to disclose thousands of user PINs during the one-hour lunch break. That precise moment, a second before falling senseless, he understood the subtle smile on Joe’s face the day before, while training at his preferred game, Mastermind.", "title": "" }, { "docid": "7cef2fac422d9fc3c3ffbc130831b522", "text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations. This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.", "title": "" }, { "docid": "8a5e4a6f418975f352a6b9e3d8958d50", "text": "BACKGROUND\nDysphagia is associated with poor outcome in stroke patients. Studies investigating the association of dysphagia and early dysphagia screening (EDS) with outcomes in patients with acute ischemic stroke (AIS) are rare. The aims of our study are to investigate the association of dysphagia and EDS within 24 h with stroke-related pneumonia and outcomes.\n\n\nMETHODS\nOver a 4.5-year period (starting November 2007), all consecutive AIS patients from 15 hospitals in Schleswig-Holstein, Germany, were prospectively evaluated. 
The primary outcomes were stroke-related pneumonia during hospitalization, mortality, and disability measured on the modified Rankin Scale ≥2-5, in which 2 indicates an independence/slight disability to 5 severe disability.\n\n\nRESULTS\nOf 12,276 patients (mean age 73 ± 13; 49% women), 9,164 patients (74%) underwent dysphagia screening; of these patients, 55, 39, 4.7, and 1.5% of patients had been screened for dysphagia within 3, 3 to <24, 24 to ≤72, and >72 h following admission. Patients who underwent dysphagia screening were likely to be older, more affected on the National Institutes of Health Stroke Scale score, and to have higher rates of neurological symptoms and risk factors than patients who were not screened. A total of 3,083 patients (25.1%; 95% CI 24.4-25.8) had dysphagia. The frequency of dysphagia was higher in patients who had undergone dysphagia screening than in those who had not (30 vs. 11.1%; p < 0.001). During hospitalization (mean 9 days), 1,271 patients (10.2%; 95% CI 9.7-10.8) suffered from stroke-related pneumonia. Patients with dysphagia had a higher rate of pneumonia than those without dysphagia (29.7 vs. 3.7%; p < 0.001). Logistic regression revealed that dysphagia was associated with increased risk of stroke-related pneumonia (OR 3.4; 95% CI 2.8-4.2; p < 0.001), case fatality during hospitalization (OR 2.8; 95% CI 2.1-3.7; p < 0.001) and disability at discharge (OR 2.0; 95% CI 1.6-2.3; p < 0.001). EDS within 24 h of admission appeared to be associated with decreased risk of stroke-related pneumonia (OR 0.68; 95% CI 0.52-0.89; p = 0.006) and disability at discharge (OR 0.60; 95% CI 0.46-0.77; p < 0.001). Furthermore, dysphagia was independently correlated with an increase in mortality (OR 3.2; 95% CI 2.4-4.2; p < 0.001) and disability (OR 2.3; 95% CI 1.8-3.0; p < 0.001) at 3 months after stroke. The rate of 3-month disability was lower in patients who had received EDS (52 vs. 40.7%; p = 0.003), albeit an association in the logistic regression was not found (OR 0.78; 95% CI 0.51-1.2; p = 0.2).\n\n\nCONCLUSIONS\nDysphagia exposes stroke patients to a higher risk of pneumonia, disability, and death, whereas an EDS seems to be associated with reduced risk of stroke-related pneumonia and disability.", "title": "" }, { "docid": "0c4a9ee404cec4176e9d0f41c6d73b15", "text": "A novel envelope detector structure is proposed in this paper that overcomes the traditional trade-off required in these circuits, improving both the tracking and keeping of the signal. The method relies on holding the signal by two capacitors, discharging one when the other is in hold mode and employing the held signals to form the output. Simulation results show a saving greater than 60% of the capacitor area for the same ripple (0.3%) and a release time constant (0.4¿s) much smaller than that obtained by the conventional circuits.", "title": "" }, { "docid": "02605f4044a69b70673121985f1bd913", "text": "A novel class of low-cost, small-footprint and high-gain antenna arrays is presented for W-band applications. A 4 × 4 antenna array is proposed and demonstrated using substrate-integrated waveguide (SIW) technology for the design of its feed network and longitudinal slots in the SIW top metallic surface to drive the array antenna elements. Dielectric cubes of low-permittivity material are placed on top of each 1 × 4 antenna array to increase the gain of the circular patch antenna elements. 
This new design is compared to a second 4 × 4 antenna array which, instead of dielectric cubes, uses vertically stacked Yagi-like parasitic director elements to increase the gain. Measured impedance bandwidths of the two 4 × 4 antenna arrays are about 7.5 GHz (94.2-101.8 GHz) at 18 ± 1 dB gain level, with radiation patterns and gains of the two arrays remaining nearly constant over this bandwidth. While the fabrication effort of the new array involving dielectric cubes is significantly reduced, its measured radiation efficiency of 81 percent is slightly lower compared to 90 percent of the Yagi-like design.", "title": "" }, { "docid": "4494d5b42c8daf6a45608159a748fd7d", "text": "A number of recent papers have provided evidence that practical design questions about neural networks may be tackled theoretically by studying the behavior of random networks. However, until now the tools available for analyzing random neural networks have been relatively ad hoc. In this work, we show that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics. We argue that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model. For random linear networks and random rectified linear networks we show that the corresponding lattice models in the wide network limit may be systematically approximated by a Gaussian distribution with covariance between the layers of the network. In each case, the approximate distribution can be diagonalized by Fourier transformation. We show that this approximation accurately describes the results of numerical simulations of wide random neural networks. Finally, we demonstrate that in each case the large scale behavior of the random networks can be approximated by an effective field theory.", "title": "" }, { "docid": "00dbe58bcb7d4415c01a07255ab7f365", "text": "The paper deals with a time varying vehicle-to-vehicle channel measurement in the 60 GHz millimeter wave (MMW) band using a unique time-domain channel sounder built from off-the-shelf components and standard measurement devices and employing Golay complementary sequences as the excitation signal. The aim of this work is to describe the sounder architecture, primary data processing technique, achievable system parameters, and preliminary measurement results. We measured the signal propagation between two passing vehicles and characterized the signal reflected by a car driving on a highway. The proper operation of the channel sounder is verified by a reference measurement performed with an MMW vector network analyzer in a rugged stationary office environment. The goal of the paper is to show the measurement capability of the sounder and its superior features like 8 GHz measuring bandwidth enabling high time resolution or good dynamic range allowing an analysis of weak multipath components.", "title": "" }, { "docid": "283449016e04bcfff09fca91da137dca", "text": "This paper proposes a depth hole filling method for RGBD images obtained from the Microsoft Kinect sensor. First, the proposed method labels depth holes based on 8-connectivity. For each labeled depth hole, the proposed method fills depth hole using the depth distribution of neighboring pixels of the depth hole. Then, we refine the hole filling result with cross-bilateral filtering. 
In experiments, by simply using the depth distribution of neighboring pixels, the proposed method improves the acquired depth map and reduces false filling caused by incorrect depth-color fusion.", "title": "" }, { "docid": "e4f26f4ed55e51fb2a9a55fd0f04ccc0", "text": "Nowadays, the Web has revolutionized our vision as to how deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and looks forward to new aspirations such as MOOCs (Massive Open Online Courses) as a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies and practices, which lead to student success, the massive open online courses, considered as the “linux of education”, are increasingly developed by elite US institutions such MIT, Harvard and Stanford by supplying open/distance learning for large online community without paying any fees, MOOCs have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often is raised about MOOCs is that a very small proportion of learners complete the course while thousands enrol for courses. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is a Hadoop based one whose main objective is to assure Learning Analytics for MOOCs’ communities as a mean to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal any useful information to be used in designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system we developed a method to identify, with low latency, online learners more likely to drop out. Keywords—Cloud Computing; MOOCs; Hadoop; Learning", "title": "" }, { "docid": "dfa5343bbeffc89cdd86afb2e5b3d2ae", "text": "We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GAN). Firstly, we propose a new generator objective that finds it better to tackle mode collapse. And, we apply an independent Autoencoders (AE) to constrain the generator and consider its reconstructed samples as “real” samples to slow down the convergence of discriminator that enables to reduce the gradient vanishing problem and stabilize the model. Secondly, from mappings between latent and data spaces provided by AE, we further regularize AE by the relative distance between the latent and data samples to explicitly prevent the generator falling into mode collapse setting. This idea comes when we find a new way to visualize the mode collapse on MNIST dataset. To the best of our knowledge, our method is the first to propose and apply successfully the relative distance of latent and data samples for stabilizing GAN. Thirdly, our proposed model, namely Generative Adversarial Autoencoder Networks (GAAN), is stable and has suffered from neither gradient vanishing nor mode collapse issues, as empirically demonstrated on synthetic, MNIST, MNIST-1K, CelebA and CIFAR-10 datasets. Experimental results show that our method can approximate well multi-modal distribution and achieve better results than state-of-the-art methods on these benchmark datasets. Our model implementation is published here.", "title": "" }, { "docid": "a6e18aa7f66355fb8407798a37f53f45", "text": "We review some of the recent advances in level-set methods and their applications. 
In particular, we discuss how to impose boundary conditions at irregular domains and free boundaries, as well as the extension of level-set methods to adaptive Cartesian grids and parallel architectures. Illustrative applications are taken from the physical and life sciences. Fast sweeping methods are briefly discussed.", "title": "" }, { "docid": "6bd3568d195c0cd67e663d69d7ebca0c", "text": "Academic studies offer a generally positive portrait of the effect of customer relationship management (CRM) on firm performance, but practitioners question its value. The authors argue that a firm’s strategic commitments may be an overlooked organizational factor that influences the rewards for a firm’s investments in CRM. Using the context of online retailing, the authors consider the effects of two key strategic commitments of online retailers on the performance effect of CRM: their bricks-and-mortar experience and their online entry timing. They test the proposed model with a multimethod approach that uses manager ratings of firm CRM and strategic commitments and third-party customers’ ratings of satisfaction from 106 online retailers. The findings indicate that firms with moderate bricks-and-mortar experience are better able to leverage CRM for superior customer satisfaction outcomes than firms with either low or high bricks-and-mortar experience. Likewise, firms with moderate online experience are better able to leverage CRM into superior customer satisfaction outcomes than firms with either low or high online experience. These findings help resolve disparate results about the value of CRM, and they establish the importance of examining CRM within the strategic context of the firm.", "title": "" }, { "docid": "c699ede2caeb5953decc55d8e42c2741", "text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.", "title": "" }, { "docid": "7b104b14b4219ecc2d1d141fbf0e707b", "text": "As hospitals throughout Europe are striving exploit advantages of IT and network technologies, electronic medical records systems are starting to replace paper based archives. This paper suggests and describes an add-on service to electronic medical record systems that will help regular patients in getting insight to their diagnoses and medical record. The add-on service is based annotating polysemous and foreign terms with WordNet synsets. By exploiting the way that relationships between synsets are structured and described in WordNet, it is shown how patients can get interactive opportunities to generalize and understand their personal records.", "title": "" } ]
scidocsrr
d50acf9be1a941c1f0a710d6effa381e
HPC Containers in Use
[ { "docid": "0d95f43ba40942b83e5f118b01ebf923", "text": "Containers are a lightweight virtualization method for running multiple isolated Linux systems under a common host operating system. Container-based computing is revolutionizing the way applications are developed and deployed. A new ecosystem has emerged around the Docker platform to enable container based computing. However, this revolution has yet to reach the HPC community. In this paper, we provide background on Linux Containers and Docker, and how they can be of value to the scientific and HPC community. We will explain some of the use cases that motivate the need for user defined images and the uses of Docker. We will describe early work in deploying and integrating Docker into an HPC environment, and some of the pitfalls and challenges we encountered. We will discuss some of the security implications of using Docker and how we have addressed those for a shared user system typical of HPC centers. We will also provide performance measurements to illustrate the low overhead of containers. While our early work has been on cluster-based/CS-series systems, we will describe some preliminary assessment of supporting Docker on Cray XC series supercomputers, and a potential partnership with Cray to explore the feasibility and approaches to using Docker on large systems. Keywords-Docker; User Defined Images; containers; HPC systems", "title": "" }, { "docid": "7f06370a81e7749970cd0359c5b5f993", "text": "The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.", "title": "" } ]
[ { "docid": "67dfda9049212916f402c968ac17d980", "text": "A large number of online health communities exist today, helping millions of people with social support during difficult phases of their lives when they suffer from serious diseases. Interactions between members in these communities contain discussions on practical problems faced by people during their illness such as depression, side-effects of medications, etc and answers to those problems provided by other members. Analyzing these interactions can be helpful in getting crucial information about the community such as dominant health issues, identifying sentimental effects of interactions on individual members and identifying influential members. In this paper, we analyze user messages of an online cancer support community, Cancer Survivors Network (CSN), to identity the two types of social support present in them: emotional support and informational support. We model the task as a binary classification problem. We use several generic and novel domain-specific features. Experimental results show that we achieve high classification performance. We, then, use the classifier to predict the type of support in CSN messages and analyze the posting behaviors of regular members and influential members in CSN in terms of the type of support they provide in their messages. We find that influential members generally provide more emotional support as compared to regular members in CSN.", "title": "" }, { "docid": "5e5c2619ea525ef77cbdaabb6a21366f", "text": "Data profiling is an information analysis technique on data stored inside database. Data profiling purpose is to ensure data quality by detecting whether the data in the data source compiles with the established business rules. Profiling could be performed using multiple analysis techniques depending on the data element to be analyzed. The analysis process also influenced by the data profiling tool being used. This paper describes tehniques of profiling analysis using open-source tool OpenRefine. The method used in this paper is case study method, using data retrieved from BPOM Agency website for checking commodity traditional medicine permits. Data attributes that became the main concern of this paper is Nomor Ijin Edar (NIE / distribution permit number) and registrar company name. The result of this research were suggestions to improve data quality on NIE and company name, which consists of data cleansing and improvement to business process and applications.", "title": "" }, { "docid": "7b92f0b05bed5340d3036c50bdd137aa", "text": "Information that is stored in an encrypted format is, by definition, usually not amenable to statistical analysis or machine learning methods. In this paper we present detailed analysis of coordinate and accelerated gradient descent algorithms which are capable of fitting least squares and penalised ridge regression models, using data encrypted under a fully homomorphic encryption scheme. Gradient descent is shown to dominate in terms of encrypted computational speed, and theoretical results are proven to give parameter bounds which ensure correctness of decryption. The characteristics of encrypted computation are empirically shown to favour a non-standard acceleration technique. 
This demonstrates the possibility of approximating conventional statistical regression methods using encrypted data without compromising privacy.", "title": "" }, { "docid": "9c9e1458740337c7b074710297a386a8", "text": "Seed dormancy is an innate seed property that defines the environmental conditions in which the seed is able to germinate. It is determined by genetics with a substantial environmental influence which is mediated, at least in part, by the plant hormones abscisic acid and gibberellins. Not only is the dormancy status influenced by the seed maturation environment, it is also continuously changing with time following shedding in a manner determined by the ambient environment. As dormancy is present throughout the higher plants in all major climatic regions, adaptation has resulted in divergent responses to the environment. Through this adaptation, germination is timed to avoid unfavourable weather for subsequent plant establishment and reproductive growth. In this review, we present an integrated view of the evolution, molecular genetics, physiology, biochemistry, ecology and modelling of seed dormancy mechanisms and their control of germination. We argue that adaptation has taken place on a theme rather than via fundamentally different paths and identify similarities underlying the extensive diversity in the dormancy response to the environment that controls germination.", "title": "" }, { "docid": "f008e38cd63db0e4cf90705cc5e8860e", "text": "6  Abstract— The purpose of this paper is to propose a MATLAB/ Simulink simulators for PV cell/module/array based on the Two-diode model of a PV cell.This model is known to have better accuracy at low irradiance levels which allows for more accurate prediction of PV systems performance.To reduce computational time , the input parameters are reduced as the values of Rs and Rp are estimated by an efficient iteration method. Furthermore ,all of the inputs to the simulators are information available on a standard PV module datasheet. The present paper present first abrief introduction to the behavior and functioning of a PV device and write the basic equation of the two-diode model,without the intention of providing an indepth analysis of the photovoltaic phenomena and the semicondutor physics. The introduction on PV devices is followed by the modeling and simulation of PV cell/PV module/PV array, which is the main subject of this paper. A MATLAB Simulik based simulation study of PV cell/PV module/PV array is carried out and presented .The simulation model makes use of the two-diode model basic circuit equations of PV solar cell, taking the effect of sunlight irradiance and cell temperature into consideration on the output current I-V characteristic and output power P-V characteristic . A particular typical 50W solar panel was used for model evaluation. The simulation results , compared with points taken directly from the data sheet and curves pubblished by the manufacturers, show excellent correspondance to the model.", "title": "" }, { "docid": "a9be6d3f45b0d8df850865a33e46df6b", "text": "A fuzzy number intuitionistic fuzzy set (FNIFS) is a generalization of intuitionistic fuzzy set. The fundamental characteristic of FNIFS is that the values of its membership function and non-membership function are trigonometric fuzzy numbers rather than exact numbers. 
In this paper, we define some operational laws of fuzzy number intuitionistic fuzzy numbers, and, based on these operational laws, develop some new arithmetic aggregation operators, such as the fuzzy number intuitionistic fuzzy weighted averaging (FIFWA) operator, the fuzzy number intuitionistic fuzzy ordered weighted averaging (FIFOWA) operator and the fuzzy number intuitionistic fuzzy hybrid aggregation (FIFHA) operator for aggregating fuzzy number intuitionistic fuzzy information. Furthermore, we give an application of the FIFHA operator to multiple attribute decision making based on fuzzy number intuitionistic fuzzy information. Finally, an illustrative example is given to verify the developed approach.", "title": "" }, { "docid": "26fb170d41ba099b92a6ea41d057a049", "text": "Although \"mental models\" are of central importance to system dynamics research and practice, the field has yet to develop an unambiguous and agreed upon definition of them. To begin to address this problem, existing definitions and descriptions of mental models in system dynamics and several literatures related to cognitive science were reviewed and compared. Available definitions were found to be overly brief, general, and vague, and different authors were found to markedly disagree on the basic characteristics of mental models. Based on this review, we conc luded that in order to reduce the amount of confusion in the literature, the mental models concept should be \"unbundled\" and the term \"mental models\" should be used more narrowly. To initiate a dialogue through which the system dynamics community might achieve a shared understanding of mental models, we proposed a new definition of \"mental models of dynamic systems\" accompanied by an extended annotation that explains the definitional choices made and suggests terms for other cognitive structures left undefined by narrowing the mental model concept. Suggestions for future research that could improve the field's ability to further define mental models are discussed. 3 A difficulty for those who want to understand or to appraise mental models is that their proponents seem to have somewhat different views. Although the phrase \"mental models\" is ubiquitous in the literature, there are surprisingly few explicit definitions of them.", "title": "" }, { "docid": "02ea5b61b22d5af1b9362ca46ead0dea", "text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.", "title": "" }, { "docid": "94da7ecfe2267092953780b03c6ecd55", "text": "Low-power design has become a key technology for battery-power biomedical devices in Wireless Body Area Network. In order to meet the requirement of low-power dissipation for electrocardiogram related applications, a down-sampling QRS complex detection algorithm is proposed. Based on Wavelet Transform (WT), this letter characterizes the energy distribution of QRS complex corresponding to the frequency band of WT. Then this letter details for the first time the process of down-sampled filter design, and presents the time and frequency response of the filter. The algorithm is evaluated in fixed point on MIT-BIH and QT database. 
Compared with other existing results, our work reduces the power dissipation by 23%, 61%, and 72% for 1×, 2×, and 3× down-sampling rates, respectively, while maintaining almost constant detection performance.", "title": "" }, { "docid": "a380ee9ea523d1a3a09afcf2fb01a70d", "text": "Back-translation has become a commonly employed heuristic for semi-supervised neural machine translation. The technique is both straightforward to apply and has led to state-of-the-art results. In this work, we offer a principled interpretation of back-translation as approximate inference in a generative model of bitext and show how the standard implementation of back-translation corresponds to a single iteration of the wake-sleep algorithm in our proposed model. Moreover, this interpretation suggests a natural iterative generalization, which we demonstrate leads to further improvement of up to 1.6 BLEU.", "title": "" }, { "docid": "59e815b174f443160785d6f687a2ca1e", "text": "We introduce Rovables, a miniature robot that can move freely on unmodified clothing. The robots are held in place by magnetic wheels, and can climb vertically. The robots are untethered and have an onboard battery, microcontroller, and wireless communications. They also contain a low-power localization system that uses wheel encoders and an IMU, allowing Rovables to perform limited autonomous navigation on the body. In the technical evaluations, we found that Rovables can operate continuously for 45 minutes and can carry up to 1.5 N. We propose an interaction space for mobile on-body devices spanning sensing, actuation, and interfaces, and develop application scenarios in that space. Our applications include on-body sensing, modular displays, tactile feedback, and interactive clothing and jewelry.", "title": "" }, { "docid": "06d43135f6f086bc14fe897fc44ac74d", "text": "Given the growing importance of social media platforms for companies' marketing activities, this article aims to provide an initial orientation on data protection law for companies that operate, or intend to operate, in this environment. Particular attention is given to the social media platform Facebook.", "title": "" }, { "docid": "a725138a18728b8499cdb006328a44d0", "text": "This paper presents a wideband directional bridge with a range of operating frequencies from 300 kHz to 13.5 GHz. The original topology of the directional bridge was designed using multilayer printed circuit board (PCB) technology, with the top layer built on the laminated microwave dielectric Rogers RO4350; surface-mounted (SMD) components are used as the resistive elements. The circuit is designed for a nominal coupling of 16 dB and an insertion loss of 1.6 dB.", "title": "" }, { "docid": "c0d646e248f240681e36113bf0ea41a3", "text": "Existing methods for multi-domain image-to-image translation (or generation) attempt to directly map an input image (or a random vector) to an image in one of the output domains. However, most existing methods have limited scalability and robustness, since they require building independent models for each pair of domains in question. This leads to two significant shortcomings: (1) the need to train an exponential number of pairwise models, and (2) the inability to leverage data from other domains when training a particular pairwise mapping. Inspired by recent work on module networks [2], this paper proposes ModularGAN for multi-domain image generation and image-to-image translation.
ModularGAN consists of several reusable and composable modules that carry out different functions (e.g., encoding, decoding, transformations). These modules can be trained simultaneously, leveraging data from all domains, and then combined to construct specific GAN networks at test time, according to the specific image translation task. This leads to ModularGAN’s superior flexibility in generating (or translating to) an image in any desired domain. Experimental results demonstrate that our model not only presents compelling perceptual results but also outperforms state-of-the-art methods on multi-domain facial attribute transfer.", "title": "" }, { "docid": "ba2c4cd490998d5a89099c57bb3a0c8e", "text": "The number of cycles for each external memory access in Single Instruction Multiple Data (SIMD) processors is heavily affected by the access pattern, such as aligned, unaligned, or strided. We developed a high-performance dynamic on-chip memory-allocation method for SIMD processors by considering the memory access pattern as well as the access frequency. The access pattern and the access count for an array in a loop are determined by both code analysis and profiling, which are performed on a developed compiler framework. This framework not only conducts dynamic on-chip memory allocation but also generates optimized code for a target processor. The proposed allocation method has been tested with several multimedia benchmarks including motion estimation, 2-D discrete cosine transform, and MPEG2 encoder programs.", "title": "" }, { "docid": "86dfbb8dc8682f975ccb3cfce75eac3a", "text": "BACKGROUND\nAlthough many precautions have been introduced into early burn management, post burn contractures are still significant problems in burn patients. In this study, a form of Z-plasty in combination with a relaxing incision was used for the correction of contractures.\n\n\nMETHODS\nPreoperatively, a Z-advancement rotation flap combined with a relaxing incision was drawn on the contracture line. The relaxing incision created a rhomboid-like skin defect. Afterwards, both limbs of the Z flap were incised. After preparation of the flaps, advancement and rotation were performed in order to cover the rhomboid defect. Besides the subcutaneous tissue, the skin edges were closely approximated with sutures.\n\n\nRESULTS\nThis study included sixteen patients treated successfully with this flap. It was used without encountering any major complications such as infection, hematoma, flap loss, suture dehiscence or flap necrosis. All rotated and advanced flaps healed uneventfully. In all but one patient, effective contracture release was achieved by means of one or two Z-plasties. In one patient suffering from a severe left upper extremity contracture, a little residual contracture remained due to inadequate release.\n\n\nCONCLUSION\nFor mild contractures, this type of Z-plasty offers a new option for the correction of post burn contractures that is safe, simple and effective.", "title": "" }, { "docid": "6073d07e5e6a05cbaa84ab8cd734bd12", "text": "Microblogging websites, e.g. Twitter and Sina Weibo, have become a popular platform for socializing and sharing information in recent years. Spammers have also discovered this new opportunity to unfairly overpower normal users with unsolicited content, namely social spam. While it is intuitive for everyone to follow legitimate users, recent studies show that both legitimate users and spammers follow spammers for different reasons.
Evidence of users seeking out spammers on purpose is also observed. We regard this behavior as useful information for spammer detection. In this paper, we approach the problem of spammer detection by leveraging the \"carefulness\" of users, which indicates how careful a user is when she is about to follow a potential spammer. We propose a framework to measure the carefulness, and develop a supervised learning algorithm to estimate it based on known spammers and legitimate users. We then illustrate how spammer detection can be improved with the aid of the proposed measure. An evaluation on a real dataset with millions of users and an online test are performed on Sina Weibo. The results show that our approach indeed captures the carefulness, and that it is effective in detecting spammers. In addition, we find that the proposed measure is also beneficial for other applications, e.g. link prediction.", "title": "" }, { "docid": "5377e95300eef7496648b67749652988", "text": "This paper introduces SDF-TAR: a real-time SLAM system based on volumetric registration in RGB-D data. While the camera is tracked online on the GPU, the most recently estimated poses are jointly refined on the CPU. We perform registration by aligning the data in limited-extent volumes anchored at salient 3D locations. This strategy permits efficient tracking on the GPU. Furthermore, the small memory load of the partial volumes allows for pose refinement to be done concurrently on the CPU. This refinement is performed over batches of a fixed number of frames, which are jointly optimized until the next batch becomes available. Thus drift is reduced during online operation, eliminating the need for any posterior processing. Evaluating on two public benchmarks, we demonstrate improved rotational motion estimation and higher reconstruction precision than related methods.", "title": "" }, { "docid": "4f7fdd852f520f6928eeb69b3d0d1632", "text": "Hadoop MapReduce is a popular framework for distributed storage and processing of large datasets and is used for big data analytics. It has various configuration parameters which play an important role in deciding the performance, i.e., the execution time, of a given big data processing job. Default values of these parameters do not result in good performance and therefore it is important to tune them. However, there is inherent difficulty in tuning the parameters due to two important reasons: first, the parameter search space is large, and second, there are cross-parameter interactions. Hence, there is a need for a dimensionality-free method which can automatically tune the configuration parameters by taking into account the cross-parameter dependencies. In this paper, we propose a novel Hadoop parameter tuning methodology, based on a noisy gradient algorithm known as the simultaneous perturbation stochastic approximation (SPSA). The SPSA algorithm tunes the selected parameters by directly observing the performance of the Hadoop MapReduce system. The approach followed is independent of parameter dimensions and requires only 2 observations per iteration while tuning. We demonstrate the effectiveness of our methodology in achieving good performance on popular Hadoop benchmarks, namely Grep, Bigram, Inverted Index, Word Co-occurrence and Terasort. Our method, when tested on a 25-node Hadoop cluster, shows a 45-66% decrease in the execution time of Hadoop jobs on average when compared to prior methods.
Further, our experiments also indicate that the parameters tuned by our method are resilient to changes in the number of cluster nodes, which makes our method suitable for optimizing Hadoop when it is provided as a service on the cloud.", "title": "" }, { "docid": "2d0c5f6be15408d4814b22d28b1541af", "text": "OBJECTIVE\nOur previous study found that circulating microRNAs (miRNA, or miR) -122, -140-3p, -720, -2861, and -3149 are significantly elevated during the early stage of acute coronary syndrome (ACS). This study was conducted to determine the origin of these elevated plasma miRNAs in ACS.\n\n\nMETHODS\nqRT-PCR was performed to detect the expression profiles of these 5 miRNAs in liver, spleen, lung, kidney, brain, skeletal muscles, and heart. To determine their origins, these miRNAs were detected in the myocardium of acute myocardial infarction (AMI), as well as in platelets and peripheral blood mononuclear cells (PBMCs, including monocytes, circulating endothelial cells (CECs) and lymphocytes) of the AMI pigs and ACS patients.\n\n\nRESULTS\nMiR-122 was specifically expressed in liver, and miR-140-3p, -720, -2861, and -3149 were highly expressed in heart. Compared with the sham pigs, miR-122 was highly expressed in the border zone of the ischemic myocardium in the AMI pigs without ventricular fibrillation (P < 0.01), miR-122 and -720 were decreased in platelets of the AMI pigs, and miR-122, -140-3p, -720, -2861, and -3149 were increased in PBMCs of the AMI pigs (all P < 0.05). Compared with the non-ACS patients, platelet miR-720 was decreased and PBMC miR-122, -140-3p, -720, -2861, and -3149 were increased in the ACS patients (all P < 0.01). Furthermore, PBMC miR-122, -720, and -3149 were increased in the AMI patients compared with the unstable angina (UA) patients (all P < 0.05). Further origin identification revealed that the expression levels of miR-122 in CECs and lymphocytes, miR-140-3p and -2861 in monocytes and CECs, miR-720 in monocytes, and miR-3149 in CECs were greatly up-regulated in the ACS patients compared with the non-ACS patients, and were also higher in the AMI patients than in the UA patients, except for miR-122 in CECs (all P < 0.05).\n\n\nCONCLUSION\nThe elevated plasma miR-122, -140-3p, -720, -2861, and -3149 in the ACS patients mainly originated from CECs and monocytes.", "title": "" } ]
scidocsrr